Dataset schema (field: type, with observed value ranges):
forum_id: string (length 9 to 20)
forum_title: string (length 3 to 179)
forum_authors: sequence (length 0 to 82)
forum_abstract: string (length 1 to 3.52k)
forum_keywords: sequence (length 1 to 29)
forum_decision: string (22 classes)
forum_pdf_url: string (length 39 to 50)
forum_url: string (length 41 to 52)
venue: string (46 classes)
year: date (2013-01-01 00:00:00 to 2025-01-01 00:00:00)
reviews: sequence
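The records below follow this schema. As an illustrative sketch only (the file name and JSON-lines storage format are assumptions, not part of the dataset), one way to parse records of this shape in Python:

```python
import json

def parse_record(line: str) -> dict:
    """Parse one JSON-lines record shaped like the schema above.

    Each entry of reviews["structured_content_str"] is itself a
    JSON-encoded string, so it is decoded a second time here.
    """
    record = json.loads(line)
    reviews = record.get("reviews") or {}
    raw_notes = reviews.get("structured_content_str", [])
    record["parsed_notes"] = [json.loads(s) for s in raw_notes]
    return record

if __name__ == "__main__":
    # hypothetical file path; adjust to wherever the dump is stored
    with open("openreview_forums.jsonl", encoding="utf-8") as fh:
        records = [parse_record(ln) for ln in fh if ln.strip()]
    print(f"loaded {len(records)} forum records")
```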
zvwD5wKQDUM8kw3Zin4N
Embedding Entity Pairs through Observed Relations for Knowledge Base Completion
[ "Dirk Weissenborn" ]
In this work we present a novel approach for the utilization of observed relations between entity pairs in the task of triple argument prediction. The approach is based on representing observations in a shared, continuous vector space of structured relations and text. Results on a recent benchmark dataset demonstrate that the new model is superior to existing sparse feature models. In combination with state-of-the-art models, we achieve substantial improvements when observed relations are available.
[ "observed relations", "entity pairs", "knowledge base completion", "work", "novel", "utilization", "task", "triple argument prediction", "observations", "continuous vector space" ]
https://openreview.net/pdf?id=zvwD5wKQDUM8kw3Zin4N
https://openreview.net/forum?id=zvwD5wKQDUM8kw3Zin4N
ICLR.cc/2016/workshop
2016
{ "note_id": [ "D1VXgNqE1f5jEJ1zfElz", "nx924YJApH7lP3z2iomE", "jZ9XGgwgZFnlBG2XfznM", "MwVkzK1Zziqxwkg1t7QP", "zvw0RZ8kptM8kw3ZinBL" ], "note_type": [ "review", "review", "comment", "comment", "review" ], "note_created": [ 1457649537129, 1457639079371, 1457688421734, 1457639659249, 1458177208183 ], "note_signatures": [ [ "ICLR.cc/2016/workshop/paper/83/reviewer/10" ], [ "ICLR.cc/2016/workshop/paper/83/reviewer/12" ], [ "~Dirk_Weissenborn1" ], [ "~Dirk_Weissenborn1" ], [ "ICLR.cc/2016/workshop/paper/83/reviewer/11" ] ], "structured_content_str": [ "{\"title\": \"Clearly-written paper, but rather incremental\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"This paper aims to develop better models for knowledge base completion.\\n\\nIt proposes to represent relations in vector space, and use the dot product of the vectors of two relation r and r' to capture their similarity which are both observed for an entity pair (s, o). This model is directly compared with model N of (Riedel et al, 2013), which uses one single parameter for each relation pair (r, r').\", \"the_authors_also_add_two_weighting_schemes_in_scoring\": [\"either uniform or a softmax function over all dot product scores.\", \"Experiments on FB15k-237 show that their model outperforms model N, either as a single model, or combined with two other latent feature models (D and E).\", \"In general, I think this paper is clearly-written, but the model is kind of too incremental - substituting a parameter w_{r, r'} with the inner product of two vecotrs is a natural and intuitive idea.\", \"I don't totally understand that why selective weighting approach makes sense - since the scoring function is already a sum of scores, why do we need to add another weight proportional to that score?\", \"The notions of \\\"latent feature models\\\" and \\\"observed feature models\\\" seem vague to me.\", \"Table 1: people usually don't use \\\\odot to represent \\\"dot product\\\".\"], \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"A new method that demonstrates improvements in predicting relationships within knowledge databases.\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"The authors introduce a new method for embedding latent features in a model for automatically predicting triplet relationships within knowledge databases. The authors demonstrate substantial improvement in terms of the MRR (mean reciprocal rank) and Hits@10 on the FB15k-237 corpus.\", \"pros\": \"The gains appear to be substantial on the single data set they evaluated.\", \"cons\": \"The authors could provide more details about the model and training methods. 
Additionally, the authors could provide experiments across additional corpora to demonstrate robustness in their results.\", \"confidence\": \"1: The reviewer's evaluation is an educated guess\"}", "{\"title\": \"Response to comments of reviewer 10\", \"comment\": \"We thank reviewer 10 for the review, and want to clarify one of his/her questions.\\n\\n\\\"I don't totally understand that why selective weighting approach makes sense - since the scoring function is already a sum of scores, why do we need to add another weight proportional to that score?\\\"\\n\\nGiven a triple (r,s,o), in cases where (s,o) co-occur in many textual mentions that are noise or have nothing to do with r, the triple might receive a low score using a uniform weighting approach, even if there is one very indicative textual mention for r. That is why we believe a weighting that selects only the maximum score (or a soft version via softmax weighting) should be useful.\"}", "{\"title\": \"Response to comments of reviewer 12\", \"comment\": \"Thank you very much for your review.\\nUnfortunately, we could not include additional details about training and model because of the very limited number of allowed pages.\"}", "{\"title\": \"Good writing but not fully convincing\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"This paper propose an approach to utilize relation embedding for automatic knowledge base completion. The novel part of this approach is to use a normalized weight for the similarity between two relation embedding. This work can be viewed as a further extension of (Riedel et al., 2013).\\n\\nThere is one part in the experiment, which I think is confusing. In the column t of both MRR and HITS@10, why adding O_s to D+E actually hurts the performance comparing to D+E+O_u? Based on my understanding, Model O_s is expected to work better than Model O_u. Since utilizing embeddings is the claimed contribution of this paper, I think it definitely needs further investigation, other than simply said it is caused by overfitting.\\n\\nOverall, I think the contribution of this paper is incremental and the experiment is not fully convincing about the benefit of using relation embedding\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
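Illustrative sketch for the record above (not the authors' code): the abstract and reviewer 10's summary describe scoring a candidate relation for an entity pair via dot products with the relations observed for that pair, weighted either uniformly or by a softmax over the scores. A minimal numpy version of that scoring idea, with all names and dimensions assumed:

```python
import numpy as np

def score_candidate(r_query: np.ndarray,
                    observed: np.ndarray,
                    weighting: str = "uniform") -> float:
    """Score a candidate relation embedding against the embeddings of
    relations already observed for the same entity pair (s, o).

    observed has shape (k, d): one row per observed relation.
    'uniform' averages the dot products; 'softmax' weights each dot
    product by a softmax over the scores (the selective weighting
    discussed in the reviews).
    """
    sims = observed @ r_query                      # (k,) dot products
    if weighting == "uniform":
        return float(sims.mean())
    w = np.exp(sims - sims.max())
    w /= w.sum()
    return float(w @ sims)

rng = np.random.default_rng(0)
r = rng.normal(size=16)
obs = rng.normal(size=(3, 16))                     # e.g. textual mentions of (s, o)
print(score_candidate(r, obs, "uniform"), score_candidate(r, obs, "softmax"))
```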
xnrAg7jmLF1m7RyVi3vG
Learning Genomic Representations to Predict Clinical Outcomes in Cancer
[ "Safoora Yousefi", "Congzheng Song", "Nelson Nauata", "Lee Cooper" ]
Genomics are rapidly transforming medical practice and basic biomedical research, providing insights into disease mechanisms and improving therapeutic strategies, particularly in cancer. The ability to predict the future course of a patient's disease from high-dimensional genomic profiling will be essential in realizing the promise of genomic medicine, but presents significant challenges for state-of-the-art survival analysis methods. In this abstract we present an investigation in learning genomic representations with neural networks to predict patient survival in cancer. We demonstrate the advantages of this approach over existing survival analysis methods using brain tumor data.
[ "genomic representations", "cancer", "clinical outcomes", "survival analysis methods", "cancer genomics", "medical practice", "basic biomedical research", "insights", "disease mechanisms", "therapeutic strategies" ]
https://openreview.net/pdf?id=xnrAg7jmLF1m7RyVi3vG
https://openreview.net/forum?id=xnrAg7jmLF1m7RyVi3vG
ICLR.cc/2016/workshop
2016
{ "note_id": [ "VAVwDZO3Ksx0Wk76TAJV", "lx9AwomGZT2OVPy8CvVK", "p8jrZkZlJUnQVOGWfpkp" ], "note_type": [ "review", "review", "review" ], "note_created": [ 1457628319467, 1458139204671, 1458373431105 ], "note_signatures": [ [ "ICLR.cc/2016/workshop/paper/190/reviewer/11" ], [ "ICLR.cc/2016/workshop/paper/190/reviewer/12" ], [ "ICLR.cc/2016/workshop/paper/190/reviewer/10" ] ], "structured_content_str": [ "{\"title\": \"This paper proposes a method for survival prediction from genomic data, by using the neural network to maximize the Cox proportional hazards likelihood function of time-to-event data. The proposed method is superior to some other state-of-the-art survival analysis algorithms.\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"Pros:\\n\\nThis empirical results look quite promising.\", \"cons\": \"1) As the author mentioned in introduction that the proposed problem is able to take care of 'large p small N', it is worth to compare the proposed method to sparse learning / compressed sensing methods, for example, L1 norm regularized algorithm.\\n\\n2) The experiment setting needs further clarification. In Figure 1.b) it says that 'The error bars and shaded areas indicate standard deviation of CI over 10 cross validation sets' and in section 3.3 it says that 'we sampled %70 of the data set 10times without replacement to have 10 different training sets.'\\n\\n3) There is a typo in section 3.1 'Figure ??'.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Nice Applications Paper, Light on Details\", \"rating\": \"9: Top 15% of accepted papers, strong accept\", \"review\": \"This abstract describes an interesting use of a neural network: discovering an appropriate set of features for a Cox proportional hazards model, with applications to cancer genomics. The idea is to estimate survival of a cancer patient from genetic data. The challenge for this is that many of the data are effectively censored -- we do not know how long currently living patients will survive. This is a well-studied likelihood and the essential proposal is to use the last layer of the neural network to form the linear features used for survival prediction. This last statement is a bit more of an inference than I would like, as the abstract is light on details. There are various architectural choices that seem sensibly handled, and I commend the authors for systematically tuning hyperparameters with Bayesian optimization. I was slightly surprised to see layerwise pre-training used on this model; this approach has gone somewhat out of fashion and it would be interesting if this offered improvements over directly training a multi-layer perceptron. Overall, I think this abstract is well over the bar for acceptance.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Novel application of neural nets to an important problem with strong empirical results\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"review\": \"The authors describe a straightforward application of modern neural network methods to survival prediction: they replace the linear effects component of a Cox proportional hazards model with a feedforward neural net; or alternatively, the stick a Cox partial likelihood loss function on top of a standard feedforward neural net. 
The authors demonstrate that they have a mature understanding of modern techniques, which is not always the case among folks pushing deep learning into new application areas (demonstrated most by their use of Bayesian optimization, well done!).\\n\\nThe ReLU neural net clobbers both a classic Cox proportional hazards model with a Elastic Net-style regularization AND the survival analysis analogue of a random forest (often the most daunting foe of neural nets in empirical comparisons). The results are especially remarkable since they were obtained on a small data set (only 658 samples total, with 183 inputs). The deep learning community should celebrate empirical victories of this type (in new domains and problems, done by people who are applying the methods properly).\", \"some_questions_and_comments\": [\"The figure reference at the bottom of Page 2 is broken.\", \"The \\\"%NN\\\" format for numerical percents is non-standard. I suggest the authors change those to \\\"NN%.\\\"\", \"Some discussion of how performance varied with model architecture would be useful. Did 2-layers beat 1-layer and 3-layers?\", \"Unsupervised pretraining isn't typically used these days, especially for shallow neural nets. Did it, in fact, help?\", \"The authors might want to mention and cite which framework (if any) they used to implement their neural nets.\"], \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
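Illustrative sketch for the record above (not the authors' implementation): the reviews describe placing a Cox partial-likelihood loss on top of a neural network's outputs. A minimal numpy version of that loss, assuming no tied event times:

```python
import numpy as np

def cox_neg_log_partial_likelihood(risk_scores, times, events):
    """Negative Cox partial log-likelihood (no special tie handling).

    risk_scores: (n,) outputs of any model, e.g. the last layer of a
                 neural net standing in for the linear term of a Cox model.
    times:       (n,) observed times (event or censoring).
    events:      (n,) 1 if the event was observed, 0 if censored.
    """
    risk_scores = np.asarray(risk_scores, dtype=float)
    order = np.argsort(-np.asarray(times))          # descending time
    h = risk_scores[order]
    e = np.asarray(events, dtype=float)[order]
    # log of the cumulative sum of exp(h), i.e. the log-sum over each risk set
    log_risk = np.logaddexp.accumulate(h)
    return float(-np.sum(e * (h - log_risk)))

# toy check: higher risk scores for earlier events give a finite, small loss
print(cox_neg_log_partial_likelihood([2.0, 1.0, 0.0], [1.0, 2.0, 3.0], [1, 1, 0]))
```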
E8VEwRW9Ji31v0m2iDwv
Stuck in a What? Adventures in Weight Space
[ "Zachary C. Lipton" ]
Deep learning researchers commonly suggest that converged models are stuck in local minima. More recently, some researchers observed that under reasonable assumptions, the vast majority of critical points are saddle points, not true minima. Both descriptions suggest that weights converge around a point in weight space, be it a local optimum or merely a critical point. However, it's possible that neither interpretation is accurate. As neural networks are typically over-complete, it's easy to show the existence of vast continuous regions through weight space with equal loss. In this paper, we build on recent work empirically characterizing the error surfaces of neural networks. We analyze training paths through weight space, presenting evidence that apparent convergence of loss does not correspond to weights arriving at critical points, but instead to large movements through flat regions of weight space. While it's trivial to show that neural network error surfaces are globally non-convex, we show that error surfaces are also locally non-convex, even after breaking symmetry with a random initialization and also after partial training.
[ "weight space", "adventures", "critical points", "neural networks", "error surfaces", "converged models", "local minima", "researchers", "reasonable assumptions" ]
https://openreview.net/pdf?id=E8VEwRW9Ji31v0m2iDwv
https://openreview.net/forum?id=E8VEwRW9Ji31v0m2iDwv
ICLR.cc/2016/workshop
2016
{ "note_id": [ "1Wv06PjM0cMnPB1oinX1", "81DG6LXXvu6O2Pl0UVM2", "OM0QpBZzDip57ZJjtNAw" ], "note_type": [ "review", "review", "review" ], "note_created": [ 1457550206901, 1457543322169, 1458120525600 ], "note_signatures": [ [ "ICLR.cc/2016/workshop/paper/177/reviewer/11" ], [ "ICLR.cc/2016/workshop/paper/177/reviewer/10" ], [ "ICLR.cc/2016/workshop/paper/177/reviewer/12" ] ], "structured_content_str": [ "{\"title\": \"Mostly already known\", \"rating\": \"3: Clear rejection\", \"review\": \"\\\"...descriptions suggest that weights converge around a point in weight space, be it a local optima or merely a critical point. However, it's possible that neither interpretation is accurate.\\\"\\nIndeed those descriptions are not accurate, but this is already well-known and described in the Deep Learning textbook:\", \"http\": \"//www.deeplearningbook.org/version-2016-02-17/contents/optimization.html\\n\\n\\nThe paper is too misleading and takes too much credit for previously known ideas for me to endorse as it is.\\nThe following ideas from the paper are new findings and I encourage you to develop them further for a future\", \"submission\": \"\\\"A small number of principal components explains most of the variance along a training\\ntrajectory.\\\"\\n\\\"All pairs of solutions after a fixed number of epochs appear to be roughly the same euclidean\\ndistance from the origin and from each other. This is true even with identical\\ninitializations, and pretraining before cloning\\\"\\nFrom the experiments in this paper, it's hard to tell whether this effect is real, or if it is an artifact of the simplicity of the MNIST dataset.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Presents preliminary experiments with few clear results as yet\", \"rating\": \"3: Clear rejection\", \"review\": \"\", \"summary\": \"This paper presents several qualitative experiments aimed at understanding the path taken by SGD through weight space.\", \"major_comments\": \"\\u201cFlat region of weight space\\u201d: At several points it is claimed that models don\\u2019t arrive at a critical point but instead a flat region of weight space. Yet a truly flat region is, of course, a critical point or manifold of critical points; and nearly flat regions of weight space are common next to saddle points (see, eg, Saxe et al. ICLR2014). Hence if solutions do enter a flat region, this is not evidence for or against the local minima or saddle point hypotheses.\\n\\nThe observation that different networks can be led to very different solutions by reordering input examples is not surprising given the analyses in eg, Baldi & Hornik, 1989 or Saxe et al., 2014. The main point is that the many symmetries in a deep network lead to a manifold of global minima. This is an infinite set of critical points which all attain equal error. Hence, due to noise (such as reordering input samples), solutions can wander along this manifold\\u2014all global minima are equally good\\u2014and there is no pressure to settle on one particular optimal solution over another. As a simple example, take a deep linear network with just one neuron per layer, y = a*b*x, where a and b are scalar weights, and with just one input example {y=1, x=1}. The manifold of global minima is the hyperbola on which a*b=1. 
The euclidean distances between different solutions can thus clearly be arbitrarily large (one solution is a=1/10, b = 10; another is a=10, b=1/10), and as large as the euclidean distance to the origin.\\n\\nSome results in this paper are already known. For instance, Saxe et al. ICLR 2014 showed that gradient descent learning trajectories are nonlinear. Indeed, Goodfellow et al. ICLR2014 also found that gradient descent does not take the straight line path from initialization to eventual solution.\\n\\nAdditional experimental details are necessary to evaluate the paper. What loss function is optimized? What exact initialization is used? These are centrally important to evaluating the results here. In particular, a small random initialization will perform differently from a large-norm initialization.\\n\\nThere are some conceptual confusions. At several points it is claimed that \\u201cafter symmetry is broken\\u201d the error surface is still nonconvex. Finding a particular initial condition for which symmetry is broken does not make a minimization problem convex. The non convexity of a minimization problem can influence gradient dynamics even away from critical points, for example, it can induce long nearly flat plateaus.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Not enough content or results to merit publication\", \"rating\": \"3: Clear rejection\", \"review\": \"The submission presents an experimental setup for analyzing the successful gradient-based optimization and performance of networks with large numbers of parameters. They propose to train a convolutional network on MNIST and analyze the gradient descent paths through weight space. The trajectories are compared and evaluated using PCA.\\n\\nThis is very similar to the approach taken by Goodfellow et al, and it is difficult to see any new contributions of this submission. The results are mostly well-known at this point, although there is certainly room for further research in this area. The demonstration of divergence during training because of shuffled inputs is interesting but not surprising. There are no new visualizations or qualitative results, and the quantitative results are limited to 2 numbers (the variance explained by the top 2 and top 10 principal components) which are meaningless without more extensive comparison and analysis.\\n\\nThe submission could be a white paper to justify some further research, but it does not have enough substance or novelty to be in the ICLR workshop.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
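Illustrative sketch for the record above (generic, not the paper's code): the paper and its reviews discuss how much of the variance along a training trajectory through weight space is explained by a few principal components. A minimal way to compute that quantity from saved weight snapshots:

```python
import numpy as np

def trajectory_variance_explained(snapshots: np.ndarray, k: int = 2) -> float:
    """Fraction of variance along a training trajectory captured by the
    top-k principal components.

    snapshots: (T, D) matrix whose rows are flattened weight vectors
    saved at successive training steps.
    """
    centered = snapshots - snapshots.mean(axis=0, keepdims=True)
    # squared singular values are proportional to per-component variance
    s = np.linalg.svd(centered, compute_uv=False)
    return float((s[:k] ** 2).sum() / (s ** 2).sum())

rng = np.random.default_rng(0)
# fake trajectory: mostly a drift along one direction plus small noise
direction = rng.normal(size=1000)
traj = np.outer(np.linspace(0, 1, 50), direction) + 0.01 * rng.normal(size=(50, 1000))
print(trajectory_variance_explained(traj, k=2))   # close to 1.0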
mO9m42yWgSj1gPZ3UlGA
Rectified Factor Networks for Biclustering
[ "Djork-Arné Clevert", "Thomas Unterthiner", "Sepp Hochreiter" ]
Biclustering is evolving into one of the major tools for analyzing large datasets given as a matrix of samples times features. Biclustering has been successfully applied in life sciences, e.g. for drug design, and in e-commerce, e.g. for internet retailing or recommender systems. FABIA is one of the most successful biclustering methods, which has excelled in different projects and is used by companies like Janssen, Bayer, or Zalando. FABIA is a generative model that represents each bicluster by two sparse membership vectors: one for the samples and one for the features. However, FABIA is restricted to about 20 code units because of the high computational complexity of computing the posterior. Furthermore, code units are sometimes insufficiently decorrelated. Sample membership is difficult to determine because vectors do not have exact zero entries and can have both large positive and large negative values. We propose to use the recently introduced unsupervised Deep Learning approach Rectified Factor Networks (RFNs) to overcome the drawbacks of FABIA. RFNs efficiently construct very sparse, non-linear, high-dimensional representations of the input via their posterior means. RFN learning is a generalized alternating minimization algorithm based on the posterior regularization method which enforces non-negative and normalized posterior means. Each code unit represents a bicluster, where samples for which the code unit is active belong to the bicluster and features that have activating weights to the code unit belong to the bicluster. On 400 benchmark datasets with artificially implanted biclusters, RFN significantly outperformed 13 other biclustering competitors including FABIA. In biclustering experiments on three gene expression datasets with known clusters that were determined by separate measurements, RFN biclustering was significantly better than the other 13 methods on two of the datasets and placed second on the third.
[ "fabia", "bicluster", "factor networks", "biclustering", "samples", "features", "code unit", "rectified factor networks", "major tools", "large datasets" ]
https://openreview.net/pdf?id=mO9m42yWgSj1gPZ3UlGA
https://openreview.net/forum?id=mO9m42yWgSj1gPZ3UlGA
ICLR.cc/2016/workshop
2016
{ "note_id": [ "MwVkz1XAyCqxwkg1t7QQ", "gZ9Bk8158UAPowrRUAZW", "XL9yKq64ZiXB8D1RUGmE" ], "note_type": [ "review", "review", "review" ], "note_created": [ 1457638912745, 1458220428119, 1457643143391 ], "note_signatures": [ [ "~Peter_Sadowski1" ], [ "ICLR.cc/2016/workshop/paper/128/reviewer/10" ], [ "~Jimei_Yang1" ] ], "structured_content_str": [ "{\"title\": \"Interesting algorithm with great experiments.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"review\": \"This paper uses an unsupervised generative model proposed at NIPS 2015 as the basis of a new biclustering algorithm that alleviates the scalability limitations of similar algorithms. The new algorithm demonstrates excellent performance in an experimental comparison against 13 other biclustering algorithms on 4 simulated data sets and 3 real data sets.\\n\\nThe paper is clear and of high quality. Algorithms like these are widely used in bioinformatics, so this novel work could have high impact if it becomes a standard tool for analyzing gene-expression data.\", \"pros\": [\"Clear presentation.\", \"Large, high-quality experimental comparison to 13 other methods, on both simulated and real data.\"], \"cons\": [\"None\"], \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"This work present a probabilistic method for biclustering problems. The paper is well written and the empirical evalaution is exntensive.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"review\": \"Bi-clustering algorithms are relevant in many real applications. In this work, the authors address the problem by means of Rectified Factor Networks, that they us to formulate the problem in terms of a sparse latent variable model.\\n\\nThe paper is well written, novel, and the experimental section is extensive and convincing. I think that this work should be presented in the workshop sessions on ICLR.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"A solid work\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"review\": \"This paper formulates the problem of biclustering as a sparse latent variable model where one entry of latent factors represents one bicluster, and solved it by using a recently proposed Rectified Factor Networks (RFNs). Experiments were carried out on two simulation setups and three real datasets with comparisons against 13 algorithms.\\n\\n+A principled solution to biclustering.\\n+Solid results with improvements over previous arts.\\n+Well written and easy to read\\n\\n-\\\"Laplacian prior on the parameters\\\". It will be better to present more details and to discuss how this prior influences the results (including the dropout).\\n-Some presentation issues. Is there a reference to QSTAR? What is full name of IBD? \\\"Du to\\\" --> \\\"Due to\\\"?\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
XL9vWQ4glHXB8D1RUGVj
A Neural Architecture for Representing and Reasoning about Spatial Relationships
[ "Eric Weiss", "Brian Cheung", "Bruno Olshausen" ]
We explore a new architecture for representing spatial information in neural networks. The method binds object information to position via element-wise multiplication of complex-valued vectors. This approach extends Holographic Reduced Representations by providing additional tools for processing and manipulating spatial information. In many cases these computations can be performed very efficiently through application of the convolution theorem. Experiments demonstrate excellent performance on a visuo-spatial reasoning task as well as on a 2D maze navigation task.
[ "neural architecture", "spatial relationships", "spatial information", "new architecture", "neural networks", "object information", "position", "multiplication", "vectors", "holographic" ]
https://openreview.net/pdf?id=XL9vWQ4glHXB8D1RUGVj
https://openreview.net/forum?id=XL9vWQ4glHXB8D1RUGVj
ICLR.cc/2016/workshop
2016
{ "note_id": [ "D1VXGDpQxc5jEJ1zfEkZ", "91EnOB0NmSkRlNvXUV1N", "ZY9AAqBvoS5Pk8ELfEPz" ], "note_type": [ "review", "review", "review" ], "note_created": [ 1457678146954, 1458070915642, 1457675733076 ], "note_signatures": [ [ "ICLR.cc/2016/workshop/paper/181/reviewer/10" ], [ "ICLR.cc/2016/workshop/paper/181/reviewer/11" ], [ "ICLR.cc/2016/workshop/paper/181/reviewer/12" ] ], "structured_content_str": [ "{\"title\": \"An interesting idea that should be further developed\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"The paper presents an interesting application of HRRs to spatial location encoding. This approach appears novel and possibly a fruitful direction. The current work cursorily presents the idea, and two demonstrations. In its current form, the work lacks sufficient detail to justify the direction over other approaches and/or sufficient quantitative findings to justify the direction over other approaches.\\n\\nAs a workshop submission, I believe that preliminary ideas are valid contributions. However, this submission could use a bit more development to communicate the idea and the demonstrations presented.\\n\\nSome questions/comments:\\nI could use some additional explanation as to why location and identity should be combined through complex multiplication, while multiple glances should be combined through complex addition.\\n\\nWhy in your formulation is \\\"c\\\" complex, instead of real?\\n\\nWhy is the dimensionality of c the same as r? Is this a limitation of your approach? Are there more general formulations that would decouple the representational power of these two factors?\", \"pros\": [\"Novel idea\", \"Interesting, unsolved problem\"], \"cons\": [\"preliminary experiments without detail or comparisons\", \"needs more exposition to convey model set up and choices\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Holographic representations to encode spatial relationships\", \"rating\": \"3: Clear rejection\", \"review\": [\"This paper considers the problem of encoding spatial relationship in a fixed-size representation, and leverage the idea of HRR. The paper gives two application use-cases: MNIST and path planning.\", \"Hereafter I give how I see the pros and cons of this paper.\", \"The idea of using HRR is interesting\", \"I like the path planning application.\", \"spatial relationship that can be integrated in neural nets is certainly useful\", \"The novelty does not look very high. Considering the prior works on HRR and spatial relationships, this looks like a straightforward application paper.\", \"The literature review is minimalist. In particular, many works in computer vision have addressed the problem of encode spatial relationship. This has also been done with complex vectors, for instance the recent paper by Bursuc and Tolias in the journal Computer Vision and Image Understanding: ``Rotation and translation covariant match kernels for image retrieval'', which also exploits the convolution theorem (as in HRR anyway). As a result, it is not clear to me if there is anything novel in this submission. The worse is probably that CNN or its spatial extensions (like transformers networks) are not even mentioned, see my last point below.\", \"Poor experiments compared to what is normally done in papers dealing with images. 
There is not evaluation nor any comparison to any reasonable concurrent method, like a CNN architecture: CNN representations are state of the art, and thanks to convolution they also implicitly encode the translation invariance. They are now routinely used in computer vision system. It is very surprising that these papers are not cited/mentioned given the massive impact they had in the previous years.\", \"In my opinion details are missing for the idea to be reproducible.\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Review for \\\"A Neural Architecture for Representing and Reasoning about Spatial Relationships\\\"\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"The paper presents a method for binding information to spatial locations in frequency space based on Holographic Reduced Representations, allowing set of (data, location) tuples to be encoded in a single complex vector of fixed size. The operation is differentiable, and can be inserted into neural networks to endow them with the ability to perform spatial reasoning. The approach is validated by experiments on two toy problems: identifying MNIST digits based on spatial queries, and performing value iteration for path planning.\\n\\nOver the past year we have seen many approaches for inserting spatial attention into neural network models such as soft and hard attention over discrete locations, and differentiable forms of spatial attention as seen for instance in DRAW (Gregor et al, 2015) and Spatial Transformer Networks (Jaderberg et al, 2015). In light of this prior work, I believe that the frequency-domain based approach for spatial reasoning put forth by this paper is indeed novel and interesting; however the paper in its current form does not do a good job of explaining its relationship and significance with respect to these existing methods.\\n\\nMy major complaints about the paper revolve around the quality of its experiments and in the clarity of exposition describing them. For the first experiment: what, concretely, are the inputs and outputs of the network? What is the network architecture? How exactly are the spatial queries encoded and passed as input to the network? How does the network specify its glimpse locations, and in what format does it receive information about its glimpses? How is it trained? How well do baseline methods perform on this task and dataset? As written, the experiments do not contain enough information to highlight the strength of the proposed method.\\n\\nThe details of the second experiment are mostly left as an exercise to the reader, and the only experimental result is a single qualitative example.\\n\\nPROS\\n- The underlying idea has is interesting and has merit\\n\\nCONS\\n- Relationship to prior work is not clearly discussed\\n- Numerous details are missing from experiments\\n- The proposed approach is not tested against any baseline methods\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
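Illustrative sketch for the record above (generic HRR-style binding, not the paper's exact encoding): the abstract binds object identity to position by element-wise multiplication of complex vectors and superposes bound pairs by addition; unbinding uses the complex conjugate. A minimal numpy demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 512

def unit_phasor(dim: int) -> np.ndarray:
    """Random complex vector with unit-magnitude entries (a common
    HRR-style choice); binding is element-wise multiplication and
    unbinding uses the conjugate."""
    return np.exp(1j * rng.uniform(0.0, 2 * np.pi, size=dim))

# object identities and position codes
dog, cat = unit_phasor(D), unit_phasor(D)
pos_a, pos_b = unit_phasor(D), unit_phasor(D)

# bind each object to its position, then superpose the pairs by addition
scene = dog * pos_a + cat * pos_b

# query: "what is at position a?"  unbind with the conjugate of pos_a
retrieved = scene * np.conj(pos_a)

def similarity(u, v):
    return float(np.real(np.vdot(u, v)) / len(u))

print("match dog:", similarity(retrieved, dog))   # near 1
print("match cat:", similarity(retrieved, cat))   # near 0
```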
0YrNNxQkGIGJ7gK5tRwG
Initializing Entity Representations in Relational Models
[ "Teng Long", "Ryan Lowe", "Jackie Cheung", "Doina Precup" ]
Recent work in learning vector-space embeddings for multi-relational data has focused on combining relational information derived from knowledge bases with distributional information derived from large text corpora. We propose a simple trick that leverages the descriptions of entities or phrases available in lexical resources, in conjunction with distributional semantics, in order to derive a better initialization for training relational models. Applying this trick to the TransE model results in faster convergence of the entity representations, and achieves small improvements on Freebase for raw mean rank. More surprisingly, it results in significant new state-of-the-art performances on the WordNet dataset, decreasing the mean rank from the previous best 212 to 51. We find that there is a trade-off between improving the mean rank and the hits@10 with this approach. This illustrates that much remains to be understood regarding performance improvements in relational models.
[ "entity representations", "relational models", "mean rank", "embeddings", "data", "relational information", "knowledge bases", "distributional information", "large text corpora" ]
https://openreview.net/pdf?id=0YrNNxQkGIGJ7gK5tRwG
https://openreview.net/forum?id=0YrNNxQkGIGJ7gK5tRwG
ICLR.cc/2016/workshop
2016
{ "note_id": [ "VAV39KvWmix0Wk76TAy0", "lx93g8LwBi2OVPy8Cvyg" ], "note_type": [ "review", "review" ], "note_created": [ 1457115433536, 1456627184622 ], "note_signatures": [ [ "~Marc_Baroni1" ], [ "ICLR.cc/2016/workshop/paper/118/reviewer/10" ] ], "structured_content_str": [ "{\"title\": \"Potentially promising research line, not too innovative, and requring better evaluation\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"This paper proposes to sum the distributional vectors of the words that occur in the WordNet?FreeBase glosses of an entity to derive a new distributional representation of the entity. Entity representations derived in this way are then used as initialization vectors in the TransE model, that generalizes relationships between entities in a knowledge base. The resulting approach obtains competitive results, in particular on a WordNet-based benchmark.\\n\\nThe paper is generally clear, the approach is simple and elegant, and potentially very useful, since it might address the important issue of how to assign distributional representations to out-of-vocabulary words. At the same time, as the authors recognize, the approach is not novel. Indeed, they should cite references going much further back than Chen et al 2014--at the very least, Hinrich Schuetze's seminal 90s work.\\n\\nGiven that the crucial advantage of the proposed approach is to extend coverage of distributional models, I found it disappointing that the authors did not frame the evaluation in terms of coverage. They mention in the introduction that 50-80% of the entities in the benchmark are not in the word2vec and GloVe spaces. How does the proposed model perform on tuples involving these entities? How do they compare, with respect to these entities, to models that are initialized with plain word2vec/GloVe vectors, restricted to the in-vocabulary entities only? (with random initializations for the oov entities).\\n\\nMore generally, given that models directly initialized with corpus-trained w2v/GloVe vectors for the entities are the most obvious comparison points, results should have been reported for the latter also with GloVe, and also for WordNet. The latter comparison seems crucial, since WordNet is where the proposed model shines.\", \"minor_points\": \"I would have liked a short description of TransD, to get a sense of how much simpler the proposed approach really is.\\n\\nAlso, the difference between the filtered and raw setups should be explained.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"No contribution.\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"This paper proposes a way of initializing vectors of entities from their definitions present in different kinds of resources. For a given entity its averages the vectors of the words presents in the entity description and assigns it as the initialized entity description. This has been done several times in the past work, and does not present any interesting insight. Although, the authors present improvements on some task, still the work has no insight to contribute to the community.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
wVqzL1ypocG0qV7mtLqm
Alternative structures for character-level RNNs
[ "Piotr Bojanowski", "Armand Joulin", "Tomas Mikolov" ]
Recurrent neural networks are convenient and efficient models for learning patterns in sequential data. However, when applied to signals with very low cardinality such as character-level language modeling, they suffer from several problems. In order to successfully model longer-term dependencies, the hidden layer needs to be large, which in turn implies high computational cost. Moreover, the accuracy of these models is significantly lower than that of baseline word-level models. We propose two structural modifications of the classic RNN LM architecture. The first one consists of conditioning the RNN on both character-level and word-level information. The other one uses the recent history to condition the computation of the output probability. We evaluate the performance of the two proposed modifications on multi-lingual data. The experiments show that both modifications can improve upon the basic RNN architecture, which is even more visible in cases when the input and output signals are represented by single bits. These findings suggest that more research needs to be done to develop a general RNN architecture that would perform optimally across a wide range of tasks.
[ "models", "modifications", "alternative structures", "rnns alternative structures", "convenient", "efficient models", "patterns", "sequential data", "signals" ]
https://openreview.net/pdf?id=wVqzL1ypocG0qV7mtLqm
https://openreview.net/forum?id=wVqzL1ypocG0qV7mtLqm
ICLR.cc/2016/workshop
2016
{ "note_id": [ "nx9DJNYBNU7lP3z2iopJ", "jZ9yl678XhnlBG2Xfzjq" ], "note_type": [ "review", "review" ], "note_created": [ 1458081514046, 1458149299724 ], "note_signatures": [ [ "ICLR.cc/2016/workshop/paper/158/reviewer/11" ], [ "ICLR.cc/2016/workshop/paper/158/reviewer/10" ] ], "structured_content_str": [ "{\"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"This paper explores two augmentations to simple character RNN language models. Closing the performance gap between character and word based language models is an obvious and important research goal. While the aims of this paper interesting, at this point it looks more like a work in progress with a number of significant gaps, particularly in the evaluation.\", \"general_points\": [\"Given that simple RNNs struggle to capture long range dependencies, would it not be better to do this study with LSTMs? The assumption that the conclusions drawn for simple RNNs will hold for LSTMs seems flawed.\", \"Equation 3 is not a correct likelihood function if the vocabularies of the two mixture components are not equal, which they are not in this instance. Section 5 implies that this function is used to calculate the BPC evaluation metric. This would result in overly optimistic BPC scores for the Mixed model.\", \"The conditional output RNN is an interesting way to get direct ngram context into the model. It might be worth noting the similarity to the Multiplicative RNN of Sutskever 2011, where that model uses a conditional tensor contraction for the transition weights. There are many other ways to incorporate ngram conditioning, for example using direct ngram features on the input or feeding more than one character to the input layer. It would be informative to see how these other, easy to implement, approaches compare to the proposed approach.\", \"The experimental evaluation is let down by poor choices of data sets. Firstly the datasets are all rather small compared to those usually used for language model evaluation. The processing of the PTB data set is particularly quirky, i.e. lowercased, punctuation removed, and <UNK> symbols for rare words (I assume predicted as < U N K > !). It is not clear why this processing is a good choice for a data set for evaluating a character LM. While the Europarl data set (at least v7, it looks like you used v1?) is a good choice for a multilingual corpus, training on just 60k sentences seems a bit unambitious. Also, if the sentences were randomly selected for test this could mean that test sentences came from the same documents as training sentences? This is not desirable for a LM evaluation.\", \"Where did the estimate of '4 times the average entropy for OOVs' come from? This seems a bit random. It would be easy to build a character ngram/RNN estimate for singletons. This practice used to be standard for open vocabulary language models.\", \"When reporting BPC metrics on text it is useful to report the performance of standard compression algorithms such as PAQ. These almost always significantly beat ngram and RNN models.\"], \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Review\", \"rating\": \"7: Good paper, accept\", \"review\": \"This paper introduces two model extensions to improve character level recurrent neural network language models. The authors evaluate their approaches on a multilingual language modeling benchmark along with the standard Penn Tree Bank Corpus. 
Evaluation uses only entropy rather than including the language model in a downstream task but that's okay for a paper of this scope. The paper is clearly written and definitely a sufficient contribution for the workshop track it would be really nice to see how well these methods can improve and more sophisticated recurrent architecture like gru or lstm units. On the PTB Corpus it would be nice to include a state-of-the-art or standard n-gram model to use as a reference point for the reported results.\\n\\nThe conditioning on words model is an interesting approach. It's unfortunate that such a small word level vocabulary is used with this model. It seems like the small vocabulary restriction is due to the fact that the word level model is jointly trained along with the character models. An alternative approach might be to use as input features the hidden representations from a word level recurrent model already trained when building the Character level language model. I don't have a good sense for how much joint training of both models matters. \\n\\nWhen conditioning on recent history the authors might think about the NLP context trick of conditioning on a bag of words or bag of characters instead of considering only 10 grams. This would allow for a broader context coverage without expanding the feature dimension too much\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
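Illustrative sketch for the record above (one plausible reading, heavily simplified and not the paper's exact wiring): the first proposed modification conditions the character-level RNN on word-level information. A single recurrent step whose input concatenates a character embedding with a word-context embedding:

```python
import numpy as np

rng = np.random.default_rng(0)
CHAR_DIM, WORD_DIM, HIDDEN = 16, 32, 64

# parameters of one simple-RNN step whose input is the concatenation of a
# character embedding and a word-context embedding
W_in = rng.normal(scale=0.1, size=(HIDDEN, CHAR_DIM + WORD_DIM))
W_h = rng.normal(scale=0.1, size=(HIDDEN, HIDDEN))
b = np.zeros(HIDDEN)

def rnn_step(h_prev, char_vec, word_context_vec):
    """One recurrent step conditioned on both the current character and a
    representation of the most recently completed word."""
    x = np.concatenate([char_vec, word_context_vec])
    return np.tanh(W_in @ x + W_h @ h_prev + b)

h = np.zeros(HIDDEN)
h = rnn_step(h, rng.normal(size=CHAR_DIM), rng.normal(size=WORD_DIM))
print(h.shape)
```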
BNYYGWVA1F7PwR1riED4
A Minimalistic Approach to Sum-Product Network Learning for Real Applications
[ "Viktoriya Krakovna", "Moshe Looks" ]
Sum-Product Networks (SPNs) are a class of expressive yet tractable hierarchical graphical models. LearnSPN is a structure learning algorithm for SPNs that uses hierarchical co-clustering to simultaneously identify similar entities and similar features. The original LearnSPN algorithm assumes that all the variables are discrete and there is no missing data. We introduce a practical, simplified version of LearnSPN, MiniSPN, that runs faster and can handle missing data and heterogeneous features common in real applications. We demonstrate the performance of MiniSPN on standard benchmark datasets and on two datasets from Google's Knowledge Graph exhibiting high missingness rates and a mix of discrete and continuous features.
[ "real applications", "minimalistic", "network", "spns", "learnspn", "discrete", "data", "minispn", "networks", "class" ]
https://openreview.net/pdf?id=BNYYGWVA1F7PwR1riED4
https://openreview.net/forum?id=BNYYGWVA1F7PwR1riED4
ICLR.cc/2016/workshop
2016
{ "note_id": [ "5Qzq17MkrTZgXpo7i3lv", "q7kJ6X1Lvt8LEkD3t7rn", "GvNnQkKnAH1WDOmRiMYy", "ZY95vpnmnU5Pk8ELfEPA", "VAVwGLVglsx0Wk76TAQ5", "q7kJP5Jqgt8LEkD3t7qA", "oVWR5WlnKcrlgPMRsB7g" ], "note_type": [ "comment", "review", "comment", "review", "review", "comment", "comment" ], "note_created": [ 1458230941705, 1458074740794, 1458562557954, 1458074128877, 1457645145585, 1458183370332, 1458641198979 ], "note_signatures": [ [ "~antonio_vergari1" ], [ "ICLR.cc/2016/workshop/paper/8/reviewer/11" ], [ "~Viktoriya_Krakovna1" ], [ "ICLR.cc/2016/workshop/paper/8/reviewer/10" ], [ "ICLR.cc/2016/workshop/paper/8/reviewer/12" ], [ "~Viktoriya_Krakovna1" ], [ "~antonio_vergari1" ] ], "structured_content_str": [ "{\"title\": \"Limiting the number of child nodes affects the co-clustering partitioning\", \"comment\": \"We proposed to limit the number of the data matrix splits to two (i.e. number of node children while growing the network) in \\\"Simplifying, Regularizing and Strengthening Sum-Product Networks Structure Learning\\\" at ECML last year. One of the point why this improves the likelihood scores it is that it slow downs the two partitioning greedy processes, which you can view as a sort of greedy hierarchical co-clustering. That is, each following steps is less prone to incur in a local optimum. We have also demonstrated empirically that this improves structure quality making the network deeper and potentially with less edges.\\n\\nI share with the other reviewer some concerns about the lack of motivations behind algorithmic choices. I will try to write something later.\"}", "{\"title\": \"Seems practical but not particularly innovative\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"This paper presents some tweaks to an existing sum product network (SPN) structure learning algorithm, LearnSPN. The original algorithm works by recursively partitioning data either according to variables or instances. If partitioning by instances, instances are clustered and aggregated by a sum node (mixture assumption). If partitioning by variables, the aim is to find independent subsets of variables on the given set of instances, and then to aggregate with a product node (independence assumption).\", \"the_contribution_of_the_paper_is_to_make_some_minor_tweaks_to_the_learnspn_algorithm\": \"1. Discretize continuous variables using a binary threshold on the median value when performing independence tests.\\n2. Clustering is done more crudely, with a simple guess-and-check approach, where a clustering is proposed, then accepted or rejected based on a validation set.\\n3. To handle missing data in the variable partitioning step that tests pairwise independence, only consider rows that have both variables present.\\n\\nEach of these contributions is very minor, and they seem like the first thing to try rather than the result of a long thought-out consideration of the design space. The main source of interest is that the method is orders of magnitude faster than LearnSPN and performs better than an alternative \\u201cPareto\\u201d algorithm that is 1-5x slower.\\n\\nOverall, the strengths are that the method seems reasonable and practical. The weaknesses are that the originality is low, and there is not much in the way of technical contributions.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Interesting paper\", \"comment\": \"Thanks Antonio for the link to your paper! 
Our greedy approach seems similar to your SPN-B method, except that it builds deeper SPNs by alternating between two-way instance splits and two-way variable splits, while MiniSPN follows a successful split with more splits of the same type on the sub-slices. I have updated our paper to mention this.\"}", "{\"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"This paper proposes a novel algorithm for SPN, which is both faster and more general. The original SPN algorithm is a EM procedure starting from a fully connected graph and where edges are removed if the weights are zero (see Algorithm 1 in Poon and Domingos, 2011).\\n\\nAccording to the authors, the original algorithm (called LearnSPN) requires a careful hyper-parameters search in order to work. Their algorithm, MiniSPN, is avoiding this problem and is more general.\\n\\n\\\"In the variable partition step\\\" (which I think is the E-step), their algorithm do \\\"the two-way Chi-square test of independence\\\" over of non-missing variables. Overall along with other tricks, this is suppose to make the E-step more greedy (and thus faster and maybe more robust?)/\\n\\nThey compare their approach to learnSPN on 2 datasets from the Knowledge Graph People collection. On the first task, because of the missing data they don not compare to learnSPN, but as mention by the other reviewer, they could have use the same trick on LearnSPN in order to make it work on this task. On the second task, the performance of MiniSPN and LearnSPN are basically the same, but MiniSPN has an impressive speed up of 500x.\\n\\nOverall, despite some promising results (very fast with same performance), the lack of analysis of their algorithm and the impact of their contribution make the paper a bit weak. Considering this paper is about an algorithm I m surprised that they never explicitly write it down nor make a light analysis of its convergence rate, guarantees and so on. Most modifications to the original algorithm are not motivated. It is hard to understand their design choices.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Faster, but weakly motivated and analyzed\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"The authors propose an approach for learning the structure of Sum-Product Networks that extends the original LearnSPN algorithm to cases involving missing data and mixed discrete/continuous feature sets. The original LearnSPN works by recursively co-clustering variables and instances into approximately independent sets and similar instances.\\n\\nThe proposed algorithm MiniSPN differs from LearnSPN in a few ways. First by using a 2-way Chi squared test of independence, applied only to rows where the variable pairs are not missing, as their \\\"independence oracle\\\" rather than the G-test. Continuous data is also binned lazily for each batch of instances by binarizing around the median. The main contribution seems to come from simplifying the mixture model step into a greedier (presumably this is where the computational speedup comes from) approach that discards the cluster penalty prior and uses hard-EM.\\n\\nThey present two sets of experiments. The first is on data from the Knowledge Graph, and shows their algorithm to achieve better fit and runtime than the alternatives. 
I am confused by the claim that they could not test the original LearnSPN on this data because of missing data problems, as their fix for missing data seems easily applicable to LearnSPN as well. Experiments on the standard datasets show their method to be competitive with the original LearnSPN but much faster.\\n\\nI have some concerns about the work. The contributions of the work and their importance are not sufficiently analyzed / discussed. The rationale for replacing the the G-test with Chi-squared is not discussed, and likelihood ratio testing (G-test) should generally be optimal. The extension to handle missing data is very simple and applicable to any of the competing algorithms. The contribution of the work seems to be the large speed increase from the simpler clustering sub-routine and the lazy binning of continuous values.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Thanks to the reviewers for the helpful feedback!\", \"comment\": \"Reviewer 12's suggestion to apply the missing data fix to the original LearnSPN makes a lot of sense, and has unfortunately not occurred to me. Reviewer 10's suggestion to analyze convergence rates does not seem applicable, since part of the advantage of SPNs is, to quote Gens&Domingos 2013, that \\\"inference in SPNs does not involve diagnosing convergence as with MCMC or BP\\\". I definitely agree that there is room for other kinds of analysis of the algorithm, e.g. robustness analysis to see if repeated runs of the algorithm on the same data produce similar top-level splits.\\n\\nOur motivation for the simplified clustering step is that the original LearnSPN clustering step seems unnecessarily complex, involving a penalty prior, EM restarts, and hyperparameter search. It is by far the most complicated part of the algorithm in a way that seems difficult to justify, and likely the most time-consuming due to the restarts and hyperparameter tuning. Our greedy approach eliminates the need for these intricacies, and seems like a natural choice of simplification - an extension of the greedy approach used in the the top level of the LearnSPN algorithm. The chi-square test is used for infrastructure-specific implementation reasons, and does not make a significant difference, being asymptotically equivalent to the G-test. \\n\\nWhile each of our contributions is simple, together they have a large impact on the usability of the algorithm, by significantly increasing speed and making it applicable to more data sets.\", \"edit\": \"I have uploaded a lightly revised version of the paper.\"}", "{\"title\": \"Actually SPN-B allows for splits of the same type\", \"comment\": \"In SPN-B we just limit the number of sub-slices to split into, then the algorithm can always decide to let a split on one axis to be followed by another split on the same axis.\\nThe reason behind the higher likelihood scores is that if you split one slice into 10 parts at once on one dimension during the early steps you are more likely to commit some mistake than splitting it by two first, then checking if a split on the other dimension is feasible (and if the answer is no, keeping on splitting on the first dimension, and so on...)\\nThis is the same as you do, I suppose.\"}" ] }
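Illustrative sketch for the record above (not MiniSPN itself): the reviews describe a pairwise chi-square independence test applied only to rows where both variables are present, with continuous values binarized at the median. A minimal version of that check, assuming missing entries are encoded as NaN:

```python
import numpy as np
from scipy.stats import chi2_contingency

def pairwise_independent(x: np.ndarray, y: np.ndarray, alpha: float = 0.001) -> bool:
    """Chi-square independence check for two columns, using only rows
    where both values are present, with values binarized at the median."""
    mask = ~(np.isnan(x) | np.isnan(y))
    x, y = x[mask], y[mask]
    if len(x) == 0:
        return True                      # nothing observed jointly: treat as independent
    bx = (x > np.median(x)).astype(int)  # median binarization
    by = (y > np.median(y)).astype(int)
    table = np.zeros((2, 2))
    for i, j in zip(bx, by):
        table[i, j] += 1
    if (table.sum(axis=0) == 0).any() or (table.sum(axis=1) == 0).any():
        return True                      # degenerate table: no evidence of dependence
    _, p, _, _ = chi2_contingency(table)
    return p > alpha

rng = np.random.default_rng(0)
a = rng.normal(size=500)
b = a + 0.1 * rng.normal(size=500)
a[rng.random(500) < 0.2] = np.nan        # inject missingness
print(pairwise_independent(a, b), pairwise_independent(a, rng.normal(size=500)))
```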
E8VEozRYyi31v0m2iDwy
Deep Autoresolution Networks
[ "Gabriel Pereyra", "Christian Szegedy" ]
Despite the success of very deep convolutional neural networks, they currently operate at very low resolutions relative to modern cameras. Visual attention mechanisms address this by allowing models to access higher resolutions only when necessary. However, in certain cases, this higher resolution isn’t available. We show that autoresolution networks, which learn correspondences between low-resolution and high-resolution images, learn representations that improve low-resolution classification - without needing labeled high-resolution images.
[ "images", "success", "low resolutions", "cameras", "models", "access higher resolutions", "necessary" ]
https://openreview.net/pdf?id=E8VEozRYyi31v0m2iDwy
https://openreview.net/forum?id=E8VEozRYyi31v0m2iDwy
ICLR.cc/2016/workshop
2016
{ "note_id": [ "GvV45wgY5S1WDOmRiMpL", "3QxzAmOmgCp7y9wltPQx", "E8V39KGlYT31v0m2iDp0", "3Qxz3KoOWSp7y9wltPDr" ], "note_type": [ "review", "review", "comment", "review" ], "note_created": [ 1456882042576, 1457651680745, 1457046958528, 1457594324243 ], "note_signatures": [ [ "ICLR.cc/2016/workshop/paper/141/reviewer/12" ], [ "ICLR.cc/2016/workshop/paper/141/reviewer/11" ], [ "~Gabriel_Pereyra1" ], [ "ICLR.cc/2016/workshop/paper/141/reviewer/10" ] ], "structured_content_str": [ "{\"title\": \"Good idea but missing some details\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"This paper proposes deep auto-resolution networks using unsupervised context information. The model consists of two convolution towers, one with 17 convolution layers on high-resolution image patch and the other with 40 inception-style layers on low-resolution full image. The outputs of both towers are concatenated and fed into a classifier or a regressor. The classifier is used to predict whether the high-resolution image patch is in the image and the regressor is used to predict the location of the high-resolution image patch inside the image. No annotations are required to train this network and experimental results show that the 40 layer embedding tower outperforms a random initialized network on cifar10 dataset.\", \"the_strengths_of_this_paper_include\": \"1) Learning from unsupervised context info is a very interesting topic. It enables us to train very deep networks to learn representations without any annotations. \\n2) Experimental results show that the network can learn some representations and improve the classification task.\", \"i_have_the_following_concerns_about_this_paper\": \"1) There is no network architecture details in the paper. Without that information, it is hard for other people to reproduce the results. \\n2) Also, no results reported in terms of the accuracy or error for the classifier and regressor net. It would be great to know how accurate the network can predict the context info and how much training data required to learn it.\\n3) The experiments only compare random initialization and the pre-trained auto resolution network. It would be great to compare with other unsupervised network as pretrained networks. \\n\\nIn general, I think the idea of this paper is great but the presentation and experiments are not satisfactory.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Interesting idea, explore further\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"This is a nice idea, but I am not sure if it is being used to do something really novel yet, in comparison with Siamese networks or convolutional autoencoders.\\n\\nThere seem to be two ways for the network to solve the high-res-patch/low-res-image matching problem.\\nThe lazy way - downscale the patch, and compare it to small regions of the image.\\nThe clever way - look at the patch, deduce what it is, i.e. dog fur, and then check if that matches the image, i.e. is\\nit an image of a dog, or possibly a fur coat?\\n\\nIf the network is solving the problem the lazy way, then the results should be similar to what you would get using a\\nconvolutional autoencoder. If it is solving the problem the clever way, then this a great new way to unsupervised\\nlearning for images. 
The experimental evidence seems to be rather hastily put together; it is not clear yet that advantage is being taken of the difference in resolution patch-vs-image.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Model architecture and classification heatmaps\", \"comment\": \"Attached are links to the model architecture we used and qualitative examples of a trained model.\\n\\nFor the qualitative examples, the black and white square is a heat map, where white represents that the model predicted that a high-resolution patch from that location belonged to a low-resolution image (not shown). The examples with two pictures correspond to negative examples - the model should predict all black instead of all white. The results are after the model was trained on 20 million images on an internal Google dataset and the examples are all sampled from ImageNet's validation set.\\n\\nFor Cifar10, we are not aware of any similar results to compare with. Additionally, we would like to stress that the goal of our architecture is not to improve classification performance on Cifar10, this was just an easy way for us to demonstrate that our model learned useful representations.\", \"model_architecture___https\": \"//www.dropbox.com/s/9c892ka2fjwoz0c/model_architecture.pdf?dl=0\", \"classification_heatmaps___https\": \"//www.dropbox.com/s/yan30cs21n5cma4/classification_examples.html?dl=0\"}", "{\"title\": \"A reasonable but not fully-explored idea; more comparisons would be helpful.\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"The paper proposes an unsupervised learning strategy that uses high-resolution and low-resolution image correspondence as a surrogate task to initialize networks for image classification. Two towers are trained to produce a feature representation that allows a classifier to determine whether a cropped or blurred patch is a part of an original image. Tests on CIFAR-10 show that the learned representation for the low-resolution representation is a useful initialization for classification.\", \"pros\": \"Overall, this is an interesting unexplored criterion for pre-training a network (even though pre-training seems to have gone out of style).\\nOn the experiments presented, the pre-trained solution is at least better than starting from random.\", \"cons\": \"30% relative gain is a nice result, but this is against a non-pretrained-baseline. How does this compare to stronger efforts that include some form of pretraining? Considering the long list of methods that provide a gain for this benchmark, more comparisons seem important.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
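The reviews in the record above describe the pretraining task as deciding whether a high-resolution patch belongs to a given low-resolution image, using two convolutional towers whose outputs feed a binary classifier. The authors' exact architecture is only given via an external link, so the sketch below is a minimal, hypothetical stand-in: the towers are small fully connected ReLU stacks rather than the 17-layer and inception-style conv towers, the input sizes (16x16 patch, 32x32 image) and layer widths are assumptions, and no training loop is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_tower(sizes):
    """Random weights for a small fully connected tower (a stand-in for a conv tower)."""
    return [(rng.normal(0.0, 0.1, (m, n)), np.zeros(n)) for m, n in zip(sizes[:-1], sizes[1:])]

def run_tower(x, layers):
    """Flatten the input and apply a stack of ReLU layers."""
    h = x.ravel()
    for W, b in layers:
        h = np.maximum(0.0, h @ W + b)
    return h

patch_tower = init_tower([16 * 16 * 3, 256, 128])   # assumed 16x16 RGB high-resolution patch
image_tower = init_tower([32 * 32 * 3, 256, 128])   # assumed 32x32 RGB low-resolution image
W_out = rng.normal(0.0, 0.1, 256)                   # binary "does this patch belong here?" head

def match_probability(patch, image):
    """Concatenate the two tower embeddings and apply a logistic output unit."""
    z = np.concatenate([run_tower(patch, patch_tower), run_tower(image, image_tower)])
    return 1.0 / (1.0 + np.exp(-(z @ W_out)))

patch = rng.random((16, 16, 3))   # a positive training example would be cropped from `image`
image = rng.random((32, 32, 3))
print(match_probability(patch, image))
```

In the paper's setup the classifier's labels come for free from whether the patch really was cropped from the image, which is what makes the scheme unsupervised.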
OM0jvwB8jIp57ZJjtNEZ
Incorporating Nesterov Momentum into Adam
[ "Timothy Dozat" ]
This work aims to improve upon the recently proposed and rapidly popularized optimization algorithm Adam (Kingma & Ba, 2014). Adam has two main components—a momentum component and an adaptive learning rate component. However, regular momentum can be shown conceptually and empirically to be inferior to a similar algorithm known as Nesterov’s accelerated gradient (NAG). We show how to modify Adam’s momentum component to take advantage of insights from NAG, and then we present preliminary evidence suggesting that making this substitution improves the speed of convergence and the quality of the learned models.
[ "adam", "nesterov momentum", "momentum component", "nag", "work", "main", "regular momentum", "ferior" ]
https://openreview.net/pdf?id=OM0jvwB8jIp57ZJjtNEZ
https://openreview.net/forum?id=OM0jvwB8jIp57ZJjtNEZ
ICLR.cc/2016/workshop
2016
{ "note_id": [ "nx924kDvKc7lP3z2iomv", "5Qz2J1LDEsZgXpo7i32z" ], "note_type": [ "review", "review" ], "note_created": [ 1457640322723, 1457526791620 ], "note_signatures": [ [ "ICLR.cc/2016/workshop/paper/107/reviewer/10" ], [ "ICLR.cc/2016/workshop/paper/107/reviewer/11" ] ], "structured_content_str": [ "{\"title\": \"Adam with Nesterov Momentum\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"review\": [\"This paper investigates a variant of the Adam optimization algorithm, where the first moment estimate (momentum) is replaced with a Nesterov momentum. The resulting algorithm, coined Nadam, performs well on a nontrivial task (convolutional autoencoder).\", \"The paper is well written and easy to read.\", \"It should be mentioned that the proposed method is computationally slightly more expensive than Adam. However this is not a big deal for models with weight-sharing (e.g. CNNs and RNNs).\", \"The preliminary result looks promising\", \"Some suggestions/questions:\", \"Section 4 (experiment) mentions that only the learning rate is tuned for Adam/Nadam, but the hyper-parameter 'mu' is set to the atypical value of .975, which suggests it is (somewhat) tuned to this problem particular. Its value might be arbitrary, but it might be better if this was either set to its Adam default (.9) or tuned with the learning rate.\", \"It would be useful to derive an upper bound on the magnitude of the weight update. Adam's simple known upper bound is an advantage and would be nice to have for Nadam as well.\", \"Section 3 mentions \\\"It often helps to gradually increase or decrease mu over time\\\". Increasing mu_t over time makes intuitive sense (to counter the typically decreasing signal-to-noise ratio of gradients over time), but in which situations would it help to decrease mu over time? Is there published work that investigates particular schedules for mu_t?\", \"Overall a very exciting potential improvement on Adam.\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Hyper-parameter selection\", \"rating\": \"7: Good paper, accept\", \"review\": \"Inspired by the Nesterov's accelerated gradient, the paper proposes a new variant of Adam algorithm with a modified momentum term.\\nThe contribution is incremental but might be of some interest if a broader benchmarking would be provided.\", \"pros\": \"i) The proposed modification is easy to implement\\nii) Figure 1 shows an acceleration sufficient to consider the algorithm for further investigations\", \"cons\": \"i) The empirical validation is rather poor. The algorithm should be tested in situations where the original Adam shows \\nstate-of-the-art performance. Then we would better see how much we can gain with Nadam. \\nii) Figure 1 can be an artifact of the chosen hyper-parameter values. A more detailed hyper-parameter selection should be provided for Adam and Nadam. One can fix some hyper-parameters and then select two others which seem to be the most important. Then make a contour plot with x-axis: hyper-parameter1 , y-axis: hyperparameter2, z: number of epochs to reach MSE 0.01 of the provided example. Alternatively, one can run hyper-parameter optimization.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
k80kv3zjGtOYKX7ji4V7
Deep Motif: Visualizing Genomic Sequence Classifications
[ "Jack Lanchantin", "Ritambhara Singh", "Zeming Lin", "& Yanjun Qi" ]
This paper applies a deep convolutional/highway MLP framework to classify genomic sequences on the transcription factor binding site task. To make the model understandable, we propose an optimization driven strategy to extract “motifs”, or symbolic patterns which visualize the positive class learned by the network. We show that our system, Deep Motif (DeMo), extracts motifs that are similar to, and in some cases outperform the current well known motifs. In addition, we find that a deeper model consisting of multiple convolutional and highway layers can outperform a single convolutional and fully connected layer in the previous state-of-the-art.
[ "motifs", "deep motif", "deep", "mlp framework", "genomic sequences", "transcription factor", "site task", "model understandable" ]
https://openreview.net/pdf?id=k80kv3zjGtOYKX7ji4V7
https://openreview.net/forum?id=k80kv3zjGtOYKX7ji4V7
ICLR.cc/2016/workshop
2016
{ "note_id": [ "wVqJP4Z6RHG0qV7mtL9p", "Qn8Z9M7OmskB2l8pUYQ2", "gZ9vyE9gDcAPowrRUAZ3", "jZWQPLjZ2InlBG2XfzxV", "JykDR7vZnhqp6ARvt5y2", "6XponRj21hrVp0EvsE2J" ], "note_type": [ "review", "review", "comment", "comment", "comment", "comment" ], "note_created": [ 1457191005656, 1458488006819, 1458090061182, 1458598869731, 1458598801908, 1458665557568 ], "note_signatures": [ [ "ICLR.cc/2016/workshop/paper/160/reviewer/12" ], [ "~Honglak_Lee1" ], [ "~Yaroslav_Bulatov1" ], [ "~Jack_Lanchantin1" ], [ "~Jack_Lanchantin1" ], [ "~Jack_Lanchantin1" ] ], "structured_content_str": [ "{\"title\": \"This paper proposes to use a multiple convolutional layers neural network with highway MLP to classify genomic sequences on the transcription factor binding site task, and also proposed to visualize the motif by utilizing the method proposed by Simonyan et al. (2013) to have a better interpretation of what the model learns.\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"This paper proposes to use a multiple convolutional layers neural network with highway MLP to classify genomic sequences on the transcription factor binding site task, and also proposed to visualize the motif by utilizing the method proposed by Simonyan et al. (2013) to have a better interpretation of what the model learns. This paper moves a small step from Alipanahi et al. (2015) which uses a single convolutional layer for the TFBS classification task. This paper proposes to use a multiple convolutional layers neural network with highway MLP to classify genomic sequences on the transcription factor binding site task, and also proposed to visualize the motif by utilizing the method proposed by Simonyan et al. (2013) to have a better interpretation of what the model learns. This paper moves a small step from Alipanahi et al. (2015) which uses a single convolutional layer for the TFBS classification task.\\n\\nThe main novelty of this paper is to apply the recent progress of deep learning to biomedical tasks, such as CNN and Highway MLP, and achieves very good results on TFBS task. It is appreciated that the authors show that deep learning are able to provide good results in biomedical literature. However, the drawbacks of this paper are as follows:\\n\\n1. The novelty of the paper is limited. Extending the model of Alipanahi et al. (2015) to a deep model and utilizing the Highway MLP is good. However, there is no key improvement over Alipanahi et al. (2015). Hence, the novelty is not very significant. \\n\\n2. The description of the model proposed in this paper is not sufficient. As the paper focuses on the medical literature, it would be much better to provide more details of the methods, such as what is TFBS classification, motif, and the details of Equation (1). What is more, the paper did not explain what is the meaning of Figure 2(b), but only a \\\"A comparison of motifs can be seen in figure 2.\\\". What does Figure 2(b) compare? And what does the y-axis of Figure (2) mean? This might make the reader confuse. \\n\\nIn summary, I think the pros of the paper is that it applies some successful approaches in DL on biomedical tasks and achieve competitive results. 
However, the cons is that the novelty of the paper is not significant and clarity of the paper also needs to be improved.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Using deeper networks to improve transcription binding site prediction\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"This paper proposes to use deep neural networks to improve the classification performance for predicting transcription factor binding sites (TFBs). Compared to DeepBind (Alipanahi et al., 2015), the authors use a deeper network (combination of deeper convolutional layers and highway MLP layers) and show that this model improves the classification performance (measured in AUC). The paper also presents a visualization method similar to Simonyan et al. (2013). The main contribution of the paper can be summarized as: (1) improved classification performance of TFBs using deeper CNNs, and (2) a visualization method of motifs from the learned CNN model.\", \"novelty\": \"The technical novelty of the method is quite incremental. Compared to DeepBind, the main novelty is the use of deeper CNN layers and highway MLPs; the improvement of the AUC is ~1% (0.946 vs 0.931) which is moderate. In terms of visualization, the main difference is to optimize the input to maximize the output of the network via backprop (Equation 1) instead of using actual samples.\", \"clarity_and_presentation\": \"The paper is quite clearly written (given page limit), but more details would be helpful for clarification.\", \"significance\": \"Although the technical novelty is incremental, there is a reasonable (although not big) improvement in performance. The proposed visualization method (which is not technically novel) can turn out to be useful in practice. This paper could contribute to more investigation and application of deep learning in the computational biology domain, which is under-explored compared to other areas (vision, speech, etc.). \\n\\nOverall, the paper would fall on the borderline; however, given the relative paucity of deep learning papers in computational biology, the paper may be worthy of presentation at the ICLR workshop.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"modest improvement on binding-site prediction state-of-the-art by using deeper network\", \"comment\": \"Authors train a deep network on a medical task with a modest improvement on state-of-the-art.\\n\\nThe task is as follows -- there are approximately 3M worth of characters in nucleotide sequences (one of A,C,G,T) and the task is to predict locations of one of 108 bindings sites. They train 108 neural networks (one for each class), with each neural network being a binary sliding window classifier that considers 20 characters at a time and outputs yes/no decision. Since the goal is detection of the transcription site, reported metric is AUC. \\n\\nThe baseline they compare to -- DeepBind -- is a 1-hidden-layer neural network which considered 24 characters at a time and obtained average AUC of 0.931 over all classes. Proposed network is 3 convolutional layers interleaved with pooling layesr followed by 5 \\\"highway network\\\" layers.\\n\\nDescription of architecture is not sufficiently clear. 
They say \\\"depth-5 fully connected highway network after the max-pooled output of the convolutional layers\\\", which conflicts with earlier description which said they there was no pooling after the last layer. A better version would give a concise specification of all layers in order, including number of parameters in each layer.\\n\\nThere's also a section on visualization, but I have no domain expertise in visualizing TFBS motifs, so can not comment on the utility there.\\n\\nSince it sets new state of the art in a medical task by applying standard technique from deep learning literature, I'm leaning towards borderline accept, as a way to increase visibility of deep learning techniques to medical community.\"}", "{\"title\": \"Response to comment of Yaroslav Bulatov\", \"comment\": \"Thank you for the review and helpful comments! We acknowledge the clarity of the architecture description as noted. The first 2 convolutional layers contain convolutions, and a length 2 max-pooling. The third convolutional layer contains convolution and then a max-pool across the entire temporal domain. We wanted to state that there is not a length 2 max pool after the 3rd convolution because there is instead an entire-length max-pool (this architecture is reflected in figure 1).\\n\\nAs noted in the comment above, the main goal of the visualization is to show that we are able to extract visualizations through optimization which are similar to the carefully tuned JASPAR database motifs. Figure 2(b) demonstrates that we are able to find similar visualizations for the 2 TFs shown.\"}", "{\"title\": \"Response to Reviewer 12\", \"comment\": [\"We would like to thank reviewer 12 for pointing out key aspects which may be unclear to readers.\", \"Response to drawback 1:\", \"The major novelty of our approach is to propose an optimization-based strategy to extract and visualize class-specific sequence patterns (i.e. motifs) for genomic sequence analysis. This model is generic for visualizing any sequence classification tasks. To our knowledge, there have not been previous works which visualize sequence classes.\", \"We want thank the reviewer for addressing the issue of abstract clarity. Accordingly, we revised our abstract into:\", \"\\u201cThis paper applies a deep convolutional/highway MLP framework to classify genomic sequences on the transcription factor binding site task. To make the model understandable, we propose an optimization driven strategy to extract \\u201cmotifs\\u201d, or symbolic patterns which visualize the positive class learned by the network. We show that our system, Deep Motif (DeMo), extracts motifs that are similar to, and in some cases outperform the current well known motifs. In addition, we find that a deeper model consisting of multiple convolutional and highway layers can outperform a single convolutional and fully connected layer in the previous state-of-the-art.\\u201d\", \"The closest related study by Simonyan et al. (2013) visualizes class-specific image patterns. However, searching class-specific sequence patterns in a high-dimensional discrete-feature space is largely different from the image case because the lack of smoothness among alphabet letters (or genomic characters in our case).\", \"In addition, the specific model we use for TBFS prediction is much deeper than Alipanahi et al. (2015). We want to show that model improvements which have shown state-of-the-art results in NLP tasks (e.g. Zhang et. 
al 2015), can also be applied to achieve state-of-the-art results in biomedical sequence tasks.\", \"Response to drawback 2:\", \"We want to thank the reviewer for pointing this out.\", \"In the first paragraph: we explained TFBS task as: \\u201cEach different TF binds to specific transcription factor binding sites (TFBSs) on the DNA sequence to regulate cell machinery. Accurately classifying and understanding the DNA subsequences that TFs bind to will allow us to better understand the underlying biological processes and potentially influence biomedical studies of human health.\\u201d\", \"To make the whole description more clear, we have revised the second sentence in the above paragraph into the following and uploaded the revised version. ==> \\u201cWe focus on the task of accurately classifying and understanding the DNA subsequences that TFs bind to, which will allow us to better understand the underlying biological processes and potentially influence biomedical studies of human health. \\\\footnote{This task classifies whether or not there is a binding site for a particular TF of interest when given an input DNA sequence.}\\u201d\", \"We have explained the concept \\u201cmotif\\u201d in the last sentence of the first page: \\u201cmotifs, consensus sequences which define the positive binding sites for a particular TF. \\u201c\", \"For Equation(1), we have explained: \\u201cP_+(S) as the probability of the input sequence S (matrix of input length \\u00d7 4 **(i.e. alphabet size)**) being a positive TFBS computed by the softmax output of our trained model ** for a specific TF **.\\u201d In the original draft we missed the terms inside the ** symbols. We have added them in the revised version.\", \"We explained the details of Figure 2 in its caption. For the caption of Figure 2(b) it states: \\u201c(b) Comparison of DeMo motifs vs JASPAR motifs for 2 different TFs. Motifs are shown using information content in bits.\\u201d The y-axis in figure 2(b) is bits, since motifs are typically visualized using Shannon entropy. The 2 motifs we used are GATA2 (left column of 2(b)), and NRF1 (right column of 2(b)). The key point of Figure 2(b) is to show that our motifs (bottom row) are similar by visual inspection to the known JASPAR motifs (top row).\", \"For the y-axis of Figure 2(a), we wrote \\u201cDifference in AUC Scores DeMo - DeepBind\\u201d. Because of the space limit, we can not add these descriptions into the main draft. But in the revised version, we add \\u201cDifference in AUC Scores DeMo - DeepBind\\u201d in the caption of Figure 2(a).\"]}", "{\"title\": \"Response to reviewer 10\", \"comment\": \"We would like to thank reviewer 10 for the very helpful comments. As alluded to, we hope that this work will spark a broader interest in computational biology applications in the ICLR community. We feel that an important aspect of this work is to show that the methods shown useful for NLP, speech, vision, etc. can also be applied to achieve state-of-the-art results in biological tasks.\"}" ] }
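The visualization strategy debated above (Equation (1) of the paper) amounts to gradient ascent on the input sequence so as to maximize the positive-class score of a trained network. The sketch below illustrates only that loop: the linear "scorer" W is a random placeholder for the trained convolutional/highway model, and the step of renormalizing each position to a distribution over {A, C, G, T} is an assumed, illustrative choice rather than the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
L, A = 20, 4                          # motif length and alphabet size for {A, C, G, T}
W = rng.normal(0.0, 1.0, (L, A))      # placeholder for the trained network's positive-class score

def positive_score(S):
    """Toy differentiable stand-in for P_+(S); in the paper this is the trained model's output."""
    return float(np.sum(W * S))

S = np.full((L, A), 1.0 / A)          # start from a uniform soft sequence (length x alphabet)
for _ in range(200):                  # gradient ascent on the *input*, cf. Equation (1)
    S += 0.1 * W                      # gradient of the toy linear score w.r.t. S is just W here
    S = np.clip(S, 0.0, None)
    S /= S.sum(axis=1, keepdims=True) # keep each position a distribution over the alphabet

motif = "".join("ACGT"[i] for i in S.argmax(axis=1))
print(motif, positive_score(S))       # consensus string read off the optimized input
```

With a real trained model the gradient with respect to S would come from backpropagation rather than a fixed matrix, and the optimized S is what gets rendered as an information-content logo for comparison against JASPAR motifs.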
P7Vk63koAhKvjNORtJzZ
Lessons from the Rademacher Complexity for Deep Learning
[ "Jure Sokolic", "Raja Giryes", "Guillermo Sapiro", "Miguel R. D. Rodrigues" ]
Understanding the generalization properties of deep learning models is critical for successful applications, especially in the regimes where the number of training samples is limited. We study the generalization properties of deep neural networks via the empirical Rademacher complexity and show that it is easier to control the complexity of convolutional networks compared to general fully connected networks. In particular, we justify the usage of small convolutional kernels in deep networks as they lead to a better generalization error. Moreover, we propose a representation based regularization method that allows to decrease the generalization error by controlling the coherence of the representation. Experiments on the MNIST dataset support these foundations.
[ "rademacher complexity", "generalization properties", "representation", "lessons", "deep learning lessons", "deep learning", "deep learning models", "critical", "successful applications", "regimes" ]
https://openreview.net/pdf?id=P7Vk63koAhKvjNORtJzZ
https://openreview.net/forum?id=P7Vk63koAhKvjNORtJzZ
ICLR.cc/2016/workshop
2016
{ "note_id": [ "81DG5rwAAC6O2Pl0UVoQ", "k80zpw4PRHOYKX7ji4k6", "xnrQJo752I1m7RyVi32D", "0YrwD8NvXsGJ7gK5tRWJ" ], "note_type": [ "review", "comment", "comment", "review" ], "note_created": [ 1457655486603, 1458160091567, 1458160141775, 1457582541154 ], "note_signatures": [ [ "ICLR.cc/2016/workshop/paper/76/reviewer/10" ], [ "~Jure_Sokolic1" ], [ "~Jure_Sokolic1" ], [ "ICLR.cc/2016/workshop/paper/76/reviewer/12" ] ], "structured_content_str": [ "{\"title\": \"This paper studies the capacity control for Convolutional Neural Networks with bounded Frobinius norm using Rademacher Complexities. While some of the discussions are interesting, no proof is provided for the theorems presented in the paper. I accepted this if I know the authors will complete the paper.\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"This paper studies the capacity control for Convolutional Neural Networks with bounded Frobinius norm using Rademacher Complexities.\", \"pros\": \"1- Studying the capacity of CNNs is interesting and insightful. The discussions about the effect of filter size and other structural hyper-parameters on the capacity of these networks is very interesting and helpful.\\n\\n2- A regularizer that penalizes the correlation on the representation layer and the connections to Rademacher complexities are both interesting.\\n\\nCons (and suggestions):\\n\\n1- The paper is 4 page long, almost 2 pages of which are abstract, introduction, background and previous works (up to equation 5) and the last page is dedicated to references so the main part of the paper is only about one page and I think authors could have clarified the discussions and experiments much better.\\n\\n2- Two theorems are presented but without any proof or even a sketch of the proof.\\n\\n3- While the section 3 is suggesting a regularization term based on the Rademacher complexity, the experiments are not related in anyway to the discussions on section 2 and there is a bit of disconnection here.\\n\\n4- The suggested regularizer in interesting to study but calculating the gradient for this regularizer seems very time consuming. A short discussion on the computational complexity of using this regularizer could be helpful here.\\n\\n5- The discussion on the ResNet ignores the fact that ResNet uses skip-layers which is not included in this analysis.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Review response\", \"comment\": \"We would like to thank the reviewer for feedback and constructive comments. We provide answers to the Cons (and suggestion) below:\\n\\n1- Please note that the workshops contributions are limited to 3 pages + references. We understand the reviewers concern however, we felt that we need to include a proper discussion of the considered model and previous work in order to make the contribution self-contained. We plan to provide more details in the extended version of the paper.\\n\\n2- Since the proofs are relatively long we felt that there is no reasonable way to include them in this contribution, provided the space constraints. We will provide the proofs in the extended version of the paper.\\n\\n3- We felt that results in section 2 are well supported by the practical results in the literature. On the other hand, section 3 discusses a novel approach to regularization and is further supported by the experiments.\\n\\n4- This is a very important observation. 
In the practical implementation we used a stochastic gradient descent optimization algorithm and only applied the regularizer to the pairwise inner products between the points in a single batch, which is much smaller than the whole training set. That way the computational complexity does not increase drastically.\\n\\n5- This is a fair point. However, the term 2^L in the bounds provided in section 2 is due to the ReLU non-linearity. We argue that the ResNet, which also uses ReLUs will also exhibit the 2^L term in the bound because of this.\"}", "{\"title\": \"Review reponse\", \"comment\": \"We wish to thank the reviewer for the provided feedback and constructive comments. We provide the answers to the comments below:\\n\\ni) This is a very relevant observation. Indeed, a more correct statement would be: models with lower ERC are better provided that the model is sufficiently powerful to model the task at hand. However, to fully understand the trade-offs between the absolute error and the generalization error as a function of depth we would first need to have a better understanding of the underlying data model. This is beyond the goals of this work.\\n\\nii) We argue that \\u201corthogonality\\u201d of the representation (even for the points from the same class) leads to more robust representation. The method where the class membership of data points is considered seems like a very reasonable extension of the current regularizer.\"}", "{\"rating\": \"7: Good paper, accept\", \"review\": \"Authors present novel and interesting results on empirical Rademacher complexity (ERC) of deep neural networks, building on recent work of Neyshabur et al. Specifically they present a new bound on ERC for deep convolutional networks and demonstrate that their bound is tighter than that of Neyshabur et al. They also provide a deep representation based bound on ERC which is then used to motivate a novel regularization approach that aims to orthogonalize representation for distinct data samples.\", \"few_aspects_in_which_i_feel_the_paper_could_be_improved\": \"i) Authors seem to be claiming that models with lower ERC are better. This is how use of smaller kernels in CNNs in justified. However, I think a key missing component of the discussion is absolute error (and not just generalization error) for models in a given model family, this has to be taken into account. This would then perhaps explain the apparent discrepancy of why ERC increases as network depth increases and yet those models empirically do quite well.\\nii) The novel regularizer based on coherence (orthogonality) of representation is nice but a little bit counter intuitive for elements that belong to the same class (for a classification network such as ones used in MNIST). Would it make sense to consider the class membership of data points? Any insights into this would strengthen the paper I believe.\\n\\nOverall I think this is a good paper, fairly clearly written and easy to follow. Should be published and discussed.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
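The representation-based regularizer discussed in this record is described in the author response as a penalty on the pairwise inner products between the representations of points in a mini-batch. The sketch below shows one plausible form of such a coherence penalty as a standalone function; the exact normalization and weighting used in the paper may differ.

```python
import numpy as np

def coherence_penalty(H):
    """Penalize pairwise inner products (coherence) between the representations in one batch.

    H has shape (batch_size, dim). Only cross-sample terms are penalized; whether and how the
    rows are normalized first is an assumption, not taken from the paper."""
    G = H @ H.T                               # Gram matrix of pairwise inner products
    off_diag = G - np.diag(np.diag(G))        # drop the self-similarity terms
    n = H.shape[0]
    return float(np.sum(off_diag ** 2)) / (n * (n - 1))

H = np.random.default_rng(0).normal(size=(32, 64))   # hypothetical batch of hidden representations
print(coherence_penalty(H))    # would be added, with a weight, to the task loss during SGD
```

Restricting the penalty to a single mini-batch is exactly the trick the authors mention for keeping the extra cost of the regularizer small.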
81DD7ZNyxI6O2Pl0Ul5j
Feed-Forward Networks with Attention Can Solve Some Long-Term Memory Problems
[ "Colin Raffel", "Daniel P. W. Ellis" ]
We propose a simplified model of attention which is applicable to feed-forward neural networks and demonstrate that the resulting model can solve the synthetic "addition" and "multiplication" long-term memory problems for sequence lengths which are both longer and more widely varying than the best published results for these tasks.
[ "attention", "memory problems", "networks", "simplified model", "applicable", "neural networks", "demonstrate", "model", "synthetic", "addition" ]
https://openreview.net/pdf?id=81DD7ZNyxI6O2Pl0Ul5j
https://openreview.net/forum?id=81DD7ZNyxI6O2Pl0Ul5j
ICLR.cc/2016/workshop
2016
{ "note_id": [ "YW94w2X41HLknpQqIK3Q", "D1VBLkynVC5jEJ1zfER6", "YWDv81xLRcLknpQqIKjB", "vl6o1GJJpF7OYLG5in8M", "BNYzYZN7EC7PwR1riXv7", "xnrQAOKVPc1m7RyVi320", "4QnROAyoxIBYD9yOFqN7", "K1VgppYmZc28XMlNCVpx", "4Qy5VNKgBCBYD9yOFqMy" ], "note_type": [ "comment", "comment", "comment", "comment", "review", "comment", "comment", "review", "review" ], "note_created": [ 1458128973218, 1458129441190, 1459133212702, 1459133035869, 1458116854804, 1458129266086, 1459133309395, 1457652270628, 1457911242909 ], "note_signatures": [ [ "~Colin_Raffel1" ], [ "~Colin_Raffel1" ], [ "~Colin_Raffel1" ], [ "~Colin_Raffel1" ], [ "ICLR.cc/2016/workshop/paper/7/reviewer/10" ], [ "~Colin_Raffel1" ], [ "~Colin_Raffel1" ], [ "ICLR.cc/2016/workshop/paper/7/reviewer/11" ], [ "ICLR.cc/2016/workshop/paper/7/reviewer/12" ] ], "structured_content_str": [ "{\"title\": \"Thanks! Help clarifying?\", \"comment\": \"Hi, thank you for your review! I appreciate your point that saying that our approach handles long term dependencies may not be the best way to describe its behavior and capabilities. On the one hand, our approach has no notion of temporal order, but on the other hand we do consider the hidden state sequence h_t as a memory which our attention mechanism can address. This allows the output to consider different points in the memory when it is computing its output. As you suggest, the RNN is much more generic, but the idea behind this paper is to argue that for some tasks involving sequential data (such as these toy problems), order is less important than being able to refer to vastly different time instances in the input sequence. Do you have any suggestions for us to clarify this point? Based on your comments, we will take much more care to not compare it to an RNN directly, or without explicitly reiterating its drawbacks. Regarding your comment that \\\"it performs almost as well as a simple baseline that just sums all the inputs.\\\", I think there may be a misunderstanding here - the unweighted average approach does not perform nearly as well, because it was not able to solve the toy problems for all sequence lengths and was unable to solve the problems for varying sequence lengths. How can we better communicate this? Thank you again!\"}", "{\"title\": \"Thanks! We will make your proposed updates.\", \"comment\": \"Hello, thanks for your thorough review! Addressing your comments directly -\\n\\n\\\"I would have liked a more elaborate discussion of the weaknesses of the approach.\\\" - That makes sense, we will do that. Are you referring to the fact that we should be more explicit about the drawbacks of ignoring temporal order completely?\\n\\n\\\"The only other major flaw, the lack of experiments that show that the method generalises to more complicated problems is not an issue for a workshop track IMHO.\\\" Yes, we agree - we have some parallel work on real-world problems, but we decided that for a short workshop track paper it was best to keep it as simple and short as possible!\\n\\n\\\"Overall, I like the idea very much! I believe that this assumption holds for a huge variety of data sets, and that variations of the model can overcome it to some degree, e.g. by using a model that has a limited \\\"attention window\\\" spanning more than a single time step\\u2013e.g. using smaller RNNs or CNNs.\\\" Thank you! 
In fact, we have also explored your idea of using a limited \\\"attention window\\\" in our real-world problem work; of course, we omit that here because it is not needed for the toy problems.\"}", "{\"title\": \"Paper updated\", \"comment\": \"We just uploaded the camera-ready version to this submission page. We made our wording more clear about what our claims were (e.g. not temporal dependencies, but rather the ability to refer to different entries in very long sequences when computing the output). We also added a more explicit statement about how the toy problems are, in fact, easier with our model which is unaware of temporal order. Thank you again for your feedback!\"}", "{\"title\": \"Paper updated\", \"comment\": \"Hi, just letting you know that we have integrated your suggestions and corrections into the camera-ready version, which I have just uploaded on this submission page. Thanks again!\"}", "{\"title\": \"Nice workshop paper with an interesting direction\", \"rating\": \"7: Good paper, accept\", \"review\": \"The paper introduces a simplified attention mechanism by replacing the recurrent attention model with a feed forward one. It is shown that this is sufficient for some of the pathological long term problems used to evaluate the long term capabilities of RNNs. The results are convincing in the sense that performance is greatly improved and that problems targeted by the model can be solved.\\n\\nI would have liked a more elaborate discussion of the weaknesses of the approach. Clearly, cases where attention requires\\n\\nWhile this clearly limits the power of the model, as more some patterns cannot be detected. The feed forward attention assumes that an optimal attention value can be estimated by the model without the context: this is clearly not the case for a large set of data sets, such as in NLP.\\n\\nThe only other major flaw, the lack of experiments that show that the method generalises to more complicated problems is not an issue for a workshop track IMHO.\\n\\nOverall, I like the idea very much! I believe that this assumption holds for a huge variety of data sets, and that variations of the model can overcome it to some degree, e.g. by using a model that has a limited \\\"attention window\\\" spanning more than a single time step\\u2013e.g. using smaller RNNs or CNNs. Therefore I would like to see the paper accepted to the workshop track so that the authors get a chance to discuss this ongoing work with other researchers. I am confident that this work can result in a method that is generally applicable to a wide range of problems.\\n\\n\\nMinors\\n\\n- Wrong citation way, use \\\\citep and \\\\cite correctly. (e.g. \\\"and exploding gradient problem Pascanu et al. (2012);\\\"\\n- References are used as nouns\\n- Use \\u201cEquation (1)\\u201d instead of \\u201cEquation 1\\\"\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"How can we better frame our approach?\", \"comment\": \"Thanks for your review! I appreciate your point that due to the fact that independence is assumed, it is not clear to say that our model handles long-term dependencies. What we meant by this statement was that the attention mechanism can treat the hidden state sequence h_t as a memory and address any locations in that memory when computing its output. Do you have any input as to how we could better communicate that? 
We agree that the toy problems are \\\"in fact easier to model if a network is a priori oblivious to any temporal ordering of the input sequence\\\", but that is in some sense exactly the point of the model - that certain problems (such as the toy problems, but also some real world problems as we have shown in parallel work) are easier to solve when the model ignores temporal order. How can we better frame this argument? Thank you again!\"}", "{\"title\": \"Paper updated\", \"comment\": \"We just uploaded the camera-ready version to this submission page. We took care to not compare it directly with an RNN, because as you said it is not an entirely valid comparison. We also made our wording more clear about what we claim to be the capabilities and potential applications of our model. Thank you again for your feedback!\"}", "{\"title\": \"The Idea has very limited applicability and can not be considered a method for handling long-term dependencies\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"The paper proposes a simplified version of an attention mechanism. The proposed mechanism summarizes the input sequence into a context by taking a weighted sum of its elements, where the weights are computed by a feed-forward network. The resulting context is further used by the same feedforward network.\\n\\nI do not think it is correct to say that the proposed approach actually handles long-term dependencies. It does solve the considered toy tasks, but it performs almost as well as a simple baseline that just sums all the inputs. As mentioned in the paper,\\nthe information about positions of the input elements relative to each other is completely discarded by the proposed method. Thus, I do not think that it is fair to compare it with RNN, as the latter are much more generic.\\n\\nWhile the approach may be suitable for some applications, I think that the presenting it as a method to handle long-term dependencies is wrong.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"There's an issue with how the authors have framed their approach and consequently evaluated their approach.\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"Capturing long-term dependencies is difficult when a problem requires temporal positions of input symbols to be exploited. From its definition, the proposed model assumes temporal independence among the input symbols (Eq. (1) and h_t = f(x_t)), and it's not supposed to capture well, if not at all, any interesting long-term dependencies in the input sequence.\\n\\nThis is not to say that the proposed approach will not be useful at all. Rather, it should be framed differently in a very different context. In this regard, the evaluation in this paper is quite meaningless, as both of the tasks (addition and multiplication) are commutative and in fact easier to model if a network is a priori oblivious to any temporal ordering of the input sequence. \\n\\nIn short, the argument for the proposed approach is wrong, and therefore, the evaluation is rather meaningless.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
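The model debated in this record computes each hidden state independently and then collapses the sequence with a learned softmax weighting, which is why order is discarded. The sketch below shows that readout with randomly initialized weights; the dimensionalities, the tanh nonlinearities, and the two-feature "addition problem" style input are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16
W_h = rng.normal(0.0, 0.1, (2, D))   # h_t = f(x_t): per-timestep embedding, no recurrence
w_a = rng.normal(0.0, 0.1, D)        # a(h_t): scalar attention energy for each timestep

def feed_forward_attention(X):
    """Collapse a (T, 2) sequence into one context vector c = sum_t alpha_t h_t."""
    H = np.tanh(X @ W_h)                         # (T, D) hidden states, computed independently
    e = np.tanh(H @ w_a)                         # (T,) attention energies
    alpha = np.exp(e - e.max())
    alpha /= alpha.sum()                         # softmax over timesteps
    return alpha @ H                             # fixed-length summary, whatever T is

X = rng.random((5000, 2))                        # e.g. an "addition"-style input of length 5000
print(feed_forward_attention(X).shape)           # (16,) regardless of the sequence length
```

Setting alpha_t = 1/T instead of the learned softmax gives the unweighted-average baseline that the author response distinguishes from the full model.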
3QxgvRAolhp7y9wltPg8
Neural Enquirer: Learning to Query Tables in Natural Language
[ "Pengcheng Yin", "Zhengdong Lu", "Hang Li", "Ben Kao" ]
We propose Neural Enquirer — a neural network architecture for answering natural language (NL) questions given a knowledge base (KB) table. Unlike previous work on end-to-end training of semantic parsers, Neural Enquirer is fully “neuralized”: it gives distributed representations of queries and KB tables, and executes queries through a series of differentiable operations. The model can be trained with gradient descent using both end-to-end and step-by-step supervision. During training the representations of queries and the KB table are jointly optimized with the query execution logic. Our experiments show that the model can learn to execute complex NL queries on KB tables with rich structures.
[ "neural enquirer", "queries", "learning", "tables", "natural language", "kb tables", "model", "neural network architecture" ]
https://openreview.net/pdf?id=3QxgvRAolhp7y9wltPg8
https://openreview.net/forum?id=3QxgvRAolhp7y9wltPg8
ICLR.cc/2016/workshop
2016
{ "note_id": [ "r8lL3q1Jki8wknpYt548", "3Qx4ngyL0cp7y9wltPvM", "BNYVDXoYzt7PwR1riX1N" ], "note_type": [ "review", "comment", "review" ], "note_created": [ 1457486035282, 1457966679992, 1457677127498 ], "note_signatures": [ [ "~Olivier_Delalleau1", "ICLR.cc/2016/workshop/paper/82/reviewer/11" ], [ "~Pengcheng_Yin1" ], [ "ICLR.cc/2016/workshop/paper/82/reviewer/12" ] ], "structured_content_str": [ "{\"title\": \"A good workshop contribution\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"review\": \"This submission presents an end-to-end neural model with attention mechanisms that is able to answer natural language questions against a knowledge base, after being trained by gradient descent on a set of question / answer pairs. One key idea is the hierarchical combination of so-called \\\"executors\\\", each performing one step of the query.\\n\\nThis looks pretty cool and should definitely be accepted for the workshop track (interesting idea and promising results). I wonder if this model could be extended to answer questions that require combining answers from multiple rows? (ex: \\\"what is the average duration of olympic games?\\\") Right now it seems a bit limited since it can only find answers that can be found in a single row.\", \"minor_remarks\": [\"DNN_0 does not appear to be actually deep (it is made of a single tanh layer)\", \"F_T is not defined in eq. 1\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Response to Reviewer 12\", \"comment\": [\"Thank you for your insightful comments!\", \"Due to the limited space we cannot cover all the details in the extended abstract.\", \"First of all, we would like to remark that Neural Enquirer is targeting natural language (NL) question answering on tables. We provide SQL-like logical forms in the paper just to illustrate the semantics and complexity of our synthetic NL questions, and those logical forms are unknown by the model.\", \"[Why synthetic dataset]\", \"Our contribution is to explore the capabilities of an end-to-end, fully-neural system in natural language semantic parsing and symbolic query execution. As the first step, we focus on the algorithmic foundation of such an approach and conduct experiments on synthetic data as a proof-of-concept, which is a common practice adopted by previous works on neural-network-based table QA [Neelakantan et al., 2015].\", \"In our synthetic QA task, each of the 10 predicates has 3~4 NL patterns, whose combination yields reasonable coverages on representative types of questions that can be generated from the Olympic Games schema. We will release our dataset for future research.\", \"Additionally, since Neural Enquirer learns symbolic operations from training data, it requires a large training dataset. WikiTableQuestions has only 14K training examples, which is relatively small for our purpose. We are carrying out experiments on WikiTableQuestions and are getting promising results.\", \"[Comparison with SEMPRE]\", \"We train/evaluate SEMPRE using the official system and tuning scripts used in [Pasupat and Liang, 2015] (https://worksheets.codalab.org/worksheets/0xf26cd79d4d734287868923ad1067cf4c/), which is shipped with all the grammar/features and has been optimized by the authors for table QA scenario. 
This might be different with the official SEMPRE framework (https://github.com/percyliang/sempre).\", \"We find SEMPRE generates too many candidate logical forms for complex NL queries like WHERE_SUPERLATIVE and NEST, which makes it difficult to identify the correct ones and rank them to the top. This might be due to the intractable search space incurred by the flexibility of the float parsing algorithm. We show that SEMPRE handles simple queries as well as Neural Enquirer, but our model excels in answering complex queries.\", \"[Why end-to-end neural network training]\", \"We are especially interested in the capabilities of a neural system in learning symbolic operations without a formal specification in an end-to-end, data-driven fashion. Our experiments on a synthetic dataset demonstrate that Neural Enquirer is capable of learning to execute compositional NL queries up to a fairly high level of complexity.\", \"[Which queries can be handled by the approach and which cannot]\", \"Neural Enquirer learns to execute queries purely in a data driven approach, and could answer a large variety of NL questions involving compositional \\u201cselect/where/argmax/argmin\\u201d operations. But our current implementation cannot handle arithmetical queries like \\u201cwhat is the SUM of all numbers of participants\\u201d, which we leave as future work.\", \"[Misc]\", \"We will revise some of the claims we made and add more related works/references.\"]}", "{\"title\": \"This paper presents a neural network architecture that can take in natural language utterances and \\\"execute\\\" them to answer questions on tables. Preliminary experiments on a synthetic dataset are provided. The idea of learning to execute SQL-like queries on tables using a neural network is intriguing, but I think the paper suffers from several weaknesses, detailed below.\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"First, it does not clearly state what problem end-to-end neural network training of querying is solving, and presumes that it desirable. It is not clear that this approach is really a viable option for much larger and more complex queries. The paper also does not explicitly say which queries can be handled by the approach and which cannot (I doubt you can do SQL statements of arbitrary depth).\\n\\nThe synthetic dataset is not convincing from an NLP perspective, because the core difficulty in language understanding is handling the linguistic variation, of which there is presumably little of in the synthetic dataset. It's unclear whether solving this dataset will actually help with real datasets like WikiTableQuestions. Furthermore, the details of how the natural language for the QA task was generated is not clear. From Table 1, I can't tell how much linguistic variation there is, which has a strong influence on how difficult the task is.\\n\\nThe comparison with SEMPRE is unsatisfying. There are no details on how SEMPRE was used. SEMPRE is a semantic parsing framework which does not specify the features, the grammar, etc., which have a huge effect on performance. It's like saying that you used a neural network without specifying the architecture. Also, is SEMPRE not working well because it needs to consider too many hypotheses? 
Presumably if computation weren't the issue, SEMPRE would be better since the underlying function that one is trying to learn is actually a logical one, so SEMPRE would be a better fit.\", \"the_first_sentence_is_a_bit_off\": \"there is much work on question answering on knowledge bases prior to the cited works. Even without going all the way back to classic AI systems such as LUNAR and CHAT-80, it would be good to mention classic statistical semantic parsing methods (Zelle & Mooney, 1996; Zettlemoyer & Collins, 2005), etc.\", \"the_last_sentence_of_the_first_paragraph_makes_very_little_sense\": \"\\\"This approach, however, is greatly hindered by the fact that traditional semantic parsing mostly involves rule-based features and symbolic manipulation, leaving only a handful of tunable parameters to cater to the great flexibility of natural language.\\\" One might argue that traditional semantic parsing suffers from combinatorial explosion. There are many parameters in traditional semantic parsers, not just \\\"a handful\\\".\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
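The record above describes query execution as a stack of differentiable "executors" operating on an embedded table. The real model is considerably richer (it attends over fields as well as rows and passes intermediate annotations between layers), so the sketch below is only a caricature of one soft row-selection step, with random vectors standing in for the learned query and table embeddings and with an invented scoring rule.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 32
table = rng.normal(size=(8, 5, d))     # hypothetical embedded KB table: 8 rows x 5 fields
q = rng.normal(size=d)                 # distributed representation of the NL query

def executor(query_vec, table_emb, prev_row_weights):
    """One soft execution step: re-weight rows by their relevance to the query and return a
    differentiable summary of the attended rows. Scoring and shapes are illustrative only."""
    row_repr = table_emb.mean(axis=1)                 # (rows, d) crude row embeddings
    scores = row_repr @ query_vec                     # relevance of each row to the query
    w = np.exp(scores - scores.max())
    w = w * prev_row_weights                          # compose with the previous executor's result
    w /= w.sum()
    annotation = w @ row_repr                         # soft readout passed to the next layer
    return w, annotation

weights = np.full(table.shape[0], 1.0 / table.shape[0])
for _ in range(3):                                    # a small stack of executors
    weights, annotation = executor(q, table, weights)
print(weights.round(3), annotation.shape)
```

Because every step is a weighted sum, the whole stack stays differentiable, which is what allows end-to-end training from question-answer pairs without intermediate logical forms.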
wVqz42X2wfG0qV7mtLqv
Convolutional Monte Carlo Rollouts for the Game of Go
[ "Peter H Jin", "Kurt Keutzer" ]
In this work, we present a Monte Carlo tree search-based program for playing Go which uses convolutional rollouts. Our method performs MCTS in batches, explores the Monte Carlo tree using Thompson sampling and a convolutional policy network, and evaluates convnet-based rollouts on the GPU. We achieve strong win rates against an open source Go program and attain competitive results against state of the art convolutional net-based Go-playing programs.
[ "game", "monte carlo tree", "program", "work", "convolutional rollouts", "performs mcts", "batches" ]
https://openreview.net/pdf?id=wVqz42X2wfG0qV7mtLqv
https://openreview.net/forum?id=wVqz42X2wfG0qV7mtLqv
ICLR.cc/2016/workshop
2016
{ "note_id": [ "ROVz0x0zKcvnM0J1IpPo", "vlpnQpkjRS7OYLG5inAy" ], "note_type": [ "review", "review" ], "note_created": [ 1456617903505, 1457550413631 ], "note_signatures": [ [ "ICLR.cc/2016/workshop/paper/130/reviewer/11" ], [ "ICLR.cc/2016/workshop/paper/130/reviewer/12" ] ], "structured_content_str": [ "{\"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"I understand that most of this work was done before AlphaGo came out, but the submission deadline for the workshop was a few weeks later, so the new results need to be taken into account -- for example, the abstract can no longer read \\u201cattain competitive results against state of the art convolutional net-based Go-playing programs\\u201d.\\n\\nThompson sampling in the context of MCTS is not new (e.g. Bai et al NIPS\\u201913), even if I have not seen it applied to Go: if this is a key innovation, the results should have provided a direct comparison with UCT (for the same nets/rollouts).\\n\\nHeavier rollouts (with convnets) are plausible, but again, there is no direct comparison (using the same PPN and same computation budget) to the cheaper pattern-based rollouts.\\n\\nIn summary, the paper focuses on outdated performance metrics instead of scientific insight and clean comparisons of the proposed methods.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Neural Network + MCTS + Thompson sampling based approach to play Go\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"This paper is directly related to several previously published papers that use deepnets+MCTS to play Go.\\n\\nThe new idea in this paper seems to be the use of Thompson sampling during exploration. But no quantitative comparisons are presented to really understand the effect of this choice.\\n\\nBesides that, this idea has been proposed in previous papers -- \\\"Thompson Sampling Based Monte-Carlo Planning in POMDPs\\\" and \\\"Bayesian Mixture Modelling and Inference based Thompson Sampling in Monte-Carlo Tree Search\\\".\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
OM0vWYM7Eup57ZJjtNql
Neural Generative Question Answering
[ "Jun Yin", "Xin Jiang", "Zhengdong Lu", "Lifeng Shang", "Hang Li", "Xiaoming Li" ]
This paper presents an end-to-end neural network model, named Neural Generative Question Answering (genQA), that can generate answers to simple factoid questions, both in natural language. More specifically, the model is built on the encoder-decoder framework for sequence-to-sequence learning, while equipped with the ability to access an embedded knowledge-base through an attention-like mechanism. The model is trained on a corpus of question-answer pairs, with their associated triples in the given knowledge-base. Empirical study shows the proposed model can effectively deal with the language variation of the question and generate a right answer by referring to the facts in the knowledge-base. The experiment on question answering demonstrates that the proposed model can outperform the embedding-based QA model as well as the neural dialogue models trained on the same data.
[ "model", "neural generative question", "question", "neural network model", "genqa", "answers", "factoid questions", "natural language", "framework" ]
https://openreview.net/pdf?id=OM0vWYM7Eup57ZJjtNql
https://openreview.net/forum?id=OM0vWYM7Eup57ZJjtNql
ICLR.cc/2016/workshop
2016
{ "note_id": [ "VAVgPXELACx0Wk76TAE5", "lx9AXr565H2OVPy8CvVB", "ANYjP03oPSNrwlgXCqOn", "Qn8D9gOY2FkB2l8pUYMZ", "P7VOwZNEvFKvjNORtJ1O", "q7kwYM53Qt8LEkD3t7vW" ], "note_type": [ "comment", "review", "comment", "review", "review", "comment" ], "note_created": [ 1458368628470, 1458076049545, 1458368548301, 1457332148426, 1457462380518, 1458366386406 ], "note_signatures": [ [ "~Jun_Yin2" ], [ "ICLR.cc/2016/workshop/paper/91/reviewer/10" ], [ "~Jun_Yin2" ], [ "ICLR.cc/2016/workshop/paper/91/reviewer/12" ], [ "ICLR.cc/2016/workshop/paper/91/reviewer/11" ], [ "~Jun_Yin2" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer 12\", \"comment\": \"Thanks for your comment.\\n\\nK_Q, the number of candidate triples, is usually less than several hundreds in our data. In our experiment, the training of the model takes about two or three days using a single GPU, which is indeed slower than NRM model. We have plan to work on the neural QA model to answer more complex questions in the future.\"}", "{\"title\": \"Strong paper for the workshop session\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"review\": \"This paper proposes to do QA but also to learn to generate answers on top of retrieving the correct answer. The paper is strong both in the retrieval part than in the generation. The fact that the model can switch from knowledge to language words is rather neat. I think this will make a nice paper for the ICLR workshop session.\", \"few_questions\": [\"Will the dataset be released?\", \"Since not all models are generating answers, how is the accuracy computed in Section 3? By comparing the objects of the candidate triple?\", \"For the generation of answer, is the z variable controlling the switch between language and KB words learned directly end-to-end or is there a pre-training or an intermediate supervision of any sort?\"], \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Response to Reviewer 10\", \"comment\": \"Thank you for the comments.\\n \\nWe plan to make the data publicly available for research soon.\", \"about_the_evaluation_in_section_3\": \"for the models that only return triples, we check the correctness of the best matched triples. For the models that generate answering sentences, we check the correctness as well as the fluency of the generated answer.\\n \\nz, as a latent variable, is inferred automatically in end-to-end learning of model, requiring no pre-training or additional labels. In other words, it is clear whether a word in answers belongs to common vocabulary or KB vocabulary (which is question specific) , or both. If a word is only in KB vocabulary, for example, \\u201c2.29m\\u201d, the language part of the likelihood will be automatically zero, and vice versa.\"}", "{\"title\": \"Clear paper with good experimental results\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"review\": \"This paper addresses the task of generating natural language answers to simple factoid questions. It presents an end-to-end neural network model that can search and represent related factors to a natural language question in a knowledge base, and decode these factors using a recurrent neural network with the attention mechanism.\\n\\nThis paper is very clear and easy to understand. The task of generating natural language answers is new as far as I know. 
This paper addressed this task using an end-to-end neural network with trainable question interpreter, knowledge base enquirer, and answer decoder. The question interpreter and the knowledge base enquirer can be found in previous work. But it is somewhat novel to propose the answer decoder that can generate both KB words (i.e. words represent factors retrieved from a knowledge base) and common words (i.e. words used to connect KB words to form a natural language answer). The paper also validated the effectiveness of training all the three components in an end-to-end way.\\n\\nI am curious about the choice of K_Q for the knowledge base enquirer. How long does it take to train a model? It seems that it might be slow especially when K_Q is large. In addition, it would be great if the authors can extend the model to multiple factoid questions.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"A good paper on natural language answer generation\", \"rating\": \"7: Good paper, accept\", \"review\": \"This paper introduces an end-to-end neural network model, in which natural language answers to simple factoid questions can be generated. The model consists of three sub-models, namely Interpreter, Enquirer and Answerer, along with an external knowledge base. The experimental results show that the proposed framework can achieve slightly above 50% of accuracy on a specific dataset, when generating answers based on the facts in the knowledge base.\\n\\nThe paper was written clearly in most of the places. I have a few minor comments and questions here.\\n\\n1. Fig 1: it'll be better to add Q, H_Q, r_Q in the diagram to make it more clear and consistent with the text description.\\n2. Last sentence on Page 3: \\\"correct patterns\\\" means \\\"correct answers\\\" or just correct pattern/type but not necessarily correct at the lexical level?\\n3. Will the dataset collected and processed in the work be shared with the research community?\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Response to Reviewer 11\", \"comment\": \"Thank you for the comments.\\n \\nFigure 1 is a high-level description of the model. Due to the limited space, we did not include other figures about the details of each component, which can be found in our arXiv paper: http://arxiv.org/abs/1512.01337\\n.\", \"last_sentence_on_page_3\": \"\\\"correct patterns\\\" here means the fluency and relevancy of the answers, but not necessarily correctness at knowledge level.\\n \\nWe plan to make the data publicly available for research soon.\"}" ] }
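The answerer described in this record emits, at each step, either a common word or a KB word, with a latent variable z switching between the two vocabularies (the authors' response explains that z is inferred end-to-end). The sketch below shows only that mixture at a single decoding step: the vocabularies, dimensions and weight matrices are random placeholders, not the trained GenQA decoder.

```python
import numpy as np

rng = np.random.default_rng(0)
common_vocab = ["he", "is", "tall", ",", "<eos>"]
kb_vocab = ["2.29m", "1983"]            # question-specific KB words (objects of retrieved triples)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Random placeholders for the trained decoder's output weights (purely illustrative).
W_z = rng.normal(size=8)
W_common = rng.normal(size=(8, len(common_vocab)))
W_kb = rng.normal(size=(8, len(kb_vocab)))

def decode_step(state):
    """One step of the answer decoder: a latent switch z chooses between the common-word
    distribution and the KB-word distribution, giving one mixture over both vocabularies."""
    p_z = 1.0 / (1.0 + np.exp(-state @ W_z))          # P(z = "KB word" | decoder state)
    p_common = softmax(state @ W_common)
    p_kb = softmax(state @ W_kb)
    # The two halves sum to (1 - p_z) and p_z respectively, so the result is a proper distribution.
    return np.concatenate([(1.0 - p_z) * p_common, p_z * p_kb])

probs = decode_step(rng.normal(size=8))
print(dict(zip(common_vocab + kb_vocab, probs.round(3))))
```

Since a word that only exists in the KB vocabulary gets zero probability from the language half (and vice versa), marginalizing over z during training needs no extra labels, which matches the authors' explanation of how z is learned.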
wVqzjWP0JfG0qV7mtLvp
Deep Bayesian Neural Nets as Deep Matrix Gaussian Processes
[ "Christos Louizos", "Max Welling" ]
We show that by employing a distribution over random matrices, the matrix variate Gaussian~\cite{gupta1999matrix}, for the neural network parameters we can obtain a non-parametric interpretation for the hidden units after the application of the ``local reparametrization trick"~\citep{kingma2015variational}. This provides a nice duality between Bayesian neural networks and deep Gaussian Processes~\cite{damianou2012deep}, a property that was also shown by~\cite{gal2015dropout}. We show that we can borrow ideas from the Gaussian Process literature so as to exploit the non-parametric properties of such a model. We empirically verify this model on a regression task.
[ "deep matrix gaussian", "model", "distribution", "random matrices", "matrix variate", "neural network parameters", "interpretation", "hidden units" ]
https://openreview.net/pdf?id=wVqzjWP0JfG0qV7mtLvp
https://openreview.net/forum?id=wVqzjWP0JfG0qV7mtLvp
ICLR.cc/2016/workshop
2016
{ "note_id": [ "WLA6B5kBQc5zMX2Kf2LQ", "yovEYZmqYHr682gwszQv", "L7m9Bj6POhRNGwArs4Kv", "p8j4mNNM9unQVOGWfpGj", "Jy94B6xR2Uqp6ARvt5v5", "mO91jBZBLfj1gPZ3Ul92", "mOWZXDpoLtj1gPZ3Ul32" ], "note_type": [ "comment", "review", "comment", "comment", "review", "comment", "comment" ], "note_created": [ 1458743628811, 1456835781228, 1458743460095, 1457619193107, 1457645909345, 1457619290247, 1458743430231 ], "note_signatures": [ [ "~Christos_Louizos1" ], [ "ICLR.cc/2016/workshop/paper/195/reviewer/12" ], [ "~Christos_Louizos1" ], [ "~Christos_Louizos1" ], [ "ICLR.cc/2016/workshop/paper/195/reviewer/10" ], [ "~Christos_Louizos1" ], [ "~Christos_Louizos1" ] ], "structured_content_str": [ "{\"title\": \"Updated version of the workshop paper\", \"comment\": \"Please find an updated version of the workshop paper here: https://drive.google.com/file/d/0Bx3kAuASMMrnTmIzV255S3laM1k/view?usp=sharing\"}", "{\"title\": \"Review for Deep Bayesian Neural Nets as Deep Matrix Gaussian Processes\", \"rating\": \"7: Good paper, accept\", \"review\": \"The authors propose a deep Gaussian process (GP) model with a *linear* covariance function (eq. 7) and non-linear transformations between the GPs. The use of a linear covariance function connects the model to Bayesian neural networks (single layer with no non-linearities). The use of a non-linear transformation between the GPs corresponds to the non-linearity between a Bayesian neural network's layers. The GP output dimensions are correlated (through the use of matrix Gaussian distributions), and a sparse inducing point approximation is used to make the approximation efficient (following [Damianou and Lawrence]). The work offers an approach to connect Bayesian neural networks to deep GPs (with a certain imposed structure), following the line of work of [Gal and Ghahramani].\\n\\nThe paper is clear (to someone coming from the GP community at least), but seems to be mostly a composition of several existing works. The ideas discussed are interesting, and the assessment is sufficient for a workshop submission, but I would suggest to extend it for a conference submission (see below).\", \"some_comments_for_the_authors\": [\"For a conference submission, I would suggest to compare the model intrinsically with several changes, to see where the improvement comes from:\", \"with no output dependence (V=I)\", \"with no non-linearity between the layers (sigma=I and eg RBF covariance function instead of a linear one - the usual method with deep GPs)\", \"by optimising over \\\\tilde{A} and \\\\tilde{B} instead of putting a variational distribution over these (the usual approach in sparse GPs is to optimise the locations of the inducing points \\\\tilde{A} and optimise / solve analytically for \\\\tilde{B})\", \"Extrinsically, I would suggest to compare to the results of [Bui et al., see below]. Bui et al. collected results from many sources and evaluated various methods for inference in Bayesian neural networks and GPs.\", \"The model is not non-parametric. First, a linear covariance function is of finite-rank. Second, even without a finite-rank covariance function, the sparse input approximation makes this into a parametric approximation. You might want to look into the work of [Titsias] in this regard. 
Lastly, you lose the marginalisation property with multiple layers.\"], \"minor_comments\": [\"You actually use Multiplicative Gaussian Noise rather than \\\"dropout posteriors\\\"\", \"What number of inducing points was used in the experiments?\", \"You might want to cite [Gal and Turner, see below] which also approximate the Gaussian process by placing a Gaussian posterior distribution over the weight matrices.\", \"The words \\\"let's\\\", \\\"GP's\\\", \\\"its'\\\" are misspelt multiple times\", \"H is not defined in eq. preceding eq. 5\", \"The sentence \\\"This corresponds to samples from the marginal...\\\" is very long and difficult to parse.\"], \"pros\": [\"The experiments are of good quality for a workshop paper\", \"The idea of using inducing points with neural networks is intriguing (I've been working on it myself)\", \"The paper is interesting\"], \"cons\": [\"The assessment compares apples and oranges (the deep GP approximation is quite unrelated to the VI and dropout models compared).\", \"Time complexity of the model is O(K**3 + M**3) with K hidden units and M inducing points.\", \"the structure is simplified with diagonalisation assumptions\", \"The model is not non-parametric\"], \"references\": \"Bui et al., \\\"Deep Gaussian Processes for Regression using Approximate Expectation Propagation\\\"\", \"http\": \"//jmlr.org/proceedings/papers/v37/galb15.html\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Answer to reviewer 10 (part 2)\", \"comment\": [\"However, it'd be nice if the authors could expand on their role. Is their role purely computational, i.e. in terms of low-rank approximation to the covariance? Furthermore, in the deep GP [Damianou and Lawrence] the pseudo-data are variational parameters and only affect the approximation and not the model, therefore allowing the approach to remain non-parametric. It is not clear to me that this is the case in this paper: the pseudo-data do seem to change the model, as there is no underlying process (like the Gaussian process).\", \"The role of the pseudo-data is to both provide efficient sampling for the pre-activation latent variables and also maintain the GP-like properties of the original model. To further elaborate: a simple assumption for efficient sampling is to assume that the outputs of each layer are independent; in this way the row covariance matrix is diagonal and therefore we can cheaply get the square root. However this approach discards the (finite) GP properties of the Matrix Gaussian parametrization.\", \"Now if we assume that the real inputs to each layer are conditionally independent given the pseudo data then the input to each layer is of the form (\\\\tilde{A}, a), where \\\\tilde{A} are the pseudo-inputs and \\u2018a\\u2019 is a real vector input. Therefore, an exact sample of the joint distribution p(\\\\tilde{B}, b | \\\\tilde{A}, a) is (\\\\tilde{B}, b) where b ~ p(b | \\\\tilde{B}, \\\\tilde{A}, a) (note that we have to make the assumption that the amount of pseudo-data is less than the input dimensionality; in this way we ensure that the pseudo-data combined with a real input provide a positive definite covariance for the row covariance of p(\\\\tilde{B}, b | \\\\tilde{A}, a)). In this way we can maintain the properties of the original parametrization but we add the computational complexity cost of the inversion of Sigma_11 which is cubic on the amount of pseudo-data. 
However, in practice this does not incur a significant extra cost, as there are usually only a few pseudo-data pairs.\", \"Also, can you please expand on the number of the pseudo data typically used and how their inclusion affects optimization/complexity in practice? How does that compare to the mean-field approach?\", \"The asymptotic computational complexity of a mean-field approach sampled \\u2018locally\\u2019 at the hidden unit level is O(K^2) for the mean and variance of each layer, where K is the input and output dimensionality. The model proposed here also adds the inversion of Sigma_11, thus resulting in O(K^2 + M^3) complexity, where M is the amount of pseudo-data for that layer. Since usually K>>M this does not incur a significant extra cost.\", \"One other place where clarification would be good is the dropout posteriors (below eq. 7), which are only mentioned but not explained.\", \"Indeed we did not explain them further due to space constraints. The form of the dropout posterior used here is q(\\tilde{B}) = \\prod_{i=1}^{M}\\prod_{j=1}^{K} N(\\tilde{b}_{ij} | \\mu_{ij}, \\sigma^2_{m_i}\\sigma^2_{k_j}\\mu^2_{ij}) and similarly for q(\\tilde{A}) (where \\sigma_m is shared with q(\\tilde{B})).\", \"I also recommend to the authors to have a look at a recent paper by Bui et al., \\\"Deep Gaussian Processes for Regression using Approximate Expectation Propagation\\\".\", \"We were not aware of this work during the development of our method. We are investigating it now.\"]}", "{\"title\": \"Answer to reviewer 12 (part 1)\", \"comment\": [ \"We would like to particularly thank the reviewer for the in-depth review that he/she provided. Before addressing each one of the comments we would like to comment on the following remark from the reviewer: \\\"seems to be mostly a composition of several existing works\\\". We argue that this is not the case, for multiple reasons. First of all, this is the first (to the best of our current knowledge) application of the concept of pseudo data in the context of (Bayesian) neural networks, which, empirically, significantly increases performance. Secondly, this is also one of the first works that goes beyond the fully factorized assumption for the variational parameter posteriors of a neural network. Thirdly, with the matrix Gaussian distributions we can straightforwardly introduce correlations among the hidden units, and furthermore with approximations to the covariance we can greatly reduce the amount of parameters of the network. Finally, the fact that we are working with the primal space, i.e. weight space, allows us to easily scale this model to large datasets via the use of mini-batches. This is in contrast to the original deep GPs [Damianou and Lawrence], where the amount of parameters grows linearly with the size of the data, and even the \\u201cvariational auto-encoded deep Gaussian Process\\u201d, which requires distributed computation for the evaluation of some terms in the likelihood in order to be scalable.\", \"We now continue in addressing each one of the concerns.\", \"with no output dependence (V=I)\", \"From our experiments it seems that correlations between the input/output dimensions do play a role (results with a rank one approximation to V and U are better than diagonal V and U). We did not experiment with an identity matrix for V though. 
We will do experiments with an identity matrix for V and U in the future.\", \"with no non-linearity between the layers (sigma=I and eg RBF covariance function instead of a linear one - the usual method with deep GPs)\", \"With our model we are not able to do that as we are \\u201cconstrained\\u201d to the linear kernel function since we are working in the primal (weight) space. In order to use an RBF covariance we will have to experiment with the original deep GP [Damianou and Lawrence] framework. We plan to directly compare against it in the future.\", \"by optimising over \\\\tilde{A} and \\\\tilde{B} instead of putting a variational distribution over these (the usual approach in sparse GPs is to optimise the locations of the inducing points \\\\tilde{A} and optimise / solve analytically for \\\\tilde{B})\", \"We found out that generally putting a distribution over \\\\tilde{A} and \\\\tilde{B} improves the generalisation properties of the model. Without them the model tends to be sometimes overconfident (although the degree of this effect seems to be mostly dataset dependent).\", \"Extrinsically, I would suggest to compare to the results of [Bui et al., see below]. Bui et al. collected results from many sources and evaluated various methods for inference in Bayesian neural networks and GPs.\", \"We were not aware of this work and indeed it makes sense to compare our results.\"]}", "{\"title\": \"Very interesting paper containing significant amount of work. A few concerns regarding clarity and claimed properties.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"review\": \"This is an interesting paper, which includes two key themes: firstly, how to improve the definition and inference in Bayesian Neural Networks. Secondly, how the employed modeling/inference choices make the proposed approach a specific case of a deep Gaussian process. The key idea employed to achieve the above, is to use the matrix variate Gaussian as a distribution over the whole weight matrix together with the variational reparametrization trick.\", \"summary_of_assessment\": \"Overall I enjoyed reading this paper. I have some concerns regarding the clarity and some claimed properties of the approach (see below), but overall it is conveying interesting ideas and the contained amount of novel work -including experiments- is larger than most workshop submissions. Therefore, the paper is well suited for the workshop.\", \"detailed_assessment\": \"The research area of Bayesian neural networks and deep Gaussian processes are currently attracting a lot of interest, which adds to the significance of this paper. The ideas behind the proposed framework are close to those of Gal & Ghahramani 2015. For this reason I expected a more thorough discussion on the qualitative difference of the approaches. \\n\\nThe use of the matrix variate Gaussian in the context of Bayesian NNs is a novel element in this paper, and results in the interesting property of correlations in the hidden units. The arguments in favour of its use are convincing, and the relation to deep GPs follows. However, I am not convinced that there is enough support to claim that the proposed model is non-parametric. 
It is true that the marginalization property of the Gaussian (and matrix variate) gives rise to a GP-like equation, but the proposed model does not inherently seem to be a \\\"process\\\", from the functional support point of view (referring to the mapping between the layers).\\n\\nThe inclusion of pseudo-data is novel and interesting in the context of NNs. However, it'd be nice if the authors could expand on their role. Is their role purely computational, i.e. in terms of low-rank approximation to the covariance? Furthermore, in the deep GP [Damianou and Lawrence] the pseudo-data are variational parameters and only affect the approximation and not the model, therefore allowing the approach to remain non-parametric. It is not clear to me that this is the case in this paper: the pseudo-data do seem to change the model, as there is no underlying process (like the Gaussian process).\\n\\nAlso, can you please expand on the number of the pseudo data typically used and how their inclusion affects optimization/complexity in practice? How does that compare to the mean-field approach?\\n\\nOne other place where clarification would be good is the dropout posteriors (below eq. 7), which are only mentioned but not explained.\\n\\nThe experiments are performed in comparison with related methods in the literature and are convincing. I also recommend to the authors to have a look at a recent paper by Bui et al., \\\"Deep Gaussian Processes for Regression using Approximate Expectation Propagation\\\".\", \"typos\": [\"\\\"presented in Gal & ...\\\" -> \\\"presented in (Gal & ...)\\\". Same typo in other citations too.\", \"Some typos with apostrophes: lets', GPs', its', ...\", \"Some refs need capitalization: bayesian -> Bayesian, gaussian -> Gaussian ...\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Answer to reviewer 12 (part 2)\", \"comment\": [\"The model is not non-parametric. First, a linear covariance function is of finite-rank. Second, even without a finite-rank covariance function, the sparse input approximation makes this into a parametric approximation. You might want to look into the work of [Titsias] in this regard. Lastly, you lose the marginalisation property with multiple layers.\", \"Yes, to be more precise our model has finite support for N_h datapoints for each layer (where N_h is the dimensionality of the input to the layer). However, do note that we can also view the \\\\sigma(x)U\\\\sigma(x)^T kernel as an approximation to a \\\"non-parametric\\\" kernel by employing Mercer's theorem. In our experiments the amount of pseudo-data, M, is always less than N_h therefore with the independence assumptions that we make (i.e. the inputs to each layer are conditionally independent given the pseudo data) the row covariance (the linear kernel) of the joint Gaussian output distribution p(\\\\tilde{B}, b_i) is composed from M + 1 <= N_h datapoints therefore ensuring that the M+1 inputs have indeed a positive definite covariance. As for the second point; indeed we are treating the pseudo data as parameters for optimisation and thus transform the model in a fully parametric one. However this transformation tries to maintain the Gaussian Process properties of each layer. As for the marginalization property; we did not mention that the model maintains *globally* the marginalization property, only locally within each layer. 
Furthermore, apart from the fact that we are using a linear kernel among layers (that has support for a maximum of N_h datapoints per layer) this model maintains the exact same properties as a deep Gaussian Process. For example, if we assume that the weight posterior has zero mean then the resulting distribution of the output of each layer, conditioned on the pseudo data, is the exact same equation as that of a conditional Gaussian Process.\", \"You actually use Multiplicative Gaussian Noise rather than \\\"dropout posteriors\\\"\", \"Indeed it can be also called multiplicative Gaussian noise. We used the term \\u201cdropout posterior\\u201d to be consistent with the nomenclature of [Kingma et al.].\", \"What number of inducing points was used in the experiments?\", \"The number of inducing points was relatively small: 5 for the wine dataset, 20 for the bigger year and protein datasets and 10 for the rest. For the input layer we set an upper bound to the amount of the inducing points, that of the input dimensionality.\", \"You might want to cite [Gal and Turner, see below] which also approximate the Gaussian process by placing a Gaussian posterior distribution over the weight matrices.\", \"Indeed this paper is also somewhat relevant to this work. It was cited in [Gal and Gharamani] therefore we did not include it here.\", \"The words \\\"let's\\\", \\\"GP's\\\", \\\"its'\\\" are misspelt multiple times. The sentence \\\"This corresponds to samples from the marginal...\\\" is very long and difficult to parse.\", \"Thank you very much for pointing these out. We fixed them in the manuscript.\", \"H is not defined in eq. preceding eq. 5\", \"Indeed this is a typo; instead of H we should have B.\", \"The assessment compares apples and oranges (the deep GP approximation is quite unrelated to the VI and dropout models compared).\", \"We think that the comparison between VI and dropout is valid as both treat the same problem as us: that of inference in Bayesian neural networks (and dropout [Gal and Ghahramani] in particular also treats it as a Gaussian process)\", \"Time complexity of the model is O(K**3 + M**3) with K hidden units and M inducing points.\", \"According to our analysis the time complexity is not O(K**3 + M**3). A typical variational Bayesian neural network with a fully factorized Gaussian posterior sampled ``locally'' [Kingma et al.] has asymptotic per-datapoint time complexity O(K^2) for the mean and variance in each layer. Our model adds the extra cost of inverting Sigma_{11}^{-1} that has cubic complexity with respect to the amount of pseudo-data M. Therefore the asymptotic time complexity is O(K^2 + M^3)$ and since M is usually small this does not incur a significantly extra computational cost.\", \"the structure is simplified with diagonalisation assumptions\", \"There is nothing preventing us of using full covariance matrices in our model; only computational reasons when we have a lot of data. In fact in the experiments we used diagonal matrices with rank-1 corrections (e.g. V^{1/2} = diag(v) + uu^T), which correspond to non diagonal covariance matrices.\"], \"references\": \"Kingma et al., \\u201cVariational Dropout and the Local Reparametrization Trick\\u201d\", \"http\": \"//arxiv.org/abs/1506.02557\"}", "{\"title\": \"Answer to reviewer 10 (part 1)\", \"comment\": [\"We would like to primarily thank the reviewer for the in-depth review and constructive criticism that he/she provided. 
We now continue in addressing each one of the concerns:\", \"The ideas behind the proposed framework are close to those of Gal & Ghahramani 2015. For this reason I expected a more thorough discussion on the qualitative difference of the approaches.\", \"Indeed we did not discussed the differences due to space constraints. Formally, Gal & Gharamani 2015 consider independent Gaussians for each column of the weight matrix (which in our case correspond to p(W)=\\\\mathcal{MN}(M, \\\\sigma^2 * I , I)) and do not model the covariance of the hidden units. Furthermore the approximating variational distribution is quite limited as it corresponds to simple Bernoulli noise and delta approximating distributions for the weight matrix: it is a mixture of two delta functions for each column of the weight matrix, one at zero and the other at the mean of the Gaussian. This is in contrast to our parametrization where we can explicitly learn the (possibly non-diagonal) covariance for both the input and output dimensions of each layer through the matrix variate Gaussian posterior. Finally, sampling in the Gal & Gharamani 2015 is done in the weight space and not the function space (as it happens in our model), thus preventing the use of pseudo-data.\", \"However, I am not convinced that there is enough support to claim that the proposed model is non-parametric. It is true that the marginalization property of the Gaussian (and matrix variate) gives rise to a GP-like equation, but the proposed model does not inherently seem to be a \\\"process\\\", from the functional support point of view (referring to the mapping between the layers).\", \"Strictly speaking the proposed model has finite support for datapoints in each layer due to the finite rank nature of the row-covariance: it is required that the amount of datapoints in each layer is less or equal to the dimensionality of the input so as to have a positive definite row covariance. However do note that the general form of the row covariance kernel K(x, y) = sigma(x) U sigma(y) can be seen as an approximation to a non-parametric kernel if we employ Mercer\\u2019s theorem. This approximation becomes tighter by increasing the size of each hidden layer.\"]}" ] }
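Since the matrix variate Gaussian is central to this whole thread, a generic sampling sketch may help readers unfamiliar with it. This shows the textbook factorization of MN(M, U, V); it is not the authors' parametrization or code.

import numpy as np

def sample_matrix_normal(M, U, V, rng=None):
    # Draw W ~ MN(M, U, V): M is the mean matrix, U the row covariance,
    # V the column covariance. If Z has iid standard normal entries, then
    # M + A Z B with U = A A^T and V = B^T B has the desired distribution.
    rng = np.random.default_rng() if rng is None else rng
    A = np.linalg.cholesky(U)        # U = A A^T
    B = np.linalg.cholesky(V).T      # V = B^T B
    Z = rng.standard_normal(M.shape)
    return M + A @ Z @ B

Putting structure on U and V (diagonal, or diagonal plus a rank-one correction, as the authors mention) is what keeps the parameter count and the cost of these factorizations manageable.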
lx9lNjDDvU2OVPy8CvGJ
Adaptive Natural Gradient Learning Based on Riemannian Metric of Score Matching
[ "Ryo Karakida", "Masato Okada", "Shun-ichi Amari" ]
The natural gradient is a powerful method to improve the transient dynamics of learning by considering the geometric structure of the parameter space. Many natural gradient methods have been developed with regards to Kullback-Leibler (KL) divergence and its Fisher metric, but the framework of natural gradient can be essentially extended to other divergences. In this study, we focus on score matching, which is an alternative to maximum likelihood learning for unnormalized statistical models, and introduce its Riemannian metric. By using the score matching metric, we derive an adaptive natural gradient algorithm that does not require computationally demanding inversion of the metric. Experimental results in a multi-layer neural network model demonstrate that the proposed method avoids the plateau phenomenon and accelerates the convergence of learning compared to the conventional stochastic gradient descent method.
[ "riemannian metric", "score", "adaptive natural gradient", "natural gradient", "metric", "powerful", "transient dynamics", "learning", "geometric structure", "parameter space" ]
https://openreview.net/pdf?id=lx9lNjDDvU2OVPy8CvGJ
https://openreview.net/forum?id=lx9lNjDDvU2OVPy8CvGJ
ICLR.cc/2016/workshop
2016
{ "note_id": [ "BNYVDRL04h7PwR1riX16", "2xwyMqP20cpKBZvXtQ9Z", "4QygOJ4GEhBYD9yOFqjD" ], "note_type": [ "review", "review", "review" ], "note_created": [ 1457676707599, 1457647194509, 1458142340116 ], "note_signatures": [ [ "ICLR.cc/2016/workshop/paper/86/reviewer/11" ], [ "ICLR.cc/2016/workshop/paper/86/reviewer/10" ], [ "ICLR.cc/2016/workshop/paper/86/reviewer/12" ] ], "structured_content_str": [ "{\"title\": \"Derivation of natural gradient using score matching divergence. Nice theory, toy experiment only.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"review\": \"This paper shows how natural gradient updates can be computed using a metric derived from the score matching divergence. The same technique could be applied to other divergences. The theory results are very nice. The experimental results are for a toy problem only.\\n\\nThe paper clearly presents its ideas. There are some minor language issues, but they don't interfere with comprehension.\\n\\nAs the authors touch on in the conclusion, this technique would need to be combined with other techniques to be useful for large problems (eg block diagonal approximations to the inverse metric). Out of the box, the algorithm requires storing and manipulating the dense inverse of the metric, whose size is quadratic in the number of parameters.\", \"some_specific_comments\": \"the framework of natural gradient -> the natural gradient framework\\nand a two-layer model with the analytically -> and two-layer models with analytically \\nas the online learning algorithm such that $$, where -> for an online learning algorithm where $$, and where\\n\\nThe right side of the inline equation before equation 4 should be $A^{-1} - \\\\epsilon A^{-1} B A^{-1}$.\\n\\nmaybe \\\"N variables\\\" -> \\\"N input dimensions\\\"?\\n\\nHow was the learning rate eta chosen for SGD and ANG? You shoudl do a grid search? A more compelling experimental section would also include a comparison to quasi-Newton methods. RMSProp would be a natural choice (or ANG+momentum vs. ADAM).\\n\\nI wonder if the MPF objective might also lead to a good metric. MPF consists of a Taylor series approximation to the KL divergence, so the corresponding metric might be very similar to the Fisher information, but without the intractable normalization constant.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Interesting variant of the natural gradient for the Score Matching metric\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"review\": \"The natural gradient has long been known to offer very interesting convergence properties (notably invariance to parametrization and convergence in fewer steps) compared to traditional gradient descent. However, natural gradient descent has not seen a wide adoption mainly due to its high computational cost. 
The update is essentially the same as in SGD but the gradient has to be multiplied (at each time step) by the inverse Fisher matrix, which is an NxN matrix for a model with N parameters.\n One important method to lower the computational cost associated with computing the natural gradient is the online adaptive natural gradient update, which allows the computation of an on-line estimate of the inverted Fisher matrix (thus avoiding the need to estimate a full metric matrix at each step, and a full NxN matrix inversion at each step).\n\nThe present paper presents an adaptive online natural gradient algorithm for the Score Matching loss instead of the usual KL-divergence.\n\nCompared to the KL-divergence natural gradient, a score matching gradient may be applied in settings where the model has an intractable partition function and where the traditional natural gradient method cannot be applied.\n\nAside from a few typos, the paper is quite clear.\n\nThe contribution is new.\n\nOn the down side, the experiments are very limited (one synthetic dataset), but they do demonstrate that the approach can be practical in at least some cases.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"A novel algorithm in information geometry based on matching score: nice approach, preliminary experiments, and several possible applications\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"review\": \"The paper presents a new algorithm based on the principles of information geometry. In particular, the authors introduce a new metric over the space of probability distributions, derived from the Hyvarinen divergence measure.\\nThe derived gradient algorithm represents an alternative to the natural gradient evaluated with respect to the Fisher information matrix.\\nThe authors test the proposed approach for the training of a simple toy neural network and show its superiority compared to standard gradient descent.\\n\\nI think the paper is a good starting point for future work in the training of neural networks and also for other problems where the natural gradient has been successfully applied in the past. The paper is well written, and I suggest accepting it for the conference.\", \"some_comments\": \"Page 1, second paragraph: you write \\\"When we estimate the parameter \\xi with a divergence D, its parameter space has the Riemannian metric matrix...\\\"; this is not well written: the geometry comes from the choice of a specific metric, or from the choice of a divergence function, but it's not directly associated with the estimation procedure in my opinion. Please explain better what you mean.\\n\\nPage 1, last line: to be more clear I would write \\\"is composed of the derivatives of the log-likelihood differentiated with respect to..\\\". Similarly, I would say that the Fisher information metric is composed of the derivatives of the log-likelihood.\", \"page_2\": \"fourth paragraph: \\\"the inversion of THE matrix\\\"\\n\\nPage 2, end of Section 2: can you be more precise when you express the dependence of the complexity on N? Is this the number of inputs in the network?\\n\\nPage 2, Section 3: \\\"Of the proposed methods\\\" remove S\\n\\nPage 2, Section 3: \\\"N-dimensional probability variable\\\": this is not properly defined. 
I would say N-dimensional sample space, or N-dimensional random variable.\", \"same_paragraph\": \"\\\"the approximate metric over THE input data\\\"\", \"page_3\": \"Table: what about the time complexity of the algorithm per iteration? Is it slower than SGD?\\n\\nPage 4: maybe remove the A from the 4th reference (A Philip David)\", \"figure\": \"Why didn't you also add the natural gradient with the Fisher information matrix? You say this method is better; however, you lack the comparison.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
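The inline-equation correction pointed out in the first review of this record ($A^{-1} - \epsilon A^{-1} B A^{-1}$) is the standard first-order expansion of a perturbed inverse, which is also what lets adaptive natural-gradient schemes track the inverse metric without explicit matrix inversion. For reference, these are generic identities and not equations taken from the paper:

$(A + \epsilon B)^{-1} = A^{-1} - \epsilon\, A^{-1} B A^{-1} + O(\epsilon^2)$

$(A + u v^\top)^{-1} = A^{-1} - \frac{A^{-1} u\, v^\top A^{-1}}{1 + v^\top A^{-1} u}$ (Sherman-Morrison, valid when the denominator is nonzero)

The first identity gives the approximate update when the metric estimate changes by a small step $\epsilon B$ per iteration; the second is the exact rank-one special case.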
MwVPvKwRvsqxwkg1t7kY
A Differentiable Transition Between Additive and Multiplicative Neurons
[ "Wiebke Koepp", "Patrick van der Smagt", "Sebastian Urban" ]
Existing approaches to combining both additive and multiplicative neural units either use a fixed assignment of operations or require discrete optimization to determine what function a neuron should perform. However, this leads to an extensive increase in the computational complexity of the training procedure. We present a novel, parameterizable transfer function based on the mathematical concept of non-integer functional iteration that allows the operation each neuron performs to be smoothly and, most importantly, differentiably adjusted between addition and multiplication. This allows the decision between addition and multiplication to be integrated into the standard backpropagation training procedure.
[ "additive", "differentiable transition", "multiplicative neurons", "addition", "multiplication", "approaches", "multiplicative neural units", "fixed assignment", "operations", "discrete optimization" ]
https://openreview.net/pdf?id=MwVPvKwRvsqxwkg1t7kY
https://openreview.net/forum?id=MwVPvKwRvsqxwkg1t7kY
ICLR.cc/2016/workshop
2016
{ "note_id": [ "p8j40N7POFnQVOGWfpJm", "D1KNEQkLOf5jEJ1zfEBj", "MwnDKjJBmCqxwkg1t7PD", "oVg3LLM8pfrlgPMRsBOQ" ], "note_type": [ "review", "comment", "comment", "review" ], "note_created": [ 1457656621256, 1458575820796, 1458579237256, 1457659995353 ], "note_signatures": [ [ "~David_Duvenaud1" ], [ "~Sebastian_Urban1" ], [ "~Sebastian_Urban1" ], [ "ICLR.cc/2016/workshop/paper/153/reviewer/12" ] ], "structured_content_str": [ "{\"title\": \"Wherein I review this paper\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"The main idea seems sensible, and fairly well-explained. Having a differentiable generalization of both addition and multiplication seems like a useful tool to have in general.\\n\\nMy main fear is that the idea isn't novel, but I haven't seen it presented in an ML framework before.\", \"problems\": \"1) It's still not clear to me how to compute the proposed function - I assume it's done iteratively?\\n2) Part of the motivation is training speed, but the authors didn't measure wallclock time. This should have been included.\\n3) The use of \\\\psi and something like \\\\varpsi or \\\\varphi is confusing. Please use distinct letters for which we have names.\\n4) Is the domain restricted to the positive reals only when n = -1 (and we recover log(x)), or is it restricted whenever n < 0?\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Response to the review by David Duvenaud\", \"comment\": \"Thank you for your comments.\\n\\n1) The definition for \\\\Psi given in (3) is indeed iterative. However, to obtain competitive performance in the experiments we use the procedure described below.\\n\\n2) We precompute exp^(n)(x) given by (6) for a sensible range of x and n and store the results in a lookup table. During runs of the neural net we use linearly interpolated values from this lookup table. CUDA GPUs provide very fast hardware primitives and a dedicated memory region (texture memory) for that purpose. Hence evaluating the transfer function (8) requires roughly three times (exp^(n_li), sigmoid, exp^(m_li)) the number of operations as for the standard sigmoid. However, in practice performance is better. During internal testing on a nVidia Quadro K2200 GPU we achieve 2/3 of the performance of a standard sigmoid network with the same number of units.\\nOf course this method requires that x and n are limited to a sensible range, i.e. input data must be normalized as is common practice in machine learning.\\n\\n4) Mathematically the domain becomes restricted to positive reals for n <= -1. \\nNonetheless, very large negative numbers for x < 0 already occur for n close to -1. A method to avoid this issue is to use complex numbers, further details are provided in our arXiv paper at http://arxiv.org/abs/1503.05724\"}", "{\"title\": \"Response to Reviewer 12\", \"comment\": \"Thank you for your comments.\\n\\nSince we implemented no provisions against overfitting in our preliminary experiments, the tanh network will indeed overfit if trained sufficiently long. \\nHowever, the test loss of the proposed exp^(n) network is still better when compared to the best test lost of the tanh network (roughly after 5,000 iterations). This would correspond to an early stopping criterion for the baseline.\\n\\nAlso from the training and test curves, it seems that the exp^(n) has some natural resiliency against overfitting, which could provide to be a useful property. 
We will assess overfitting resiliency using more advanced methods (weight regularization, dropout) in further experiments.\"}", "{\"title\": \"Interesting idea - develop further?\", \"rating\": \"7: Good paper, accept\", \"review\": \"The paper suggests using a differentiable function which can smoothly interpolate between multiplicative and additive gates in neural networks. It is an intriguing idea and the paper is well written. The mathematical ideas introduced are perhaps not novel (a cursory search seems to indicate that Abel's functional equation with f=exp is called the tetration equation and its solution called the iterated logarithm), but their use in machine learning seems to be.\\n\\nThe experiment section is weaker - the problem seems somewhat contrived, with few datapoints. exp^{(n)} gets the best test loss but the worst training loss - are the baselines simply overfitting? (it appears so when looking at the tanh testing curve).\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
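For context on the non-integer functional iteration referred to in the abstract, the responses and this review, the usual construction goes through Abel's functional equation. This is the standard formulation; the specific $\psi$ used by the authors is detailed in their arXiv paper (http://arxiv.org/abs/1503.05724):

$\psi(\exp(x)) = \psi(x) + 1, \qquad \exp^{(n)}(x) = \psi^{-1}\big(\psi(x) + n\big)$

so that $\exp^{(1)} = \exp$, $\exp^{(0)}$ is the identity, $\exp^{(-1)} = \log$, and non-integer values of $n$ move smoothly between these maps, which is what makes the addition/multiplication decision differentiable.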
81DnLL9OEI6O2Pl0UV1w
Hardware-Oriented Approximation of Convolutional Neural Networks
[ "Philipp Gysel", "Mohammad Motamedi", "Soheil Ghiasi" ]
High computational complexity hinders the widespread usage of Convolutional Neural Networks (CNNs), especially in mobile devices. Hardware accelerators are arguably the most promising approach for reducing both execution time and power consumption. One of the most important steps in accelerator development is hardware-oriented model approximation. In this paper we present Ristretto, a model approximation framework that analyzes a given CNN with respect to numerical resolution used in representing weights and outputs of convolutional and fully connected layers. Ristretto can condense models by using fixed point arithmetic and representation instead of floating point. Moreover, Ristretto fine-tunes the resulting fixed point network. Given a maximum error tolerance of 1%, Ristretto can successfully condense CaffeNet and SqueezeNet to 8-bit. The code for Ristretto is available.
[ "ristretto", "approximation", "convolutional neural networks", "widespread usage", "cnns", "mobile devices", "hardware accelerators", "promising", "execution time" ]
https://openreview.net/pdf?id=81DnLL9OEI6O2Pl0UV1w
https://openreview.net/forum?id=81DnLL9OEI6O2Pl0UV1w
ICLR.cc/2016/workshop
2016
{ "note_id": [ "3QxoKp4MGup7y9wltPEg", "MwVY2A13BTqxwkg1t7PR", "6XArYQ9DYIrVp0EvsEBq", "gZ9vBw9rytAPowrRUAZL" ], "note_type": [ "review", "comment", "comment", "review" ], "note_created": [ 1458232264391, 1458266188869, 1458229389508, 1458184956466 ], "note_signatures": [ [ "ICLR.cc/2016/workshop/paper/136/reviewer/12" ], [ "~Philipp_Matthias_Gysel1" ], [ "~Philipp_Matthias_Gysel1" ], [ "ICLR.cc/2016/workshop/paper/136/reviewer/11" ] ], "structured_content_str": [ "{\"title\": \"Interesting workshop paper\", \"rating\": \"7: Good paper, accept\", \"review\": \"The paper proposes a framework for quantizing the weights & activations within models trained in the Caffe framework, dramatically reducing the memory and computation requirements for big convnets used for image recognition. This makes the models useful for deployment on mobile devices and other low-power platforms. \\n\\nThe system is well designed and achieves compelling results, across 3 different models (widely varying in size). The application is an important one.\", \"several_points\": [\"The quantizations are performed independently for each layer. How can you be sure that the errors won\\u2019t compound up, making the final outputs inaccurate.\", \"Fine-tuning is an obvious improvement, so it great that this is a future refinement.\", \"It would be good to show results for the latest ResNet and VGG models which use 3x3 kernels. It isn\\u2019t clear how these will compress. Also some implementations use the Winograd transforms, but might also affect the precision that you can get away with.\", \"In summary, it is an interesting piece of work and should be accepted to the workshop track.\"], \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Paper is updated with results and details on fine-tuning\", \"comment\": \"Dear Reviewer, thank you very much for your reply.\\n\\nIndependent quantization\\n---------------------------------\\nThis is an important and interesting question. Indeed, quantizing one network part affects the \\\"optimal\\\" quantization strategy of other network parts. As described in section 4, we first analyse the range of numbers (weights + outputs) to find the necessary number of bits in the integer part. We chose this strategy, which ensures no saturation happens. Other strategies might yield better results. We found for small networks, a thorough DSE yields slightly better results. However, for large networks, a thorough DSE would probably exceed the hardware developer's patience. Moreover, with fine-tuning in the loop, the partition of integer and fractional part becomes a little less important. Once Ristretto made its final choice of integer bits, it starts analyzing the required bit-width. Please let me know in case this doesn't answer your question yet.\\n\\nFine-tuning\\n---------------\\nWe added results on fixed point fine-tuning. We successfully fine-tuned 3 nets on the ImageNet data set. Moreover, the paper contains a detailed description how Ristretto fine-tunes discrete-valued weights.\\n\\nLatest ResNet and VGG\\n------------------------------\\nThanks for the great suggestion. We added 2 more nets to the paper. We now present fine-tuned AlexNet (CaffeNet), GoogLeNet and SqueezeNet, all of which classify images into the 1000 ImageNet categories. 
ResNet would definitely be very interesting too.\"}", "{\"title\": \"We add results on fine tuned GoogleNet and SqueezeNet\", \"comment\": \"Thank you very much for your suggestions and questions.\", \"recent_architectures\": \"--------------------------\\nAccording to your advise, we used our Ristretto framework to condense GoogleNet and SqueezeNet (http://arxiv.org/abs/1602.07360). Both nets were trimmed to 8-bit in convolutional and fully connected layers, as well as fine tuned in fixed point. Here we report top-1 classification accuracies on the ImageNet validation data set:\\nSqueezeNet, 32-bit floating point: 57.68%, 8-bit dynamic fixed point: 57.09% (the resulting net parameter size is below 2MB).\\nBVLC GoogLeNet, 32-bit floating point: 68.93%, 8-bit dynamic fixed point: 66.49%.\\nWe updated the paper with these numbers. Moreover, we will add a section on the fine tuning procedure.\", \"dynamic_fixed_point\": \"--------------------------\\nYou are right, most layer outputs are in a similar range. The same holds for parameters. The big advantage of dynamic fixed point is that we can use different numerical representations for the layer outputs than for parameters. While layer outputs can be relatively big, network parameters are much smaller and require more bits in the fractional part.\", \"batch_normalization\": \"--------------------------\\nI assume you were referring to batch normalization layers. The intermediate results in batch normalization (as well as local response normalization, LRN) span a wide dynamic range, which is the reason why most recently proposed FPGA implementations favor floating point arithmetic for these layers (or nets without normalization layers, such as VGG and SqueezeNet). Since convolutional and fully connected layers make up for the large part of arithmetic operations and layer parameters, we first focused on these layers. As a side note, the current version of Ristretto supports custom mini-floating point arithmetic, which can be applied to LRN layers.\", \"helper_tool_for_scientific_breakthroughs\": \"-----------------------------------------------\\nEven though there have been various proposals for network trimming, to the best of our knowledge, there is no open-source project that would allow for fast and in-depth analysis of different fixed point representations for deep CNNs. We are confident Ristretto will help researchers to speedup the development of hardware accelerator designs.\"}", "{\"title\": \"Offline model quantization tool, useful tool but not scientifically groundbreaking.\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"The authors present a framework that can quantize Caffe models into 8-bit and lower fixed-point precision models, which is useful for lowering memory and energy consumption on embedded devices. 
The compression is an iterative algorithm that determines data statistics to figure out activation and parameter ranges that can be compressed, and conditionally optimizes convolutional weights, fully connected weights and activations given the compression of the other parts.\\nThis work focuses on processing models already trained with high numerical precision (32 bits float) and compress them, as opposed to other work that tries to train directly with quantized operations.\\n\\nResults seem good (trimming AlexNet model from 32-bit floating point to 8 bits with only .3% degradation), however I am not familiar enough with this domain to know how this compares to other quantization work and cannot comment on originality.\\n\\nWhile this is a very useful tool to have for some people, it is not very significant from a scientific point of view.\", \"comment\": [\"The tested networks do not have batch-normalization layers I assume, batchnorm is pretty standard nowadays and thus dynamic fixed point may be much less useful when things are normalized. The paper would be stronger if it showed results for more recent architectures as well and answer this point.\"], \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
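To make the dynamic fixed point scheme discussed in this record concrete, here is a rough per-tensor quantization sketch. It only illustrates the general idea (choose the integer/fractional split from the observed range, then round with saturation); it is not Ristretto's actual algorithm, and the bit-allocation details in the tool may differ.

import numpy as np

def to_dynamic_fixed_point(x, total_bits=8):
    # Choose an integer length (including the sign bit) that roughly covers
    # the largest magnitude in this tensor, give the remaining bits to the
    # fractional part, then round to that grid with saturation.
    max_abs = float(np.max(np.abs(x)))
    int_bits = 1 if max_abs == 0 else int(np.ceil(np.log2(max_abs))) + 1
    int_bits = min(max(int_bits, 1), total_bits)
    frac_bits = total_bits - int_bits
    scale = 2.0 ** frac_bits
    qmin, qmax = -(2 ** (total_bits - 1)), 2 ** (total_bits - 1) - 1
    q = np.clip(np.round(x * scale), qmin, qmax)
    return q / scale, frac_bits

Because weights and layer outputs occupy different ranges, each would get its own fractional length, which is the point of "dynamic" fixed point made in the author responses above.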
ROVmN8wyOSvnM0J1IpNm
Understanding Very Deep Networks via Volume Conservation
[ "Thomas Unterthiner", "Sepp Hochreiter" ]
Recently, very deep neural networks set new records across many application domains, like Residual Networks at the ImageNet challenge and Highway Networks at language processing tasks. We expect further excellent performance improvements in different fields from these very deep networks. However, these networks are still poorly understood, especially since they rely on non-standard architectures. In this contribution we analyze the learning dynamics which are required for successfully training very deep neural networks. For the analysis we use a symplectic network architecture which inherently conserves volume when mapping a representation from one layer to the next. Therefore it avoids the vanishing gradient problem, which in turn allows thousands of layers to be trained effectively. We consider highway and residual networks as well as the LSTM model, all of which have approximately volume conserving mappings. We identified two important factors for making deep architectures work: (1) (near) volume conserving mappings through $x = x + f(x)$ or similar (cf.\ avoiding the vanishing gradient); (2) controlling the drift effect, which increases/decreases $x$ during propagation toward the output (cf.\ avoiding bias shifts).
[ "deep networks", "volume", "volume conservation", "deep neural networks", "residual networks", "mappings", "new records", "many application domains", "imagenet challenge" ]
https://openreview.net/pdf?id=ROVmN8wyOSvnM0J1IpNm
https://openreview.net/forum?id=ROVmN8wyOSvnM0J1IpNm
ICLR.cc/2016/workshop
2016
{ "note_id": [ "GvV1QJJkoc1WDOmRiMQE", "nx9215vvoF7lP3z2iomr", "VAVwRgg9oFx0Wk76TAQv" ], "note_type": [ "review", "review", "review" ], "note_created": [ 1457573170175, 1457645935173, 1457649525786 ], "note_signatures": [ [ "ICLR.cc/2016/workshop/paper/188/reviewer/10" ], [ "~Philemon_Brakel1" ], [ "ICLR.cc/2016/workshop/paper/188/reviewer/12" ] ], "structured_content_str": [ "{\"title\": \"Interesting topic\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"review\": \"This is an important topic: models with linear pathways have been a major part of recent successes in deep learning. The factors identified sound interesting. I'd be curious to learn more; it sounds like a good workshop poster.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"The paper presents interesting results and an interesting hyopthesis for the success of certain neural network architectures. I didn't find the theoretical motivation very convincing and the experiments are limited. Nonetheless, I consider this paper a worthy contribution to the workshop.\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"The paper proposes that deep neural network architectures that work well in practice, mainly do so because of their volume preserving properties. Inspired by this, the authors propose a neural network architecture that is volume preserving by design. According to their hypothesis, this type of network can still be trained successfully after stacking hundreds of layers.\\n\\nI found the paper pleasant to read and very clear. The idea is relatively simple, but from a practical point of view, this can actually be considered a positive attribute.\\n\\nI found the theoretical motivations for the importance of volume preservation unconvincing. A mapping that preserves volume, does not necessarily preserve the norm of the vectors it acts upon. The matrix diag(5, 0.2) has a Jacobian with determinant 1, but repeatedly applying it to a vector of ones would still let the first element grow and the second element vanish at an exponential rate.\\n\\nAnother shortcoming of the paper is the absence of comparisons with the other architectures that are discussed. The empirical evidence for the volume preservation hypothesis would have been much stronger if the amount of success of the very deep networks could directly be related to the extent to which the volume preserving property holds for them. The highway networks and residual networks are only approximately volume preserving, so it would be very interesting to see if exactly volume preserving nets can be used to train even deeper networks. \\n\\nAll in all, I still find the results and ideas interesting enough for a workshop presentation.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Interesting topic, but few results\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"Based on older work by Deco & Brauer (1995), in this workshop submission the authors introduce Deep Volume Conserving Neural Networks, which can be trained with many layers thanks to triagonal weight matrices. 
The authors offer a perspective on recent related work (Highway Nets, Residual Nets).\", \"pros\": \"interesting topic, clearly written.\", \"cons\": \"some concerns over the lack of content.\\n\\nAs stated by the authors, the underlying idea is not necessarily new (there was also other recent work at this venue exploring similar ideas: NICE: Non-linear Independent Components Estimation, Dinh et al, 2015). The question thus is whether the presented architecture works well in practice, which is not explored much.\\n\\nThe authors present an experiment on a variant of MNIST. They provide evidence that many layers can be trained, but report that the nets overfit to the training data. Rather than restricting the connectivity of the networks further to reduce their complexity, as suggested by the authors, I would rather recommend to tackle more challenging datasets; after all, the advantage of many layers in a feedforward net should be more expressive power.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
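A compact way to see the volume-conservation property debated in these reviews is the additive coupling construction from NICE, which reviewer 12 cites; the sketch below shows that generic construction, not the authors' symplectic architecture.

import numpy as np

def additive_coupling(x, f):
    # Split the features, keep one half unchanged and shift the other half
    # by a function of the first: y1 = x1, y2 = x2 + f(x1).
    # The Jacobian is block triangular with ones on the diagonal, so its
    # determinant is exactly 1 and volume is conserved no matter how
    # complex f is.
    d = x.shape[-1] // 2
    x1, x2 = x[..., :d], x[..., d:]
    return np.concatenate([x1, x2 + f(x1)], axis=-1)

As one of the reviews above points out, a unit Jacobian determinant alone does not stop individual directions from growing or shrinking, which is why the drift control named as factor (2) in the abstract is treated as a separate requirement.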
BNYAA7gNBi7PwR1riXzR
Guided Sequence-to-Sequence Learning with External Rule Memory
[ "Jiatao Gu", "Baotian Hu", "Zhengdong Lu", "Hang Li", "Victor O.K. Li" ]
External memory has been proven to be essential for the success of neural network-based systems on many tasks, including Question-Answering, classification, machine translation and reasoning. In all those models the memory is used to store instance representations of multiple levels, analogous to “data” in the Von Neumann architecture of a computer, while the “instructions” are stored in the weights. In this paper, we however propose to use the memory for storing part of the instructions, and more specifically, the transformation rules in sequence-to-sequence learning tasks, in an external memory attached to a neural system. This memory can be accessed both by the neural network and by the human experts, hence serving as an interface for a novel learning paradigm where not only the instances but also the rule can be taught to the neural network. Our empirical study on a synthetic but challenging dataset verifies that our model is effective.
[ "memory", "instructions", "neural network", "external rule memory", "learning", "essential", "success", "neural", "systems" ]
https://openreview.net/pdf?id=BNYAA7gNBi7PwR1riXzR
https://openreview.net/forum?id=BNYAA7gNBi7PwR1riXzR
ICLR.cc/2016/workshop
2016
{ "note_id": [ "6XAgmP33RtrVp0EvsEL4", "r8ljowKREf8wknpYt5yw", "L7VjNwg2vSRNGwArs4gK" ], "note_type": [ "review", "review", "review" ], "note_created": [ 1457896292912, 1457618684510, 1457685138456 ], "note_signatures": [ [ "ICLR.cc/2016/workshop/paper/112/reviewer/12" ], [ "ICLR.cc/2016/workshop/paper/112/reviewer/10" ], [ "ICLR.cc/2016/workshop/paper/112/reviewer/11" ] ], "structured_content_str": [ "{\"title\": \"Nice ideas, but poor description of the model\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"The paper proposes a novel use of the external memory for recording rules about how inputs should be processed. A rule consists of an input and output sequences pair with option to have a common variable in them. Given an input sequence, the model first finds an applicable rule from its memory, and uses it to extract a substring from the input using a pointer network. Then, a output sequence is generated by an RNN conditioned on the target rule and the extracted substring. This way, it is possible to generalize to unseen rules during testing, although the performance was not as good as the seen rules.\\n\\nThe model description Section 2 was missing lot of details, especially the part about the decoder. Some symbols in equations 1, 2 was not defined in the text, which made it hard to follow what is exactly happening. The subsection \\\"hybrid encoder-decoder\\\" definitely needs more explaining or equations. It would be nice to have another figure about the decoder similar to figure 2.\\n\\nAlthough the task considered is artificial, I think it is good start and exposing a flaw in current seq2seq architectures. However, it feels like the proposed model is tailored to this specific task, so might have a problem generalizing to different types of rules. \\n\\nAn idea of using the memory as a repository of learned skills is different from recent papers about external memory, and definitely an interesting research direction. However, the paper needs more clear and detailed description of the model to be accepted.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Some good ideas, but clearer explanation of the model is needed\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"This paper proposes conditioning sequence to sequence models based on rules stored in an indexible differentiable memory (basically attention). The idea is interesting, although somewhat encompassed by the more general Neural Programmer-Interpreters of Reed and De Freitas (http://arxiv.org/abs/1511.06279), although that paper perhaps had not been accepted to ICLR at submission time for this one. The two main problems with this paper, which tamper my enthusiasm, are that the model is not as clearly defined as it could be in section 2. The equations suffer from not have symbols consistently defined in the text, and an example could be given (other than in the figures). Mapping the mathematical description of the model to the figures (esp. figure 1) is not evident.\\n\\nRegarding the evaluation, the task is synthetic and fairly small scale, so I would tone down the claims to generality of the model on its basis made in the second paragraph of section 3. A normal seq2seq baseline would have been nice. The results for the baselines are somewhat surprising (0% for the pointer network baseline). The no-rule part of the task is just copying, no? 
It is not clear why simple pointer networks cannot perfectly solver this, since an LSTM can with the constraint on sequence lengths being similarly distributed between training and testing. It is unclear why DNN underperforms an inner product similarity metric.\\n\\nOverall, this is not a bad submission for a workshop paper, but I would have liked clearer explanation of the model and some indication of what further experiments could be performed to test it.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Nice idea, but hand-engineered to match an artificial task\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"The paper proposes a twist on the sequence-to-sequence learning paradigm in which an external rule-set specifies how to generate responses for inputs matching the rules. Furthermore, a recurrent encoder-decoder network with attention mechanism over the rule-set is presented as a solution to the proposed problem. The paper demonstrates that the architecture can learn to transduce sequences using the rule-set and that it is somewhat able to generalize to the case when new rules are entered into the rule-set (which causes a drop from 90% to 69% accuracy).\\n\\nThe rules consist of a pattern and result, the pattern contains a string to be matched to the input and optionally a wildcard. The result contains a new string to be written and optionally a symbol to be replaced with all words matched to the wildcard.\", \"the_model_merges_three_concepts\": \"the attention mechanism of Bahdanau et al \\\"Neural machine translation by jointly learning to align and translate\\\", the external memory from Weston et al \\\"Memory networks\\\" and the ability to indicate individual entries in the input sequence from Vinyals et al., \\\"Pointer networks\\\". The main novelty of the paper is the task which promises to blend the power of recurrent neural network transducers with hand-engineered rule sets.\\n\\nThe model description is hard to comprehend. It would be helpful to expand it and write equations for all model parts.\", \"some_technical_problems_in_the_presentation_of_the_model\": [\"Symbols in equations are reused (e.g. the \\\"e\\\" in equation (1) and (2) refers to different functions), and are not used consistently (eq. (2) uses E to denote the rule-set, while equation (3) uses scriptR).\", \"It would be helpful to label signals in Fig. 2 by symbols used in eq. (2).\", \"The relationship between Fig 1. and Fig 2. is not clear. Maybe a box should be drawn in Fig. 1 indicating which part of the model is presented in Fig. 2?\", \"The chosen task (rewriting strings using auxiliary set of rules) seems to be difficult to solve with recurrent neural networks and the proposed architecture also struggles with it. The paper list various tricks to make training feasible, including extensive use of pointer networks to support the wildcard match in rules and introduce a pre-training step with extra supervision indicating which words match with the wildcard.\"], \"pros_and_cons\": [\"good idea to embed some form of human knowledge into neural transducers\", \"the proposed task seems very artificial and limited\", \"the proposed model essentially emulates an algorithm to solve string matching with wildcard capture, and requires extensive supervision to be trained. 
It is hard to see how the model can generalize to other tasks\"], \"minor_comments\": \"for a input sequence -> for an input sequence\\n\\\"a certain indicate vector...\\\" -> rewrite, the sentence is incomprehensible\\n\\\"T(.)is used to transfer examples to their corresponding rules\\\" -> did you mean to transform examples with rules? Please clarify\\n\\\"SRSS (...) has difficulty and scalability to extend to various...\\\" -> I presume you mean that the proposed task has the potential to mimic real-world tasks? Please clarify the sentence.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
q7kqBKMN2U8LEkD3t7Xy
Why are deep nets reversible: A simple theory, with implications for training
[ "Sanjeev Arora", "Yingyu Liang", "Tengyu Ma" ]
Generative models for deep learning are promising both for improving understanding of the model and for yielding training methods requiring fewer labeled samples. Recent works use generative model approaches to produce the deep net's input given the value of a hidden layer several levels above. However, there is no accompanying "proof of correctness" for the generative model, showing that the feedforward deep net is the correct inference method for recovering the hidden layer given the input. Furthermore, these models are complicated. The current paper takes a more theoretical tack. It presents a very simple generative model for ReLU deep nets, with the following characteristics: (i) The generative model is just the reverse of the feedforward net: if the forward transformation at a layer is $A$ then the reverse transformation is $A^T$. (This can be seen as an explanation of the old weight tying idea for denoising autoencoders.) (ii) Its correctness can be proven under a clean theoretical assumption: the edge weights in real-life deep nets behave like random numbers. Under this assumption ---which is experimentally tested on real-life nets like AlexNet--- it is formally proved that the feedforward net is a correct inference method for recovering the hidden layer. The generative model suggests a simple modification for training: use the generative model to produce synthetic data with labels and include it in the training set. Experiments support this theory of random-like deep nets and show that it helps training. This extended abstract provides a succinct description of our results while the full paper is available on arXiv.
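The core reversibility claim above can be checked numerically on a single random layer: sample a sparse hidden code h, generate an "observation" x with the reverse map ReLU(A^T h), and see how well the forward map ReLU(A x) recovers h. The sketch below makes several assumptions not stated in the abstract (layer sizes, the sparsity level, and the absence of the offsets used in the full paper), so it illustrates the flavour of the result rather than the formal statement.

    import numpy as np

    rng = np.random.default_rng(0)
    n_hidden, n_visible = 200, 600

    # Random-like weights, as assumed by the theory.
    A = rng.normal(0.0, 1.0 / np.sqrt(n_visible), size=(n_hidden, n_visible))

    # Sparse non-negative hidden code (the sparsity level is an arbitrary choice here).
    h = np.maximum(rng.normal(size=n_hidden), 0.0) * (rng.random(n_hidden) < 0.1)

    # Reverse (generative) pass x = ReLU(A^T h), then forward pass h_hat = ReLU(A x).
    x = np.maximum(A.T @ h, 0.0)
    h_hat = np.maximum(A @ x, 0.0)

    # With random-like weights, h_hat is expected to correlate clearly with h (up to scale).
    print(np.corrcoef(h, h_hat)[0, 1])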
[ "generative model", "training", "deep nets reversible", "simple theory", "implications", "input", "correctness", "correct inference", "hidden layer", "deep nets" ]
https://openreview.net/pdf?id=q7kqBKMN2U8LEkD3t7Xy
https://openreview.net/forum?id=q7kqBKMN2U8LEkD3t7Xy
ICLR.cc/2016/workshop
2016
{ "note_id": [ "81rOkAP49c6O2Pl0UVE2", "k8WD1v14NuOYKX7ji40V" ], "note_type": [ "comment", "comment" ], "note_created": [ 1460649621649, 1460616887555 ], "note_signatures": [ [ "~Yingyu_Liang1" ], [ "~Raja_Giryes1" ] ], "structured_content_str": [ "{\"title\": \"Thanks for the related works\", \"comment\": \"Thanks for point out the related papers! Both papers provide nice results about reconstruction while our focus is more on reversibility and connection to generative models. More precisely, we assume that the data are generated from the top hidden layer and would like to recover the hidden layer from the data (instead of reconstructing the data from the hidden layer). Also, we focus on the case when the generative model and the recovery procedure are coupled (ie, weight tieing).\"}", "{\"title\": \"related works\", \"comment\": \"There are two strongly related works to this one:\\nThe first is the one from ICML 2014\", \"http\": \"//arxiv.org/abs/1504.08291 (this is the iclr version: http://arxiv.org/pdf/1412.5896v3.pdf)\\nthat shows that it is possible to recover the input of a network's layer from its output (and therefore the input of the whole network from its output) using assumption of random weights in the network and that the data is low dimensional.\"}" ] }
L7VOOy8B6hRNGwArs4Bn
Robust Convolutional Neural Networks under Adversarial Noise
[ "Jonghoon Jin", "Aysegul Dundar", "Eugenio Culurciello" ]
Recent studies have shown that Convolutional Neural Networks (CNNs) are vulnerable to small input perturbations called "adversarial examples". In this work, we propose a new feedforward CNN that improves robustness in the presence of adversarial noise. Our model adds stochastic noise to the input image and within the CNN model. The proposed model operates in conjunction with a CNN trained with either a standard or an adversarial objective function. In particular, convolution, max-pooling, and ReLU layers are modified to benefit from the noise model. Our feedforward model is parameterized by only a mean and variance per pixel, which simplifies computations and makes our method scalable to deep architectures. In tests on CIFAR-10 and ImageNet, the proposed model outperforms other methods, and the improvement is more evident for difficult classification tasks or stronger adversarial noise.
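The "mean and variance per pixel" bookkeeping mentioned above can be illustrated for one modified layer: when a pre-activation is treated as Gaussian, the ReLU output has closed-form first and second moments. The sketch below covers only that single step under a Gaussian-noise assumption; the authors' convolution and max-pooling variants, and their exact noise model, are not reproduced.

    import numpy as np
    from scipy.stats import norm

    def relu_moments(mu, sigma):
        """Mean and variance of ReLU(z) for z ~ N(mu, sigma^2), elementwise."""
        a = mu / sigma
        mean = mu * norm.cdf(a) + sigma * norm.pdf(a)
        second = (mu ** 2 + sigma ** 2) * norm.cdf(a) + mu * sigma * norm.pdf(a)
        return mean, np.maximum(second - mean ** 2, 0.0)

    # Toy per-pixel means and noise standard deviations.
    m, v = relu_moments(np.array([0.3, -0.5, 2.0]), np.array([1.0, 1.0, 0.5]))
    print(m, v)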
[ "model", "convolutional neural networks", "cnns", "vulnerable", "small perturbation", "input", "adversarial examples" ]
https://openreview.net/pdf?id=L7VOOy8B6hRNGwArs4Bn
https://openreview.net/forum?id=L7VOOy8B6hRNGwArs4Bn
ICLR.cc/2016/workshop
2016
{ "note_id": [ "3Qx7joOmLCp7y9wltPvx", "K1VgzwpE4S28XMlNCVA7", "E8VYgYX7JH31v0m2iDPz" ], "note_type": [ "review", "review", "review" ], "note_created": [ 1458061671344, 1457581442231, 1457630307056 ], "note_signatures": [ [ "ICLR.cc/2016/workshop/paper/116/reviewer/11" ], [ "~Jiashi_Feng1" ], [ "ICLR.cc/2016/workshop/paper/116/reviewer/10" ] ], "structured_content_str": [ "{\"title\": \"This paper studies the effect of additive noise combined with adversarial noise\", \"rating\": \"3: Clear rejection\", \"review\": \"This short paper studies how to improve the stability of CNN architectures to adversarial noise. For that purpose, it proposes a Gaussian additive noise model applied to each layer, and the result is combined with adversarial training on large scale classification.\", \"pros\": [\"The model is simple\"], \"cons\": [\"Unfortunately, this paper suffers from lack of quality, clarity, originality and significance.\", \"The paper does not clearly motivate what is stated in the title. My understanding after reading the paper several times is that the authors train using a noise model that is not the adversarial one, but a gaussian additive noise injected at each layer, but then evaluate the performance at test time using the adversarial noise. What is the motivation to use Gaussian noise then? Is there any principled reason to consider this stochastic model in order to approximate the adversarial noise?\", \"Section 2 presents a series of qualitative approximations to justify why one can use Gaussian noise after rectifications and poolings. The resulting noise model thus seems to be the superposition of gaussian noise injected at each layer. The statements involving the Central Limit theorem are too imprecise -- if one considers for example recent CNN architectures with 3x3 spatial kernels, it is unclear to me why the output of the ReLu is well approximated with a Gaussian, since what matters is not how many terms you average, but how many independent (or uncorrelated) terms you average.\", \"Section 3 presents the numerical experiments, but we see no error bars in the results. How statistically significant are these numbers?\", \"Also, I do not really understand the Imagenet results. How come the accuracy using purely Gaussian noise is significantly better than the error using Gaussian noise + adversarial noise, given that the validation set is corrupted with adversarial noise? This behavior is strange, and does not appear to happen in the cifar experiments.\", \"How do you interpret that the gaussian regularization indeed slightly better (if we believe the differences are statistically significant) when test examples are corrupted by adversarial noise, but slightly worse when no noise is added?\", \"Overall, my impression is that this paper does not give enough rigorous insights into the problem they are addressing, neither theoretical nor experimental.\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"This paper propose a systematic method, randomize the inputs by adding Gaussian noise, to robustify feedforward CNN to adversarial noise.\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": [\"This paper is investigating an interesting problem for deep learning, i.e. how to robustify a deep neural network. The paper is well written and easy to follow. 
The method proposed in this work is novel to me.\", \"Pros\", \"An interesting problem\", \"A feasible and efficient solution\", \"Cons\", \"There is no justification why adding Gaussian noise into the input is a better choice, compared with other random distribution. I understand Gaussian distribution is easy for computation. But it is not so straightforward why adding Gaussian randomness could improve the robustness of the method.\", \"The experiments do not show benefits of introducing randomness into inputs, for standard training/test setting. Compared with baseline and standard training + stochastic FF, the performance actually drops by more than 1% using the stochastic FF on CIFAR 10. Although certain robustness to adversarial noise is demonstrated, I am more glad to see the proposed method can improve the performance of standard setting and realistic problem.\"], \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"This paper proposes a method to makes CNNs more robust to noisy inputs.\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"This paper proposes a new method to make CNNs more robust to adversarial noise by adding Gaussian noise to the input images and CNN layers. The paper is unclear in many parts making it difficult to determine the exact computation being proposed. The best best of my understanding, the authors are proposing adding Gaussian noise to the input images and then at each subsequent layer sampling from a Gaussian with the mean and variance computed from the mean and variance of that layers input. This is not clearly explained and it is also not clear how to train this model.\", \"pros\": [\"The stochasticity results in a slight improvement of the same deterministic network.\"], \"cons\": [\"The title and introduction suggest the authors are proposing a method to make CNNs robust to adversarial noise. However, adversarial examples are never used in the method and it seems to me that the method being proposed is perhaps just a general regularizing technique not specifically related to adversarial examples. This would be a fine contribution, but the way the method is motivated is very misleading.\", \"The proposed method performs significantly worse than Goodfellow et al.'s (2014) method of training with adversarial examples (when evaluated on adversarial examples, which is the setting in which the authors are claiming to address). Combing the proposed method with Goodfellow et al.'s (2014) method provides a very slight improvement however this could simply be due to the regularizing effects of adding stochasticity.\", \"Comparisons with other methods of introducing stochasticity to a network are missing. Some questions remain such as: what happens for noise distributions other than Gaussian? How does this compare to the simpler method of just adding noise to the input (and/or hidden units) and forward propagating as normal during training?\"], \"some_additional_minor_comments\": [\"typo: page one \\\"...during training but not has been applied...\\\"\", \"page 2 you say X in R^3, when I think you mean X in R. If R^3 was in fact meant then the channel index should be dropped I think?\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
zvwDjZ3GDfM8kw3ZinXB
Doctor AI: Predicting Clinical Events via Recurrent Neural Networks
[ "Edward Choi", "Mohammad Taha Bahadori", "Andy Schuetz", "Walter F. Stewart", "Joshua C. Denny", "Bradley A. Malin", "Jimeng Sun" ]
Large amounts of Electronic Health Record (EHR) data have been collected over millions of patients over multiple years. The rich longitudinal EHR data document the collective experiences of physicians including diagnosis, medication prescription and procedures. We argue it is now possible to leverage the EHR data to model how physicians behave, and we call our model Doctor AI. Towards this direction of modeling clinical behavior of physicians, we develop a successful application of Recurrent Neural Networks (RNN) to jointly forecast the future disease diagnosis and medication prescription along with their timing. Unlike traditional classification models where a single target is of interest, our model can assess the entire history of patients and make continuous and multilabel predictions based on patients' historical data. We evaluate the performance of the proposed method on a large real-world EHR dataset covering 260K patients over 8 years. We observe that Doctor AI can perform differential diagnosis with similar accuracy to physicians. In particular, Doctor AI achieves up to 79% recall@30, significantly higher than several baselines. Moreover, we demonstrate the generalizability of Doctor AI by applying the resulting models to data from a completely different medical institution, achieving comparable performance.
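As a rough illustration of the setup described above (multi-hot visit vectors, a recurrent state over the visit history, and joint prediction of the next visit's codes and its timing), here is a minimal PyTorch sketch. The sizes, the single GRU layer, the embedding layer (where a Skip-gram-style initialization could be loaded), the softplus duration head, and the toy loss are illustrative assumptions rather than the authors' exact architecture.

    import torch
    import torch.nn as nn

    class NextVisitModel(nn.Module):
        def __init__(self, n_codes, emb_dim=128, hidden=256):
            super().__init__()
            # A Skip-gram-style code embedding could be loaded here as initialization.
            self.embed = nn.Linear(n_codes, emb_dim, bias=False)
            self.gru = nn.GRU(emb_dim, hidden, batch_first=True)
            self.code_head = nn.Linear(hidden, n_codes)  # multi-label logits per visit
            self.time_head = nn.Linear(hidden, 1)        # time until the next visit

        def forward(self, visits):  # visits: (batch, n_visits, n_codes), multi-hot
            h, _ = self.gru(self.embed(visits))
            return self.code_head(h), nn.functional.softplus(self.time_head(h))

    model = NextVisitModel(n_codes=1000)
    x = torch.zeros(2, 5, 1000)  # toy batch: 2 patients with 5 visits each
    logits, dt = model(x)
    # Predict visit t+1 (its codes, plus a toy target gap of 1.0) from visits up to t.
    loss = nn.functional.binary_cross_entropy_with_logits(logits[:, :-1], x[:, 1:]) \
           + ((dt[:, :-1, 0] - 1.0) ** 2).mean()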
[ "doctor ai", "patients", "physicians", "clinical events", "data", "medication prescription", "ehr data", "electronic health record" ]
https://openreview.net/pdf?id=zvwDjZ3GDfM8kw3ZinXB
https://openreview.net/forum?id=zvwDjZ3GDfM8kw3ZinXB
ICLR.cc/2016/workshop
2016
{ "note_id": [ "VAV4vPjogix0Wk76TAkO", "XL9mPPjPVHXB8D1RUGM8", "jZ9XlpA4zfnlBG2Xfz10" ], "note_type": [ "review", "review", "review" ], "note_created": [ 1457713545113, 1458419416668, 1457656276994 ], "note_signatures": [ [ "ICLR.cc/2016/workshop/paper/138/reviewer/12" ], [ "ICLR.cc/2016/workshop/paper/138/reviewer/11" ], [ "~Greg_Corrado1" ] ], "structured_content_str": [ "{\"title\": \"An interesting take on an interesting subject\", \"rating\": \"7: Good paper, accept\", \"review\": \"This paper tackles disease progression modeling by training an RNN on an Electronic Health Record (ERH) dataset to predict disease diagnosis and medication prescription along with their timing.\\n\\nThis is an instance of a multilabel marked point process modeling task. The current two main classes of techniques used to solve the task are continuous-time Markov chain based models and intensity based point process modeling. The work presented in this paper distinguishes itself from these approaches by proposing a solution that is straightforward to generalize to nonlinear and multilabel settings and does not make assumptions about the data generation process.\", \"the_paper_makes_three_main_contributions\": [\"It claims to obtain good performance for recall @10 and @30 on the task.\", \"It proposes an efficient initialization scheme for RNNs using Skip-gram embedding, which improves accuracy and speed. (Note: it is unclear to me what speed means in this context; is it convergence speed?)\", \"It shows that the features learned by the model are useful in a transfer learning context, where the trained model is used to initialize a model trained on a smaller dataset coming from a different health institution.\", \"The task is clearly explained and contextualized, and the real-world benefits (facilitate patient-specific care and timely intervention, reduce healthcare cost) are well justified. The main contributions are well outlined.\", \"The paper claims that the model performs with similar accuracy to physicians, but doesn't seem to present evidence to back it up. From an outsider's perspective, I wonder why recall is the only performance measure considered. Aren't we also interested in reducing false positives? Still from an outsider's perspective, it's hard to evaluate how high of a bar the other baselines represent. It doesn't seem like any comparison is made with current approaches to solving multilabel marked point process modeling tasks.\"], \"in_summary\": [\"Well written\", \"Concrete real-world applicability\", \"Application of RNNs to a new class of problems\", \"Task clearly explained and well contextualized\", \"Claim that the model performs with similar accuracy to physicians does not appear to be backed up by evidence\", \"Does not appear to compare against current approaches to solving the task\", \"Unclear whether the performance results are for the training set or for a held-out test set\", \"I think that the work presented in this paper is novel and interesting enough that it should be accepted, despite the concerns I have with the performance measures.\"], \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"rating\": \"7: Good paper, accept\", \"review\": \"This paper presents an applications of RNNs to predict \\\"clinical events\\\", such as disease diagnosis and medication prescription and their timing.\\n\\nThe paper proposes/suggests:\\n1. Applying an RNN to disease diagnosis, medication prescription and timing prediction.\\n\\n2. 
\\\"Initializing\\\" the neural net with skipgrams instead of one-hot vectors. However, it seems from the description that the authors are not \\\"initializing\\\", rather just feeding a different feature vector into the RNN.\\n\\n3. Initializing a model that is to be trained on a small corpus from a model trained on a large corpus works. Concludes: information can be transferred between models (read across hospitals).\", \"claims\": \"1. Better recall @10 and @30 on the task.\\n2. Improved speed due to the skipgram initialization (I'm assuming this is convergence speed)\", \"what_i_like_about_this_paper\": \"1. Well written.\\n2. It's a neat application. Practically minded and the conclusions would have practical consequences.\", \"what_i_think_can_be_improved\": \"1. More thorough experiments, comparisons to continuous time markov chain models and intensity point processes.\\n2. More detailed results: I would like to have a better view of the data, in particular, I would like an explanation for why \\\"most freq visits\\\" is performing so well.\\n3. A clearer description of the problem. For instance, the authors suggest that continuous time Markov models were used for similar tasks, but it seems like their task is discrete time. If not, how are precision and recall measured here.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"An excellent workshop-track paper on medical event prediction.\", \"rating\": \"9: Top 15% of accepted papers, strong accept\", \"review\": \"This submission describes an application of recurrent neural networks to sequence prediction in electronic health records. The core ideas and techniques are not novel, but the applications work itself is compelling and very interesting. This results are good, the baselines sensible, and the explanation clear. I think this kind of work is an ideal submission to the ICLR workshop track.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
gZ9OoJWPnsAPowrRUAoP
A constrained l1 minimization approach for estimating multiple Sparse Gaussian or Nonparanormal Graphical Models
[ "Beilun Wang", "Ritambhara Singh", "Yanjun Qi" ]
The flood of multi-context measurement data from many scientific domains has created an urgent need to reconstruct context-specific variable networks, which could significantly simplify network-driven studies. Computationally, this problem can be formulated as jointly estimating multiple different, but related, sparse Undirected Graphical Models (UGM) from samples aggregated across several contexts. Previous joint-UGM studies could not address this challenge since they mostly focus on Gaussian Graphical Models (GGM) and have used likelihood-based formulations to infer multiple graphs toward a common pattern. In contrast, we propose a novel approach, SIMULE (learning Shared and Individual parts of MULtiple graphs Explicitly), to solve multi-task UGM using an $\ell_1$-constrained optimization. SIMULE is cast as independent subproblems of linear programming that can be solved efficiently. It automatically infers specific dependencies that are unique to each context as well as shared substructures preserved among all the contexts. SIMULE can handle both multivariate Gaussian and multivariate Nonparanormal data, which greatly relaxes the normality assumption. Theoretically we prove that SIMULE achieves a consistent result at rate $O(\log(Kp)/n_{tot})$ (not proved before). On four synthetic datasets, SIMULE shows significant improvements over state-of-the-art multi-sGGM and single-UGM baselines.
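For readers unfamiliar with constrained $\ell_1$ estimators, the following CLIME-style formulation sketches what "learning shared and individual parts explicitly via constrained $\ell_1$ minimization" can look like for $K$ contexts with sample covariances $\hat{\Sigma}_i$; the exact constants, the $\epsilon$ weighting, and the handling of the nonparanormal case are assumptions here and may differ from the paper.

    \min_{\Omega^{(1)},\dots,\Omega^{(K)},\,\Omega^{(s)}}\;
        \sum_{i=1}^{K} \bigl\|\Omega^{(i)}\bigr\|_1 \;+\; \epsilon K \bigl\|\Omega^{(s)}\bigr\|_1
    \quad\text{s.t.}\quad
        \bigl\|\hat{\Sigma}_i\bigl(\Omega^{(i)} + \Omega^{(s)}\bigr) - I\bigr\|_\infty \le \lambda_n,
        \qquad i = 1,\dots,K.

Each column of the constraint decouples across variables, so the problem splits into independent linear programs (after the usual positive/negative variable split), which is what allows efficient, parallel solution by standard LP solvers.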
[ "simule", "multiple sparse gaussian", "nonparanormal graphical models", "studies", "ugm", "multiple graphs", "flood", "measurement data", "many scientific domains" ]
https://openreview.net/pdf?id=gZ9OoJWPnsAPowrRUAoP
https://openreview.net/forum?id=gZ9OoJWPnsAPowrRUAoP
ICLR.cc/2016/workshop
2016
{ "note_id": [ "MwV0204Xycqxwkg1t7GG", "BNYzx9p0LU7PwR1riXvZ", "r8lKJXQW6H8wknpYt5G9" ], "note_type": [ "review", "review", "official_review" ], "note_created": [ 1458102857458, 1458175488928, 1458175364867 ], "note_signatures": [ [ "ICLR.cc/2016/workshop/paper/148/reviewer/12" ], [ "ICLR.cc/2016/workshop/paper/148/reviewer/11" ], [ "~Kevin_Patrick_Murphy1" ] ], "structured_content_str": [ "{\"title\": \"A new graphical model dealing with multi-view data\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"This paper proposed a novel graphical model that jointly reasons models in a multi-task setting. The proposed work extended Cai et al. 2011 under multiple context. Theoretical convergence guarantee is given (proof is not provided). Experimental results over synthetic data shows the effectiveness and efficiency.\\n\\nOverall the paper is technically sound (correctness is not checked carefully). However, lack of proof and experiments on real data make it less convincing.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"multi-task graphical lasso with shared common graph\", \"rating\": \"3: Clear rejection\", \"review\": \"This paper provides an extension to graphical lasso to handle the case where there is one graph per data source, plus a shared sparse graph. (Rather than maximize likelihood subject to a sparsity constraint, they minimize density subject to a likelihood constraint, but this seems like a minor technical detail.) The objective is convex, they solve it with linear programming. They show a toy experiment where they generate two GGMs, and sample data from them, and then try to recover the structure. Their method outperforms (in terms of the ROC curve for edge recovery) various other methods.\\n\\nAlthough the objective function is elegant and possibly novel, the contribution is still very small: there is no algorithmic novelty (standard LP solvers are used, no discussion of scalability), and only toy experiments on synthetic data are presented. Finally, I don't think this topic fits well with ICLR. (They have also submitted a version to ECML-PKDD 2016 journal-track, which may be a better fit.)\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"multi-task graphical lasso\", \"rating\": \"3: Clear rejection\", \"review\": \"This paper provides an extension to graphical lasso to handle the case where there is one graph per data source, plus a shared sparse graph. (Rather than maximize likelihood subject to a sparsity constraint, they minimize density subject to a likelihood constraint, but this seems like a minor technical detail.) The objective is convex, they solve it with linear programming. They show a toy experiment where they generate two GGMs, and sample data from them, and then try to recover the structure. Their method outperforms (in terms of the ROC curve for edge recovery) various other methods.\\n\\nAlthough the objective function is elegant and possibly novel, the contribution is still very small: there is no algorithmic novelty (standard LP solvers are used, no discussion of scalability), and only toy experiments on synthetic data are presented. Finally, I don't think this topic fits well with ICLR. 
(They have also submitted a version to ECML-PKDD 2016 journal-track, which may be a better fit.)\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
ROVmGqlgmhvnM0J1IpNq
Sequence Modeling with Recurrent Tensor Networks
[ "Richard Kelley" ]
We introduce the recurrent tensor network, a recurrent neural network model that replaces the matrix-vector multiplications of a standard recurrent neural network with bilinear tensor products. We compare its performance against networks that employ long short-term memory (LSTM) units. Our results demonstrate that using tensors to capture the interactions between network inputs and history can lead to substantial improvement in predictive performance on the language modeling task.
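A bilinear tensor recurrence of the kind described above can be written in a few lines of NumPy; the sizes are arbitrary and the gated variant evaluated in the paper is omitted, so this only illustrates the tensor product that replaces the usual matrix-vector term.

    import numpy as np

    rng = np.random.default_rng(1)
    d_in, d_h = 16, 32

    W = rng.normal(scale=0.1, size=(d_h, d_in, d_h))  # one d_in x d_h slice per hidden unit
    U = rng.normal(scale=0.1, size=(d_h, d_in))       # ordinary input-to-hidden weights
    b = np.zeros(d_h)

    def step(x_t, h_prev):
        # Bilinear term: for every hidden unit k, x_t^T W[k] h_prev.
        bilinear = np.einsum('i,kij,j->k', x_t, W, h_prev)
        return np.tanh(bilinear + U @ x_t + b)

    h = np.zeros(d_h)
    for x_t in rng.normal(size=(10, d_in)):  # a toy length-10 input sequence
        h = step(x_t, h)
    print(h.shape)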
[ "networks", "sequence", "recurrent tensor", "recurrent tensor network", "multiplications", "bilinear tensor products", "performance" ]
https://openreview.net/pdf?id=ROVmGqlgmhvnM0J1IpNq
https://openreview.net/forum?id=ROVmGqlgmhvnM0J1IpNq
ICLR.cc/2016/workshop
2016
{ "note_id": [ "VAVwE51RPix0Wk76TAQK", "k80YxLokNIOYKX7ji491" ], "note_type": [ "review", "review" ], "note_created": [ 1457650374184, 1457629014139 ], "note_signatures": [ [ "ICLR.cc/2016/workshop/paper/135/reviewer/12" ], [ "ICLR.cc/2016/workshop/paper/135/reviewer/11" ] ], "structured_content_str": [ "{\"title\": \"Authors propose using an extension to LSTMs using bilinear tensor products and evaluate it on character-level language modeling task. The results don't seem to be convincing that the method is really useful.\", \"rating\": \"3: Clear rejection\", \"review\": \"Quick Googling of related methods pointed me to \\\"Modeling Compositionality with Multiplicative Recurrent Neural Networks\\\" by Irsoy & Cardie, which was published last year at ICLR. They also use bilinear tensor products in the context of recurrent neural networks. Authors of this paper extend the idea for LSTMs where each matrix product is replaced by bilinear tensor products.\\n\\nThe experiments were run on the w 100M bytes of English Wikipedia. The results seem to imply that the proposed method (called GRTN) is significantly better than regular LSTM. It appears, however, that the GRTN uses significantly more parameters than the other approaches which makes the comparisons not necessarily valid. What is more, the log likelihoods of all methods seem to be significantly worse than in \\\"Generating Text with Recurrent Neural Networks\\\" by Sutskever et al, which was published in 2011.\", \"pros\": [\"I haven't seen other people trying to use LSTMs with bilinear tensor products, which might be an interesting extension\", \"Cons\", \"the experimental section is very lacking\", \"the paper misses references to important related work\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"rating\": \"2: Strong rejection\", \"review\": \"This paper introduces the multiplicative (or tensor) recurrent neural networks for sequence modeling and does a preliminary evaluation on the task of language modeling.\\n\\nMain issue of the paper is the core idea (tensor RNN) is not novel, and there are no citations to the papers employing similar ideas:\\n\\n(1) Generating Text with Recurrent Neural Networks by Sutskever et al (ICML 2011)\\n(2) Modeling Compositionality with Multiplicative Recurrent Neural Networks by Irsoy & Cardie (ICLR 2015)\\n\\nCombination of bilinear product with the gated RNNs is, to my knowledge, novel, however the paper is not structured around that idea as its main contribution.\\n\\nAnother issue is the weakness of experimentation, as well as it being unfair in terms of parameter size (which is already addressed by the author). To my understanding, the author compares models with a single set of hyperparameters, therefore the results are also prone to the randomness due to this choice. There should be some degrees of freedom for hyperparameter tuning to reduce this randomness and make a fairer comparison.\", \"pros\": \"(1) Gated tensor RNN idea is novel.\", \"cons\": \"(1) The main contribution (tensor RNN) idea is not novel. Lack of citations to relevant papers that used tensor RNNs. (2) Experimentation is weak and unfair in terms of sizes of the models being compared. Thus the results are not conclusive or convincing.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
1WvOZmpo0HMnPB1oinGx
Distinct Class Saliency Maps for Multiple Object Images
[ "Wataru Shimoda", "Keiji Yanai" ]
This paper proposes a method to obtain more distinct class saliency maps than Simonyan et al. (2014). We made three improvements over their method: (1) using CNN derivatives with respect to feature maps of the intermediate convolutional layers (with up-sampling) rather than with respect to the input image; (2) subtracting saliency maps of the other classes from saliency maps of the target class to differentiate target objects from other objects; (3) aggregating multi-scale class saliency maps to compensate for the lower resolution of the feature maps.
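Improvement (2), suppressing evidence for the other classes, can be sketched with any differentiable classifier. For brevity the gradient below is taken with respect to the input image rather than an intermediate feature map, and the multi-scale aggregation of improvement (3) is left out; `model`, the class index, and the image tensor are placeholders.

    import torch

    def class_difference_saliency(model, image, target_class):
        """Gradient of (target score minus mean score of other classes) w.r.t. the input."""
        x = image.detach().clone().unsqueeze(0).requires_grad_(True)  # (1, C, H, W)
        scores = model(x)[0]                                          # (num_classes,)
        others = torch.cat([scores[:target_class], scores[target_class + 1:]])
        (scores[target_class] - others.mean()).backward()
        return x.grad[0].abs().max(dim=0).values  # max over colour channels

    # usage (hypothetical): sal = class_difference_saliency(cnn, img, target_class=7)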
[ "saliency maps", "multiple object images", "simonyan et al", "improvements", "cnn derivatives", "respect", "maps", "intermediate convolutional layers" ]
https://openreview.net/pdf?id=1WvOZmpo0HMnPB1oinGx
https://openreview.net/forum?id=1WvOZmpo0HMnPB1oinGx
ICLR.cc/2016/workshop
2016
{ "note_id": [ "5QzBNOovjCZgXpo7i3xE", "P7VOZyVqzuKvjNORtJ13" ], "note_type": [ "review", "review" ], "note_created": [ 1457663887282, 1457378518867 ], "note_signatures": [ [ "ICLR.cc/2016/workshop/paper/174/reviewer/10" ], [ "ICLR.cc/2016/workshop/paper/174/reviewer/11" ] ], "structured_content_str": [ "{\"title\": \"This paper contributes in the sense of obtaining high level object saliency map by using CNN and semantic information.\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"Pros:\\nNovelty&Significance: I think this paper is interesting in the sense of combining the semantic information into traditional saliency problems under the weakly supervised scenario. It computes local contrast with CNN feature and then output a semantic mask of the targeting object. Actually, I think it is a good complementary to current strong supervised saliency detection such as bounding box salient object detection, salient object segmentation.\", \"clarity\": \"The paper is clearly descriptive and easy to understand.\", \"cons\": \"It lacks of numerical comparison and evaluation, with some standard criteria. \\n\\n Currently, rather than saliency, it is more close to weakly supervised segmentation. So maybe the authors need to refer to some weakly supervised segmentation works using CNN, e.g. Chen et.al ICCV 2015. \\n\\n For general saliency. I am not clear whether it can transfer to unknown object domain, which could also be interesting.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"This paper proposes a method to produce class-specific saliency maps by combining gradient-like values from different layers of the network.\", \"rating\": \"3: Clear rejection\", \"review\": \"The objective of this paper is to improve the saliency map generation approach of Simonyan et al. 2014.\", \"strengths\": [\"The output look significantly better than Simonyan et al. 2014\", \"The submission is well written and easy to follow\"], \"weaknesses\": [\"The paper contains no evidence or argument for why the proposed method produces maps that cover salient objects and not non-salient objects\", \"Even though some saliency benchmarks exist, the submission does not provide any numerical results\", \"The submission proposes a method that significantly improves the saliency maps produced by Simonyan et al. 2014. This reviewer, however, does not think that progress in this direction is generally useful. The task of image saliency is not well-defined, in general or in this paper. It's not clear that the proposed method captures \\\"saliency\\\" even if one tries to define it: it is not demonstrated that the method highlights objects that are considered salient while leaving objects that are not considered salient dark. There do exist image saliency benchmarks (e.g., salicon), but no numerical results are presents so it's not clear that the method improves anything.\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
SJGfklStl
Exploring the role of deep learning for particle tracking in high energy physics
[ "Mayur Mudigonda", "Dustin Anderson", "Jean-Roch Vilmant", "Josh Bendavid", "Maria Spiropoulou", "Stephan Zheng", "Aristeidis Tsaris", "Giuseppe Cerati", "Jim Kowalkowski", "Lindsey Gray", "Panagiotis Spentzouris", "Steve Farrell", "Jesse Livezey", "Prabhat", "Paolo Calafiura" ]
Tracking particles in a collider is a challenging problem due to collisions, imperfections in sensors and the nonlinear trajectories of particles in a magnetic field. Presently, the algorithms employed to track particles are best suited to capture linear dynamics. We believe that incremental optimization of current LHC (Large Hadron Collider) tracking algorithms has reached the point of diminishing returns. These algorithms will not be able to cope with the 10-100x increase in HL-LHC (high luminosity) data rates anticipated to exceed O(100) GB/s by 2025, without large investments in computing hardware and software development or without severely curtailing the physics reach of HL-LHC experiments. An optimized particle tracking algorithm that scales linearly with LHC luminosity (or events detected), rather than quadratically or worse, may lead by itself to an order of magnitude improvement in the track processing throughput without affecting the track identification performance, hence maintaining the physics performance intact. Here, we present preliminary results comparing traditional Kalman filtering based methods for tracking versus an LSTM approach. We find that an LSTM based solution does not outperform a Kalman filter based solution, arguing for exploring ways to encode a priori information.
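For reference, the Kalman-filter baseline mentioned above is the standard linear-Gaussian predict/update recursion; the sketch below is a generic textbook filter with a toy position-velocity state, not the tuned track-fitting configuration used in the experiments.

    import numpy as np

    def kalman_step(x, P, z, F, H, Q, R):
        """One predict/update step of a linear Kalman filter."""
        x_pred = F @ x
        P_pred = F @ P @ F.T + Q                      # predict
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)           # Kalman gain
        x_new = x_pred + K @ (z - H @ x_pred)         # update with measurement z
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new

    F = np.array([[1.0, 1.0], [0.0, 1.0]])            # position-velocity model, unit time step
    H = np.array([[1.0, 0.0]])                        # only the position is measured
    x, P = np.zeros(2), np.eye(2)
    x, P = kalman_step(x, P, np.array([0.8]), F, H, Q=0.01 * np.eye(2), R=np.array([[0.1]]))
    print(x)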
[ "particles", "algorithms", "role", "deep learning", "particle tracking", "high energy physics", "lstm", "solution", "collider", "challenging problem due" ]
https://openreview.net/pdf?id=SJGfklStl
https://openreview.net/forum?id=SJGfklStl
ICLR.cc/2017/workshop
2017
{ "note_id": [ "HkzPutpog", "Bk-2giLoe", "SkeHkD1sg" ], "note_type": [ "comment", "official_review", "official_review" ], "note_created": [ 1490028633741, 1489576104749, 1489100599764 ], "note_signatures": [ [ "ICLR.cc/2017/pcs" ], [ "ICLR.cc/2017/workshop/paper151/AnonReviewer2" ], [ "ICLR.cc/2017/workshop/paper151/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"title\": \"ICLR committee final decision\"}", "{\"title\": \"marginal topic, partial results\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"The particular topic of high energy physics would likely be of marginal interest to most attendees, though I might be wrong about this. What's more the paper does not present any results improving on current state of the art based on analytical techniques.\\n\\nSince the authors' stated goal is to seek advice and input from the learning community at large, it would seem that they would be better served by attending ICLR and striking up conversations with researchers with relevant experience rather than organising a topical workshop.\", \"confidence\": \"1: The reviewer's evaluation is an educated guess\"}", "{\"title\": \"Preliminary work\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"* The authors seem to acknowledge the premature nature of this submission in the last sentence of discussion, and focus on seeking advice. I scored the paper in standard way.\\n\\nThe paper evaluates LSTM vs kalman filter on particle tracking problem in high-energy physics, using simulated data. The paper presents a well-motivated problem, but provides no significant results/novel insights.\", \"pros\": \"-Well-motivated problem\\n-Part of interesting open project: https://heptrkx.github.io/\", \"cons\": \"-Experimental results are not surprising nor conclusive. Imperfect training, insufficient network capacity or expressivity, overfitting, limited data, etc. can easily cause LSTM to underperform best kalman filter with expert knowledge. Experimental descriptions do not show sufficient depths in evaluation.\\n-The motivated problem/solution is not novel. There are a number of prior work that can be cited for nonlinear state estimation with neural network or combining prior with neural net. Structured VAE (Johnson et. al., 2016), deep KF (Krishnan et. al., 2015), for example, explored incorporating structured prior with rich neural network parametrized observation model. Backprop KF (Haarnoja et. al., 2016) combined discriminative training into state estimation and avoided some problems of these generative model papers. \\n-Run-time of different implementations should be detailed with scalability analysis, as that is one of the main motivations.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
BJNXJgVKg
Similarity preserving compressions of high dimensional sparse data
[ "Raghav Kulkarni", "Rameshwar Pratap" ]
The rise of the internet has resulted in an explosion of data consisting of millions of articles, images, songs, and videos. Most of this data is high dimensional and sparse, where the standard compression schemes, such as LSH, become inefficient due to at least one of the following reasons: (1) the compression length is nearly linear in the dimension and grows inversely with the sparsity; (2) the randomness used grows linearly with the product of dimension and compression length. We propose an efficient compression scheme mapping binary vectors into binary vectors and simultaneously preserving Hamming distance and Inner Product. Our schemes avoid all the above-mentioned drawbacks for high dimensional sparse data. The length of our compression depends only on the sparsity and is independent of the dimension of the data, and our schemes work in the streaming setting as well. We generalize our scheme for real-valued data and obtain compressions for Euclidean distance, Inner Product, and k-way Inner Product.
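The flavour of such a scheme can be illustrated with a simple bucket-and-sign sketch: hash each coordinate to one of N buckets, attach a random sign, and sum within each bucket, so that inner products are preserved in expectation and the output length depends only on the chosen bucket count. The bucket count, the ±1 variant, and the dense NumPy representation below are illustrative choices, not the paper's exact construction or guarantees.

    import numpy as np

    def compress(x, n_buckets, seed=0):
        """Bucket-and-sign sketch; the same seed must be used for all vectors."""
        rng = np.random.default_rng(seed)
        bucket = rng.integers(0, n_buckets, size=x.shape[0])
        sign = rng.choice([-1.0, 1.0], size=x.shape[0])
        out = np.zeros(n_buckets)
        np.add.at(out, bucket, sign * x)   # scatter-add each coordinate into its bucket
        return out

    d, n_buckets = 100_000, 512
    rng = np.random.default_rng(42)
    x = np.zeros(d)
    x[rng.choice(d, 200, replace=False)] = 1.0      # sparse binary vector
    y = x.copy()
    y[rng.choice(d, 100, replace=False)] = 1.0      # overlaps heavily with x
    # The sketched inner product is an unbiased (but noisy) estimate of the true one.
    print(np.dot(x, y), np.dot(compress(x, n_buckets), compress(y, n_buckets)))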
[ "Theory" ]
https://openreview.net/pdf?id=BJNXJgVKg
https://openreview.net/forum?id=BJNXJgVKg
ICLR.cc/2017/workshop
2017
{ "note_id": [ "B1sbDREix", "Syxba3Sjx", "ByvQdKTix" ], "note_type": [ "official_review", "official_review", "comment" ], "note_created": [ 1489458947025, 1489517815704, 1490028574938 ], "note_signatures": [ [ "ICLR.cc/2017/workshop/paper60/AnonReviewer2" ], [ "ICLR.cc/2017/workshop/paper60/AnonReviewer1" ], [ "ICLR.cc/2017/pcs" ] ], "structured_content_str": [ "{\"title\": \"Lack of novelty\", \"rating\": \"3: Clear rejection\", \"review\": \"I feel the proposed approach is just a special case of the well-known FJLT, where a sparse +-1 random matrix is used to multiply a signal efficiently while preserving inner products of signal vectors.\\n\\nThe only difference is that in the proposed approach the sampling is without replacement (i.e., one entry can only contribute to one bucket). I don't think it is an important difference. The theoretical results don't show why without replacement sampling matters either.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Clearly written but unsure of novelty\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"The paper proposes a technique to compress high-dimensional sparse vectors while preserving Hamming distance and inner products. The approach amounts to multiplying the sparse (say, column) vector by a sparse matrix with mutually orthogonal binary or +/-1-valued rows.\\n\\nThe work reads well and is clearly presented. However, the work fails to mention directly related approaches such as the Sparse JL transform or the Fast JL transform. From my understanding these approaches share most (all?) of the benefits of the proposed approach, so I have concerns about the novelty. At minimum a discussion about the differences/tradeoffs compared to these prior techniques is required.\\n\\nI must say though that I am not very familiar with this area or the mentioned approaches, so it is difficult for me to fully evaluate novelty.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"decision\": \"Reject\", \"title\": \"ICLR committee final decision\"}" ] }
r1FbV6NYe
A Priori Modeling of Information and Intelligence
[ "Marcus Abundis" ]
This workshop explores primitive structural fundaments in information, and then intelligence, as a model of ‘thinking like nature’ (natural informatics). It examines the task of designing a general adaptive intelligence from a low-order (non-anthropic) perspective, to arrive at a least-ambiguous and most-general computational/developmental foundation.
[ "information", "intelligence", "priori modeling", "workshop", "primitive structural fundaments", "model", "nature", "natural informatics", "task", "general adaptive intelligence" ]
https://openreview.net/pdf?id=r1FbV6NYe
https://openreview.net/forum?id=r1FbV6NYe
ICLR.cc/2017/workshop
2017
{ "note_id": [ "S1qqvq55x", "HyQv2clse", "rytdAkjqx", "HyeHBOYail", "HyJOXJ4sg", "B1a3jcKqg" ], "note_type": [ "comment", "official_review", "official_comment", "comment", "comment", "official_review" ], "note_created": [ 1488787346259, 1489181787005, 1488809585344, 1490028605336, 1489396582735, 1488722869160 ], "note_signatures": [ [ "~Marcus_Abundis1" ], [ "ICLR.cc/2017/workshop/paper108/AnonReviewer3" ], [ "ICLR.cc/2017/workshop/paper108/AnonReviewer2" ], [ "ICLR.cc/2017/pcs" ], [ "~Marcus_Abundis1" ], [ "ICLR.cc/2017/workshop/paper108/AnonReviewer2" ] ], "structured_content_str": [ "{\"title\": \"Who then, works on *general intelligence*?\", \"comment\": \"Thank you for your comment. As my work stresses the modeling of *general* intelligence, it is necessarily 'broad' in its presentation. Still, I take your note to indicate ICLR does not cover general intelligence, despite obvious 'learning representation' issues. If the reviewer knows of more appropriate venues for submitting work on *general intelligence* I would be truly grateful to hear of them. Thank you for your consideration!\"}", "{\"title\": \"Not a good fit for ICLR & lacking references\", \"rating\": \"3: Clear rejection\", \"review\": \"This paper explores the idea of artificial general intelligence and how it can broadly be achieved.\\n\\nWhile interesting, the contribution seems to be too broad and vague to fit into the ICLR program on representation learning, since ICLR typically involves concrete challenges in and methods for learning representations. It seems that this work would be of greater interest either to cognitive science conferences or AI conferences (e.g. AAAI or IJCAI).\\n\\nPerhaps most importantly, the paper does not include any references to prior work in this area and how the ideas in the paper fit with such existing work. This is crucial for evaluating the usefulness of the ideas and placing the work among existing related literature.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Reply\", \"comment\": \"Representation learning here refers to a fairly specific set of technological issues. More suitable venues would be, depending on how you frame your work, either cognitive science venues like the Cognitive Science Society or applied AI venues like AAAI or IJCAI.\\n\\nThere's another issue here, though. You're not the first (or the ten-thousandth) person to write about issues like these. Unless your work is self-evidently novel and important in a way that's almost never the case, you *need* to situate your work in the context of specific open questions that are being actively studied within the communities that you're submitting your work to. Any academic conference will take a lack of recent citations as a big red flag.\"}", "{\"decision\": \"Reject\", \"title\": \"ICLR committee final decision\"}", "{\"title\": \"References Added . . .\", \"comment\": \"With the understanding that the paper remains too broad for ICLR, I have none-the-less added the missing references and a short section that discusses the current literature. Thank you for your consideration!\"}", "{\"title\": \"Way outside the scope of ICLR\", \"rating\": \"2: Strong rejection\", \"review\": \"This paper proposes *a workshop* on information, intelligence, evolution, subjectivity and their relationship. It's coming from an unaffiliated researcher.\\n\\nI don't see any specific proposals that I disagree with, but I think this is straightforwardly inappropriate for ICLR. 
The paper doesn't discuss any concrete issues involving representation learning in the ICLR sense, and is too broad to be meaningfully evaluated or used by the community.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
SyuncaEKx
Adversarial Autoencoders for Novelty Detection
[ "Valentin Leveau", "Alexis Joly" ]
In this paper, we address the problem of novelty detection, i.e., recognizing at test time if a data item comes from the training data distribution or not. We focus on Adversarial autoencoders (AAE), which have the advantage of explicitly controlling the distribution of the known data in the feature space. We show that when they are trained in a (semi-)supervised way, they provide consistent novelty detection improvements compared to a classical autoencoder. We further improve their performance by introducing an explicit rejection class in the prior distribution coupled with random input images to the autoencoder.
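Once an encoder and decoder are available, the novelty criteria discussed in this record (reconstruction error and the prior density of the latent code; an explicit rejection-class probability would be a third additive term) can be combined into a single score. `encode`, `decode`, the Gaussian prior parameters, and the weights below are placeholders, not the trained AAE from the paper.

    import numpy as np

    def novelty_score(x, encode, decode, prior_mean, prior_cov, w=(1.0, 1.0)):
        """Higher score = more likely to be novel; encode/decode are assumed callables."""
        z = encode(x)
        recon_err = np.mean((decode(z) - x) ** 2)
        diff = z - prior_mean
        # Mahalanobis term = negative log-density of z under the AAE's Gaussian prior,
        # up to an additive constant.
        nll = 0.5 * diff @ np.linalg.solve(prior_cov, diff)
        return w[0] * recon_err + w[1] * nll

    # usage (hypothetical): s = novelty_score(img.ravel(), enc_fn, dec_fn, np.zeros(2), np.eye(2))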
[ "Deep learning", "Unsupervised Learning", "Semi-Supervised Learning" ]
https://openreview.net/pdf?id=SyuncaEKx
https://openreview.net/forum?id=SyuncaEKx
ICLR.cc/2017/workshop
2017
{ "note_id": [ "B19SegZje", "Skolqvtig", "H1KCtvFjg", "Hkor7qlil", "BkPHOKaix" ], "note_type": [ "official_review", "comment", "comment", "official_review", "comment" ], "note_created": [ 1489203266544, 1489758707402, 1489758672887, 1489179459036, 1490028606884 ], "note_signatures": [ [ "ICLR.cc/2017/workshop/paper112/AnonReviewer2" ], [ "~Valentin_Leveau2" ], [ "~Valentin_Leveau2" ], [ "ICLR.cc/2017/workshop/paper112/AnonReviewer1" ], [ "ICLR.cc/2017/pcs" ] ], "structured_content_str": [ "{\"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"This paper uses adversarial auto-encoders for the purposes of novelty detection - detecting outliers that do not belong to the training data distribution. Three criteria are developed for determining novelty: reconstruction error, 1 - probability under the latent prior, and probability of belonging to an explicit rejection class. The idea is interesting, but there are no baselines comparing to other methods in the literature, so it's unclear exactly how good the proposed approach is. What about simply training a generative model like a VAE and evaluating the approximate log-likelihood with some threshold? Or perhaps using the discriminator in a GAN to determine if a test point is real or fake data? I'm sure there are other good baselines in the literature, some of which are cited in the introduction.\\n\\nI would also recommend applying this to other datasets than MNIST, or even a synthetic dataset.\", \"there_is_a_typo_in_the_paragraph_at_the_end_of_section_2\": \"P(f(x) | y(x = 0)) => P(f(x) | y(x) = 0).\\n\\nIn the Gaussian mixture, does C_i refer to component i? I think this doesn't refer to class i in the sense of supervision, but it's not entirely clear.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Answers to Reviewer 2\", \"comment\": \"Thank you very much for your feedback and recommendations. Here are a few comments and answers to them:\\n\\n\\u201cAlthough this is the first occurrence of adversarial auto-encoder used for novelty detection, using auto-encoder based approaches for this application is not novel.\\u201d\\n\\u2192 You are absolutely right. This is precisely the purpose of the paper: showing how the adversarial training can improve the baseline auto-encoder by enforcing a known prior distribution in the latent space. And we did show in this preliminary study that it can potentially improve a lot. \\n\\n\\n\\u201cThe description of the experiments and the model seems clear, except some part like defining the novelty rate when using explicit rejection class.\\u201d\\n\\u2192 You are right. We forgot to give that precision in the paper. The novelty rate is defined according to the proportion of noise images added in the training set. In our experiments, we used as much noisy images as the initial number of images in the training set (60K images). So that, in Equation 3, we used p(y=0)=0.5 \\n\\n\\u201cClaims like \\\"This might be related to the fact that, whatever the used prior distribution, randomly generated images are distributed according to a normal distribution at the center of the feature space (because of the central limit theorem)\\\" needs to be explained if accurate or relevant.\\u201d \\n\\u2192 Let x be the random input image(s) and f_w(x) the activation function of a neuron in the latent space of the auto-encoder. 
Since the last layer of the encoder is a linear layer, f_w(x) can be re-written as a sum of random variables (the activation values of the previous layer multiplied by a weight). As these random variables are obtained through a deterministic function of the i.i.d. random images x given as input of the network, they are themselves i.i.d. Consequently the central limit theorem applies and the f_w(x)\\u2019s are independently and approximately normally distributed. Now, we agree that the notion of \\u201ccenter of the feature space\\u201d is more discutable. A way to see it is to consider that the set of the real images of the training set is a specific sampling of the random images distribution. In that case, the mean of the f_w(x)\\u2019s for the real images of the training set can be considered as an estimator of the mean of the normal distribution. Consequently, the mean of the normal distribution is approximately equal to the mean of the prior distribution (what we call the center of the feature space). Note that, empirically, if you plot the features of the random images in the 2D latent space, you observe that phenomenon. \\nSince, we don\\u2019t have enough place to discuss that point in the paper, we suggest simply removing the sentence and keep this for a further work. \\n\\n\\u201cThe experimental procedure does not compare to simple baseline like mixture of Gaussians.\\u201d\\n\\u2192 Our goal was to show how the adversarial training can improve the baseline auto-encoder by enforcing a known prior distribution in the latent space. For a full paper submission (and not a 3 page workshop paper), we would surely have explored other baselines (e.g. GMM but also GAN, VAE, DAE, CAE, etc.). Our opinion is that a workshop track is well adapted to host preliminary studies contrary to conference tracks for which one can be more exigent in terms of experimental load. \\n\\n\\u201cMoreover, apart from visualization purpose, restricting the models to a 2D latent space is not well justified for the purpose of novelty detection.\\u201d\\n\\u2192 Sure. Our goal was to gain knowledge on the contribution of adversarial learning over a baseline autoencoder, not to win the performance race.\\n\\n\\u201cUsing accuracy as performance is not fully informative and the choice of thresholding remains arbitrary. Using confusion matrices and precision-recall curves might help understand more what is going on\\u201d\\n\\u2192 Actually, we did not use accuracy but Mean Average Precision (as explained in section \\u201cProtocol and settings\\u201d). The term \\u201caccuracy\\u201d does even not appear in the paper. Mean Average Precision does not involve any thresholding and is among the most fully informative metric summarizing the precision-recall curve. Investigating other metrics or plots would not be possible in a 3 pages paper.\"}", "{\"title\": \"Answers to Reviewer 1\", \"comment\": \"Thank you very much for your feedback and recommendations. Here are a few comments and answers to them:\\n\\n\\\"The idea is interesting, but there are no baselines comparing to other methods in the literature, so it's unclear exactly how good the proposed approach is.\\\"\\n\\u2192 Actually, there is a baseline: the reconstruction error of the (non-adversarial) auto-encoder. The objective of the paper was precisely to show how the adversarial training can improve this baseline by enforcing a known prior distribution in the latent space. 
\\n\\n\\\"What about simply training a generative model like a VAE and evaluating the approximate log-likelihood with some threshold?\\\"\\n\\u2192 We focused on adversarial auto-encoders because it has been shown in the paper of Makhzani et al. that they outperform VAE in the context of semi-supervised learning. We agree that it would be relevant to also make that comparison in the specific case of novelty detection. We will do that in the next few weeks.\\n\\n\\u201cOr perhaps using the discriminator in a GAN to determine if a test point is real or fake data?\\u201d\\n\\u2192 Actually, this was the first thing we tested before switching to variational auto-encoders (because it was not working well). After convergence, the discriminator of a GAN is not able to determine if a test point is real or fake data because the distribution of the fake data converges to the one of the real data. Thus, the discriminator strongly overfits and is near random on novelty detection. Using intermediate versions of the discriminator could be an option that we also experimented but that was theoretically and experimentally not convincing. \\n\\n\\u201cI'm sure there are other good baselines in the literature, some of which are cited in the introduction.\\u201d \\n\\u2192 For a full paper submission and not a 3 page workshop paper, we would surely have explored such other baselines.\"}", "{\"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"The paper proposes to use Adversarial Auto Encoders in the context of anomaly/novelty detection. They explore the use of different priors, semi-supervised learning and using an anomaly class.\\n\\nAlthough this is the first occurrence of adversarial auto-encoder used for novelty detection, using auto-encoder based approaches for this application is not novel. \\nThe description of the experiments and the model seems clear, except some part like defining the novelty rate when using explicit rejection class. Claims like \\\"This might be related to the fact that, whatever the used prior distribution, randomly generated images are distributed according to a normal distribution at the center of the feature space (because of the central limit theorem)\\\" needs to be explained if accurate or relevant. \\nThe experimental procedure does not compare to simple baseline like mixture of Gaussians. Moreover, apart from visualization purpose, restricting the models to a 2D latent space is not well justified for the purpose of novelty detection. Using confusion matrices and precision-recall curves might help understand more what is going on. \\n\\nAlthough the use of adversarial auto-encoder for anomaly detection might be worth exploring, the experimental procedure needs to be more rigorous in order to draw any conclusion.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"decision\": \"Reject\", \"title\": \"ICLR committee final decision\"}" ] }
Hkx-gCfYl
Coupling Distributed and Symbolic Execution for Natural Language Queries
[ "Lili Mou", "Zhengdong Lu", "Hang Li", "Zhi Jin" ]
In this paper, we propose to combine neural execution and symbolic execution to query a table in natural language. Our approach makes use of the differentiability of neural networks and transfers (imperfect) knowledge to the symbolic executor before reinforcement learning. Experiments show our approach achieves high learning efficiency, high execution efficiency, high interpretability, as well as high performance.
[ "symbolic execution", "natural language queries", "neural execution", "table", "natural languages", "differentiability", "neural networks", "transfers", "imperfect", "knowledge" ]
https://openreview.net/pdf?id=Hkx-gCfYl
https://openreview.net/forum?id=Hkx-gCfYl
ICLR.cc/2017/workshop
2017
{ "note_id": [ "BJEEAY-je", "BJ9QtLgoe", "BJwW6_Wog", "Hyaf_Fpjg", "HJPhaKbjg" ], "note_type": [ "comment", "official_review", "official_review", "comment", "comment" ], "note_created": [ 1489243691859, 1489164578056, 1489239294805, 1490028564588, 1489243566566 ], "note_signatures": [ [ "~Lili_Mou1" ], [ "ICLR.cc/2017/workshop/paper37/AnonReviewer1" ], [ "ICLR.cc/2017/workshop/paper37/AnonReviewer2" ], [ "ICLR.cc/2017/pcs" ], [ "~Lili_Mou1" ] ], "structured_content_str": [ "{\"title\": \"Thanks. Related work added.\", \"comment\": \"Thank you.\\n\\nSpecial thanks to the recommendation of the paper (https://arxiv.org/pdf/1511.04586.pdf), where the authors train neural attention with IBM Model 4. Our main idea works in an opposite way: we first make use of fully differentiable neural networks to learn meaningful (although imperfect) intermediate execution steps, and then guide an external symbolic system, which is more natural in our semantic parsing scenario.\\n\\nWe revised the paper with discussion at the end of Section 1. Due to page limitation, we had included more discussion in our extended version (Section 4 in https://arxiv.org/pdf/1612.02741.pdf); the suggested paper will also be discussed next time we update the arXiv version (i.e., in mini-batch fashion).\"}", "{\"title\": \"official review\", \"rating\": \"7: Good paper, accept\", \"review\": \"This paper proposes to combine the distributed and symbolic execution for natural language queries. Based on the finding that the symbolic executor's column selection generally aligns with the field attention of the distributed enquirer, the authors incorporate the symbolic executor to the loss of the distributed enquirer by augmenting a field attention cross entropy loss into the original loss. This information is also used in pre-train the policy for the REINFORCE algorithm. The experiments show by combining the distributed and symbolic execution this way, the model achieve better performance.\\n\\nI like the idea of incorporating the symbolic executor model into the neural model via attention. Similar ideas have been proposed in other papers too (for example https://arxiv.org/pdf/1511.04586.pdf -- section 2.6) It would be nice if the authors can refer more to the related works.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"official review\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": [\"Summary: the paper proposes to combine a distributed enquirer with a symbolic executor for the task of question answering. The idea is simple: the distributed enquirer is used for the policy initialization for the symbolic executer which is trained using REINFORCE. The proposed method outperforms the baseline SEMPRE on a QA dataset.\", \"Discussion:\", \"The paper is quite difficult to read, not because of the idea is complicated. Several details (e.g. math symbols) about the distributed enquirer can be safely omitted. Figure 1 hardly helps. It seems like the authors tried to shorten a long paper by \\\"copy and paste\\\".\", \"The experimental results are impressive. However, why didn't the authors choose Yin et al (2016b) as the baseline? Table 2e is unclear: are the results on the dev or test set? If they are on the dev set, I was surprised to see that the performance on the test set is even substantially higher than the dev set. 
If they are on the test set, I then have no idea why the accuracy 96.5 is not on table 2a.\", \"TL,DR; I think the idea and the experimental results are good enough, but the paper must be rewritten in a clearer way.\"], \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}", "{\"title\": \"Thanks. Paper Revised\", \"comment\": \"We thank the reviewer for constructive comments.\\n\\n- Equations (1) and (2) are highlighted to better demonstrate how the neural and symbolic worlds can be coupled. We have now saved some space and clarified the points raised by the reviewer.\\n\\nWe retained most experimental results in the paper because we still hope our 3-page workshop submission can be as interesting as possible. Our extended version could be found at: https://arxiv.org/pdf/1612.02741.pdf\\n\\n\\n- We did use Yin et al (2016b) as our baseline. And Tables 2a and 2e are test performance. \\n\\n96.5% in Table 2e is not included in Table 2a because 96.5% is achieved by our proposed coupling approach (after one-round co-training), whereas Table 2a's Distributed and Symbolic columns refer to either single of the world. \\n\\nBesides, the 96.4% performance is obtained with step-by-step supervision; therefore it's also not included in Table 2a (where only denotations are used for supervision). We have clarified these in the revised paper.\\n\\nThanks again for the review. We're also happy to further clarify and improve our paper should there be any problem.\"}" ] }
BJBkkaNYe
Training a Subsampling Mechanism in Expectation
[ "Colin Raffel", "Dieterich Lawson" ]
We describe a mechanism for subsampling sequences and show how to compute its expected output so that it can be trained with standard backpropagation. We test this approach on a simple toy problem and discuss its shortcomings.
[ "Theory", "Deep learning", "Structured prediction" ]
https://openreview.net/pdf?id=BJBkkaNYe
https://openreview.net/forum?id=BJBkkaNYe
ICLR.cc/2017/workshop
2017
{ "note_id": [ "BJikI5lsx", "HycScE4ie", "ByMpgjljl", "rJke6qlil", "HJXHdKpsx", "Hk4ip5esl" ], "note_type": [ "official_review", "official_comment", "comment", "comment", "comment", "official_review" ], "note_created": [ 1489180131221, 1489418818256, 1489182906462, 1489181927459, 1490028603003, 1489182107807 ], "note_signatures": [ [ "ICLR.cc/2017/workshop/paper105/AnonReviewer3" ], [ "ICLR.cc/2017/workshop/paper105/AnonReviewer3" ], [ "~Colin_Raffel1" ], [ "~Colin_Raffel1" ], [ "ICLR.cc/2017/pcs" ], [ "ICLR.cc/2017/workshop/paper105/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Reasonable start, but not there yet\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"This paper seeks to build a neural network component for subsampling data. The idea is to build a layer that takes as input a discrete sequence s_1, ..., s_T along with a probability e_t of keeping each element t of the sequence. The layer is meant to independently choose whether or not to keep each element s_t based on the probability e_t, and then to assemble the kept inputs into a subsequence. Rather than execute the layer by sampling, the paper proposes to instead compute a marginal distribution over the outputs under this model. It proposes a dynamic program that runs in O(T^3) time and evaluate the method on a simple toy problem.\\n\\nWhile the big idea seems reasonable and the paper is written clearly, I don't think it's developed enough to warrant publication at the workshop at this point. The main issues are as follows:\\n\\n- I'm not convinced the algorithm is optimal:\\n-- I'm not convinced the O(T^3) cost is necessary. I would think that a matrix of $output position$ x $input symbol$ could be computed in O(T^2) time using dynamic programming, and then after having computed this, the expected output could be computed in O(T^2) time by summing over the $input symbol$ dimension. Am I missing something?\\n-- The algorithm should be implemented in a numerically stable way using log-sum-exps.\\n\\n- The experiment is very simple and there are no baselines.\\n\\n- One motivation for subsampling is to shorten a sequence. However, under the marginalization approach, the sequence doesn't actually get shortened. Please discuss this.\", \"typos\": \"\\\"extented\\\"\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Ok, increasing score by 1\", \"comment\": \"Thanks for the reply. I'll bump up my score by 1.\"}", "{\"title\": \"Response\", \"comment\": \"Thanks for your detailed review! To address your comments individually:\\n\\n> The authors propose a dynamic programming algorithm with the computational complexity of O(T^3) and provide some results on toy task.\\n\\nIn fact, the dynamic program is O(T^2) - this was an error in the originally-posted version of the manuscript, which has been recently fixed. Sorry that it was not updated early enough for you to see this change.\\n\\n> The writing of the paper needs some work and the ideas need to be presented more clearly, though I understand that the space limitation makes it difficult to explain everything coherently. Especially, the notation is vague.\\n\\nThanks for this feedback. I will add the changes you suggested, and also try to clean up the notation. 
I think with a bit more additional space the exposition could be made better.\\n\\n> Actually, we have tried a very similar idea for character-level neural machine translation a few years ago in order to learn hierarchical alignments, but we never managed to make it work...\\n\\nVery cool that you were trying a similar idea! Would be interested in discussing further. When applying this approach and related ideas, we also experienced non-monotonic alignments and vanishing gradients. We have some current ongoing work for mitigating both issues.\\n\\n> In the first page, you say \\u201cU \\\\le T\\u201d, but neither U nor T has not defined anywhere in the document.\\n\\nThey are listed in the definitions of the input and output sequence, as s = {s0, s1, . . . , sT \\u22121}, y = {y0, y1, . . . , yU\\u22121}, but you are right that this could be made clearer; we will address this.\\n\\n> Please present a more precise probabilistic for e_t. You just say p(y_0=s_0) = e_0, but a more general formal definition would be useful.\\n\\nGood idea. The basic idea is that they e_t is the \\\"probability of including element s_t in the output sequence\\\". In practice, they are computed as a function of the network states. We will be sure this information is clear in the paper.\\n\\n> A figure or the visualization of the automata/algorithm that generates the task would be useful(perhaps in the appendix).\\n\\nWe do in fact have such a diagram. We will add it in an appendix.\\n\\n> This is more of a curious empirical question, but can this algorithm generalize to the sequences longer than the ones that it has been trained on?\\n\\nIn practice, we found that it was able to generalize. In particular, because of the curriculum learning strategy we employed, we found that it was able to learn the correct algorithm on short examples and apply them to longer examples.\\n\\n\\nThank you again for your comments. We hope we have addressed your concerns.\"}", "{\"title\": \"Responses\", \"comment\": \"Hi, thanks for your thorough review!\\n\\nTo immediately address your first concern, I believe you reviewed the initially posted version of this manuscript, not the most recent one - we recently updated it to reflect the error in stating that the dynamic program had cubic complexity. We apologize for not having the corrected version online early enough for you to consider it.\\n\\nTo address your second concern, this is indeed how it is implemented - you can see here in the example code posted along with the paper:\", \"http\": \"//nbviewer.jupyter.org/github/craffel/subsampling_in_expectation/blob/master/Subsampling%20in%20Expectation.ipynb#TensorFlow-example\\nIf you think it's appropriate, we can include this information in the manuscript.\\n\\nIn terms of the experiment, we appreciate the criticism that it is overly simple and there are no baselines. The purpose of this abstract was solely to propose the approach and show a proof-of-concept that it works; unfortunately, there was not sufficient space for further experiments.\\n\\nFinally, while as you suggest marginalization does not actually shorten the sequence as you say, it does have the effect of placing sequence elements closer together in the output sequence. 
For example, if the input sequence was\n[a, b, c, d, e]\nand the subsampling probabilities were\n[1, 0, 0, 1, 0]\nthen the expected output would be\n[a, d, 0, 0, 0]\nAs you say, this sequence is not shorter, but if there is an important dependency between a and d and the remaining symbols are distractors, the resulting time lag between them has been made substantially shorter. We tried to mention this effect in the bullet points at the beginning of the abstract, but we can try to make it clearer in later versions.\n\nThank you again for the review, we hope we have addressed your concerns!\"}", "{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}", "{\"title\": \"Interesting and plausible idea but needs more work.\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"Subsampling In Expectation\", \"summary\": \"This paper proposes a way to sample a shorter sequence y=(y_0, y_1, ..., y_t) from the input sequence x=(x_0, ..., x_k) according to the probabilities e=(e_0, ..., e_k). The authors propose a dynamic programming algorithm with the computational complexity of O(T^3) and provide some results on a toy task.\", \"a_general_comment\": \"The writing of the paper needs some work and the ideas need to be presented more clearly, though I understand that the space limitation makes it difficult to explain everything coherently. Especially, the notation is vague. However, the idea makes sense and it is correct in principle. Actually, we have tried a very similar idea for character-level neural machine translation a few years ago in order to learn hierarchical alignments, but we never managed to make it work. One main limitation we observed at the time for char-level NMT was that this kind of algorithm can only generate monotonic alignments, and for language pairs such as Ch-En or Tr-En where the alignments can be highly non-monotonic, we could not observe much improvement and also the vanishing gradients arising from the products and the sigmoids were crippling the training. Efficiency was also another issue for us. But the authors of this paper show that in principle this idea works in the toy cases; I guess the challenge remains to find the right architecture and a way to scale the algorithm to the right tasks.\", \"more_detailed_comments\": \"In the first page, you say \\u201cU \\\\le T\\u201d, but neither U nor T has been defined anywhere in the document.\\nPlease present a more precise probabilistic definition for e_t. You just say p(y_0=s_0) = e_0, but a more general formal definition would be useful.\\nA figure or the visualization of the automata/algorithm that generates the task would be useful (perhaps in the appendix).\\nThis is more of a curious empirical question, but can this algorithm generalize to sequences longer than the ones that it has been trained on?\\n\\nConclusion,\", \"pros\": [\"A simple algorithm to subsample the sequences.\", \"Interesting results on a toy task.\"], \"cons\": [\"The proposed algorithm is O(T^3), which is quite difficult to scale for long sequences and realistic tasks.\", \"The experiments are not convincing enough.\", \"The writing is not clear enough and needs some more work (this is a minor con).\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
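The exchange above only describes the expected-output computation in words, so here is a small NumPy sketch reconstructed from that description. It is not the authors' notebook code, and it uses plain probabilities rather than the numerically stable log-space form the authors mention; it implements the O(T^2) dynamic program over "number of elements kept so far" and reproduces the [a, d, 0, 0, 0] example from the author response.

```python
import numpy as np

def expected_subsampled_output(s, e):
    """Expected output of the subsampling layer, marginalized over all
    keep/drop decisions (kept elements are packed to the front, the
    remaining output positions receive zeros).

    s : (T, d) array of input vectors
    e : (T,)  array of independent keep probabilities
    """
    T, d = s.shape
    # kept[t, u] = P(exactly u of the first t inputs are kept)
    kept = np.zeros((T + 1, T + 1))
    kept[0, 0] = 1.0
    for t in range(T):
        kept[t + 1, 0] = kept[t, 0] * (1.0 - e[t])
        kept[t + 1, 1:] = kept[t, 1:] * (1.0 - e[t]) + kept[t, :-1] * e[t]
    # P(s_t lands at output position u) = e[t] * kept[t, u]
    y = np.zeros((T, d))
    for t in range(T):
        y += np.outer(e[t] * kept[t, :T], s[t])
    return y

# Reproduces the worked example from the author response above:
s = np.eye(5)                            # stand-in vectors for the symbols a..e
e = np.array([1.0, 0.0, 0.0, 1.0, 0.0])
print(expected_subsampled_output(s, e))  # rows: a, d, 0, 0, 0
```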
rkB_5hEKe
Classless Association using Neural Networks
[ "Federico Raue", "Sebastian Palacio", "Andreas Dengel", "Marcus Liwicki" ]
In this paper, we propose a model for the classless association between two instances of the same unknown class. This scenario is inspired by the Symbol Grounding Problem and the association learning in infants. Our model has two parallel Multilayer Perceptrons (MLPs) and relies on two components. The first component is a EM-training rule that matches the output vectors of a MLP to a statistical distribution. The second component exploits the output classification of one MLP as target of the another MLP in order to learn the agreement of the unknown class. We generate four classless datasets (based on MNIST) with uniform distribution between the classes. Our model is evaluated against totally supervised and totally unsupervised scenarios. In the first scenario, our model reaches good performance in terms of accuracy and the classless constraint. In the second scenario, our model reaches better results against two clustering algorithms.
[ "model", "mlp", "classless association", "unknown class", "neural networks", "instances", "scenario", "symbol grounding problem", "association learning" ]
https://openreview.net/pdf?id=rkB_5hEKe
https://openreview.net/forum?id=rkB_5hEKe
ICLR.cc/2017/workshop
2017
{ "note_id": [ "SyDke2lje", "HkD59HVsl", "S1fSuKTsg", "HJTkxXNsl", "SJsnwCMjl" ], "note_type": [ "official_review", "comment", "comment", "official_review", "comment" ], "note_created": [ 1489186783013, 1489422990917, 1490028602119, 1489412069411, 1489328050953 ], "note_signatures": [ [ "ICLR.cc/2017/workshop/paper104/AnonReviewer1" ], [ "~Federico_Raue1" ], [ "ICLR.cc/2017/pcs" ], [ "ICLR.cc/2017/workshop/paper104/AnonReviewer2" ], [ "~Federico_Raue1" ] ], "structured_content_str": [ "{\"title\": \"Classless association using neural networks\", \"rating\": \"3: Clear rejection\", \"review\": \"I can honestly say that despite several readings, I have no idea what this paper is actually about. I believe the problem is relating two objects, despite not having a label that classifies the two objects as being of the same class. From there, my comprehension goes downhill: EM algorithm mixed with pseudo-classes and a weighting scheme. Networks using the output from another network as the targets of other networks. Target uniform statistical distributions. Why a weighting scheme? What's going on?\\n\\nI acknowledge that perhaps the workshop format is too small, and therefor limits too severely the required space to explain an idea. Perhaps. But I can safely say that almost nobody will glean any insight from this manuscript in the time that a reasonable person is willing to give a manuscript. I would say that if the authors are confident of this work, they should write up a longer manuscript (or return to a longer one) that takes the time and space necessary to more effectively motivate the problem, and introduce the parts of the architecture, again with motivation, so that the reader has a chance of understanding the manuscript.\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}", "{\"title\": \"RE: classless association using Neural Networks\", \"comment\": \"Thank you for your time. Hopefully, our responses have addressed your concerns\\n>>> This is what I understand: let's assume a young child is playing with toys from 2 different brands. The toys include several pieces of different types (10 MNIST classes). \\n>>>The aim is to learn to put the same brand types into same buckets. We want a bucket to have the same time of toy of the same brand (purity, all block type t of brand j is in \\n>>>bucket b) also the object types are the same for 2 brands in the bucket (association, all block type t of both brands is in bucket b). The ultimate goal (future work) is to \\n>>>learn association between diffferent streams (e.g. what parents say when the child holds a lego).\\nThe analogy is correct.\\n\\n>>>This work models this problem with a MLP to first induce a feature vector z^{1,2} for 2 streams. A pseudo-class \\\\hat{z^{1,2}} is predicted using these feature vectors. In the M-step the parameters are updated so that the distribution defined by \\\\hat{z^{1,2}} matches the target distribution \\\\phi.\\nWe want to point out that the model has two MLPs \\n\\n\\n>>>two issues I observed:\\n>>>>1) they do not provide any information about how they evaluated other clustering algorithms. If they are fed with raw pixels, I don't think the comparison would be fair \\n>>>because there is no featurization of raw fixels where the proposed model have this power. 
Comparison on a single layer MLP autoencoder's hidden features or output of PCA\\n>>>would be more fair.\\n\\nThe reported resuls of both clustering algorithms is based on raw pixels. We have evaluated the same datasets using pca (64, 128, 256), and the results are quite similar to Table 1. Moreover, these results are similar to Jenckel, et al, where they did not find any improve between raw pixels vs pca for character recognition in Historical documents.\\n1) MNIST input 1, input 2\\n* pca - 64: 64.1 (std:1.8), 63.9 (std:3.2) \\n* pca - 128: 63.5 (std:2.3), 63.6 (std:2.1)\\n* pca - 256: 63.6 (std:2.4), 63.4 (std:3.3)\\n2) Rotated MNIST input 1, input 2\\n* pca - 64: 63.9 (std:2.2), 63.3 (std:3.2) \\n* pca - 128: 63.7 (std:3.8), 61.6 (std:2.8)\\n* pca - 256: 65.1 (std:2.4), 63.9 (std:1.6)\\n3) Inverted MNIST input 1, input 2\\n* pca - 64: 64.9 (std:2.8), 64.1 (std:3.3) \\n* pca - 128: 64.6 (std:2.0), 64.2 (std:3.3)\\n* pca - 256: 65.1 (std:1.7), 63.5 (std:2.8)\\n4) Random Rotated MNIST input 1, input 2\\n* pca - 64: 64.4 (std:1.7), 14.9 (std:0.4) \\n* pca - 128: 63.9 (std:1.9), 14.8 (std:0.3)\\n* pca - 256: 65.5 (std:2.2), 14.9 (std:0.5)\\n\\n[1] Jenckel, et al (2016). Clustering Benchmark for Characters in Historical Documents. Workshop on Document Analysis Systems, DAS16. \\n\\n>>2) The experiments are almost oracle type. The model knows the number of classes and the target distribution. I am not sure if other clustering algorithms make use of target \\n>>distribution information. In a real life scenario, none of these assumptions are true. An early attempt in that direction would make this work acceptable for workshop \\n>>publication.\\nWe agree that the experiments are the ideal case, where the number of classes and the statistical distribution is known. However, our model can be extended where the task is not constrained to the number of classes (which are defined by the language-linguistic). For example, the classes in MNIST (one, two, three, ... zero) constraint that the input sample\\ncan only be in those ten buckets (supervised tasks). In contrast, our association task inspired by the symbol grounding problem is not constrained to the number of classes because we are only interested in learning two elements are the same based on their correlation. \\nWith this in mind, our model only requires changing the size of the vectors z^{1}, z^{2}, and \\\\phi for learning the association.\"}", "{\"decision\": \"Reject\", \"title\": \"ICLR committee final decision\"}", "{\"title\": \"classless association using neural networks\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"I agree with R1, the workshop format is too small to efficiently describe an idea.\", \"this_is_what_i_understand\": \"let's assume a young child is playing with toys from 2 different brands. The toys include several pieces of different types (10 MNIST classes). The aim is to learn to put the same brand types into same buckets. We want a bucket to have the same time of toy of the same brand (purity, all block type t of brand j is in bucket b) also the object types are the same for 2 brands in the bucket (association, all block type t of both brands is in bucket b). The ultimate goal (future work) is to learn association between diffferent streams (e.g. what parents say when the child holds a lego).\\n\\nThis work models this problem with a MLP to first induce a feature vector z^{1,2} for 2 streams. A pseudo-class \\\\hat{z^{1,2}} is predicted using these feature vectors. 
In the M-step the parameters are updated so that the distribution defined by \\\\hat{z^{1,2}} matches the target distribution \\\\phi.\", \"two_issues_i_observed\": \"1) they do not provide any information about how they evaluated other clustering algorithms. If they are fed with raw pixels, I don't think the comparison would be fair because there is no featurization of raw fixels where the proposed model have this power. Comparison on a single layer MLP autoencoder's hidden features or output of PCA would be more fair.\\n2) The experiments are almost oracle type. The model knows the number of classes and the target distribution. I am not sure if other clustering algorithms make use of target distribution information. In a real life scenario, none of these assumptions are true. An early attempt in that direction would make this work acceptable for workshop publication.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"RE: Classless association using neural networks\", \"comment\": [\"We thank the reviewer for the time. Unfortunately, given the strict limit of 3 pages, it is challenging to give more information about the motivation and the elements of our model. Hopefully, our responses have addressed your concerns.\", \"The presented task is to learn the association between two disjoint input streams where both streams represent the same unknown class. This task is motivated by the Symbol Grounding Problem, which is the binding of abstract concepts with the real world via sensory input, such as visual system. More formally, our task is defined by two disjoint input streams x^(1) and x^(2) that represent the same unlabeled class. The goal is to learn the association by classifying both with the same pseudo-class c^(1) = c^(2).\", \"Our training rule relies on matching a statistical distribution and a mini-batch of output vectors of MLPs as an alternative loss function that does not require classes. With this in mind, we have introduced a new learning parameter (weighting vectors) that modifies the raw output vectors (z) based on the statistical constraint (\\\\phi). In addition, the weighting vectors help to classify the input samples. As a result, the pseudo-classes -obtained in the classification step in Equation 4- change during training and similar elements are grouped together (Figure 1, 2, and 3).\", \"Motivated by the association learning between both streams. We have proposed to use the pseudo-classes of one network as a target of the other network, and vice versa. It can be seen in Figure 1, 2 and 3, each row in the first and second columns (MLP^(1) and MLP^(2)) represents a pseudo class (index) between 0-9. After the model is trained, both networks agree on classifying similar input samples (or digits) with the same index.\", \"In summary, the two previous components are used in an EM-approach.\", \"Initial step: all input samples x(1) and x(2) have random pseudo-classes c(1) and c(2), where histogram of pseudo-classes is similar to the desired statistical distribution\", \"E-step classifies the output vectors based on the weighting vectors (Equation 4) and approximates the current statistical distribution of the mini-batch (Equation 3). Note that the pseudo-classes are assigned to the samples after a number of iterations. 
In other words, the update of the pseudo-classes is not online.\", \"M-step updates the weighting vectors (\\\\gamma^(1), \\\\gamma^(2)) and the parameters of the networks (\\\\theta^(1), \\\\theta^(2))\", \"We have updated our paper in order to clarify the model further while still keeping within the page limit.\"]}" ] }
r1bMV7Ntg
Episode-Based Active Learning with Bayesian Neural Networks
[ "Feras Dayoub", "Niko Suenderhauf", "Peter Corke" ]
We investigate different strategies for active learning with Bayesian deep neural networks. We focus our analysis on scenarios where new, unlabeled data is obtained episodically, such as commonly encountered in mobile robotics applications. An evaluation of different strategies for acquisition, updating, and final training on the CIFAR-10 dataset shows that incremental network updates with final training on the accumulated acquisition set are essential for best performance, while limiting the amount of required human labeling labor.
[ "Computer vision", "Deep learning" ]
https://openreview.net/pdf?id=r1bMV7Ntg
https://openreview.net/forum?id=r1bMV7Ntg
ICLR.cc/2017/workshop
2017
{ "note_id": [ "BJ3muFasx", "BkW4eYfsg", "ByiUJ81se", "rkxCf4gog" ], "note_type": [ "comment", "comment", "official_review", "official_review" ], "note_created": [ 1490028580064, 1489305640949, 1489096531249, 1489154760415 ], "note_signatures": [ [ "ICLR.cc/2017/pcs" ], [ "~Niko_Suenderhauf1" ], [ "ICLR.cc/2017/workshop/paper69/AnonReviewer2" ], [ "ICLR.cc/2017/workshop/paper69/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"title\": \"ICLR committee final decision\"}", "{\"title\": \"Reply to Reviewer 2\", \"comment\": \"Thank you for your constructive feedback.\\n\\nWe added a citation to Gal et al. 2017 wich appeared on arxiv after we submitted our paper. Notice that before we cited their NIPS 2016 workshop contribution, which was a poster that essentially covers the contents of their new arxiv submission. Gal et al. 2017 showed that the max entropy acquisition function yields results comparable to more complex acquisition functions. For that reason (and since the Gal et al. NIPS workshop poster was known to us), we selected the max entropy function for the experiments conducted in this paper.\", \"we_furthermore_updated_the_submission_to_include_the_random_selection_baseline_as_requested\": \"Fig. 1 (right) now shows the performance of a network trained on 74% randomly selected images from the training dataset as a baseline. We show the averaged performance from 10 independent runs (as for all other experiments).\\n\\nWe hope these revisions make the paper a more valuable contribution.\"}", "{\"title\": \"Limited novelty and significance\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"The paper evaluates Bayesian Neural Networks for active learning on CIFAR10 dataset. It investigates incremental vs full-batch training for network updates. It uses more complex dataset (CIFAR10 over MNIST) than a prior work that did comparison only on MNIST (Gal et. al., 2016).\", \"pros\": \"-Simple and clear presentation\\n-It presents episode-based active learning setting, which is closer to application scenarios in robotics\", \"cons\": \"-It ignores comparing against different acquisition functions and classifiers, which are important for evaluating good active learning techniques, and instead only compares simple and heuristic ways to pick incremental or full data to train on\\n-Improvements are small. In addition, it\\u2019s reasonable to show accuracy on 70% randomly selected data, etc. \\n-It has limited novelty, since Gal et. al., 2016 already applied BNN for active learning. They also have updated paper with new results (https://arxiv.org/abs/1703.02910)\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"review\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"The submission investigates episodic active learning - meaning the additional samples arrive sequentially in batches and groundtruth labels can be acquired from an oracle in an active learning fashion.\\n\\nmore specifically, the submission investigates bayesian neural networks. the choice is not well motivated and a bit unclear. 
no comparison is provided.\n\nDifferent strategies for step-wise training and label acquisition are evaluated.\nThe evaluation is diffuse as there are multiple competing goals, e.g.:\n- queries to the oracle\n- accuracy of the final model\n- accuracy of intermediate models\n- computation time (?)\n\nfor the first three, plots are shown - but the results are presented in a way that makes the strategies difficult to compare.\", \"main_points_criticism\": [\"it is strange that the fully supervised case performs slightly worse than two of the incremental approaches. it might be noise or there might be a problem with the supervised baseline. there is no satisfying explanation for this observation in the submission\", \"the submission seems too much out of context. no directly related work is cited for the problem; the theme of sequentially retrieving labels is common to most active learning and experimental design papers; performance is typically plotted over the number of samples.\", \"no baselines are computed - only the 5 strategies the authors came up with\", \"no prior strategies were drawn from the related work\", \"the authors make a point in the conclusion that their setting doesn't allow samples to be re-observed. but from reading the setup, it seems this is only true for the active learning scheme that is only allowed to pick from the current batch/pool (training is still performed on all selected samples by some strategies). but if this is an important point - the submission fails to highlight its importance in the experiments. a baseline should be shown - where the active learning is picking from the whole set. but the suspicion remains that there is not too much of a difference. therefore it would be important to show those numbers.\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
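The author response above refers to the max-entropy acquisition function used with a Bayesian (MC-dropout) network, following Gal et al. The sketch below is a generic illustration of that acquisition step, not the paper's implementation; the model is assumed to contain dropout layers, and `pool_x` is an assumed tensor of unlabeled pool images.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def max_entropy_acquisition(model, pool_x, k, n_samples=20):
    """Pick the k pool items with the highest MC-dropout predictive entropy.

    Keeping dropout stochastic at inference time gives the Monte Carlo
    approximation of the Bayesian predictive distribution referred to in
    the thread above; the class probabilities are averaged over several
    stochastic forward passes before computing the entropy.
    """
    model.train()  # keeps dropout active (note: this also affects BatchNorm)
    probs = torch.stack(
        [F.softmax(model(pool_x), dim=1) for _ in range(n_samples)]
    ).mean(dim=0)
    entropy = -(probs * torch.log(probs + 1e-12)).sum(dim=1)
    model.eval()
    return torch.topk(entropy, k).indices
```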
r1IvyjVYl
Fast Adaptation in Generative Models with Generative Matching Networks
[ "Sergey Bartunov", "Dmitry P. Vetrov" ]
We develop a new class of deep generative model called generative matching networks (GMNs) which is inspired by the recently proposed matching networks for one-shot learning in discriminative tasks. By conditioning on the additional input dataset, generative matching networks may instantly learn new concepts that were not available during the training but conform to a similar generative process, without explicit limitations on the number of additional input objects or the number of concepts they represent. Our experiments on the Omniglot dataset demonstrate that GMNs can significantly improve predictive performance on the fly as more additional data is available and generate examples of previously unseen handwritten characters once only a few images of them are provided.
[ "Deep learning", "Unsupervised Learning" ]
https://openreview.net/pdf?id=r1IvyjVYl
https://openreview.net/forum?id=r1IvyjVYl
ICLR.cc/2017/workshop
2017
{ "note_id": [ "SkkLueGjl", "rJs4dYaix", "Hk1sOgzjg", "BJD37aesg", "HkO5a9lie" ], "note_type": [ "comment", "comment", "comment", "official_review", "official_review" ], "note_created": [ 1489270855051, 1490028595114, 1489270935347, 1489191855333, 1489182096436 ], "note_signatures": [ [ "~Sergey_Bartunov1" ], [ "ICLR.cc/2017/pcs" ], [ "~Sergey_Bartunov1" ], [ "ICLR.cc/2017/workshop/paper95/AnonReviewer1" ], [ "ICLR.cc/2017/workshop/paper95/AnonReviewer2" ] ], "structured_content_str": [ "{\"title\": \"Reply\", \"comment\": \"Thank you for the review.\\n\\nWe have uploaded a new version of the manuscript with a refined figure 1, which should make the generative process more clear.\\n\\n> Does it mean the basic version uses standard normal to be the prior and further the model employs an inference network?\\nThat's correct.\\nAlso note that similarly to the generative model, our inference network is conditional, i.e. has the form of q(z | x, X). \\n\\n> But in order to argue that the latent variable z brings the stochasticity and generalization ability, the current experiments are not sufficient enough and lack of baseline models. \\nOur baselines are the standard VAE and the conditional generative model that resembles in it's structure the neural statistician model (which is another ICLR submission). It is unclear how to estimate the predictive log-likelihood in the original neural statistician model, hence we had to make an adaptation which is more tractable and still allows to make the point of usefulness of the proposed matching procedure.\\nTo the best of our knowledge, there are no other variants of conditional VAEs that would be relevant for the comparison in a similar setting (fast generalization from multi-class and multi-object data). \\n\\n> Even the comparison with VAE only shows marginal improvement.\\nWhen not conditioned on any additional data, our model can not indeed perform significantly better than the VAE, because for both models the architecture and the amount of information available are the same. \\nHowever, we show that generative matching networks can perform much better in terms of predictive log-likelihood as we provide more conditioning objects.\\nIn fact, nearly same performance in the unconditioned regime is already a good result, since the proposed model can be safely used in the absence of new data.\"}", "{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}", "{\"title\": \"Reply\", \"comment\": \"Thank you for your review.\\n\\nWe have uploaded a new version of the paper where we changed a figure 1 to hopefully make the generative process and the notation used more clear.\"}", "{\"title\": \"Interesting work\", \"rating\": \"7: Good paper, accept\", \"review\": \"The paper reports on a conditional VAE that generates samples similar to few samples it is conditioned upon.\\nThe conditioning samples in this work are taken from few different classes. It is shown empirically that a vector summary of the conditioning dataset that simply averages representations of the individual samples doesn't encode the information well. Instead the authors propose to aggregate that information similarly to the method used in matching networks. This method seems to be working better.\\nOverall it's an interesting idea. 
The model description could be clearer, however.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"interesting but needs further work to improve\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"This paper proposes an interesting conditional generative model by using generative matching networks so that the model is able to carry out one-shot or few-shot learning.\\nHowever, the notation is confusing and lacks clarity.\\nFigure 1 shows a neural structure but there seems to be no corresponding text explaining it.\\nThe authors introduce the model by telling the story from the basic version, but I couldn\\u2019t follow the further modifications.\\nDoes it mean the basic version uses a standard normal as the prior and the model further employs an inference network?\\nI can understand the authors\\u2019 attempt to train a conditional generative distribution to produce data with an intermediate latent variable. \\nBut in order to argue that the latent variable z brings the stochasticity and generalization ability, the current experiments are not sufficient and lack baseline models. \\nEven the comparison with VAE only shows marginal improvement.\\nI think this paper needs a bit more work on the design of the paper\\u2019s presentation and experiments.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
Hk6dkJQFx
Revisiting Batch Normalization For Practical Domain Adaptation
[ "Yanghao Li", "Naiyan Wang", "Jianping Shi", "Jiaying Liu", "Xiaodi Hou" ]
Deep neural networks (DNN) have shown unprecedented success in various computer vision applications such as image classification and object detection. However, it is still a common annoyance during the training phase that one has to prepare at least thousands of labeled images to fine-tune a network to a specific domain. Recent work shows that a DNN has a strong dependency on the training dataset, and the learned features cannot be easily transferred to a different but relevant task without fine-tuning. In this paper, we propose a simple yet powerful remedy, called Adaptive Batch Normalization (AdaBN), to increase the generalization ability of a DNN. By modulating the statistics from the source domain to the target domain in all Batch Normalization layers across the network, our approach achieves a deep adaptation effect for domain adaptation tasks. In contrast to other deep learning domain adaptation methods, our method does not require additional components, and is parameter-free. It achieves state-of-the-art performance despite its surprising simplicity. Furthermore, we demonstrate that our method is complementary to other existing methods. Combining AdaBN with existing domain adaptation treatments may further improve model performance.
[ "dnn", "batch normalization", "network", "adabn", "practical domain adaptation", "unprecedented success", "image classification", "object detection" ]
https://openreview.net/pdf?id=Hk6dkJQFx
https://openreview.net/forum?id=Hk6dkJQFx
ICLR.cc/2017/workshop
2017
{ "note_id": [ "r1U87aQje", "HkKhiPysx", "SJZ-nJDie", "Hy6MuY6se" ], "note_type": [ "official_review", "official_review", "comment", "comment" ], "note_created": [ 1489388366007, 1489103792909, 1489595385123, 1490028565370 ], "note_signatures": [ [ "ICLR.cc/2017/workshop/paper38/AnonReviewer2" ], [ "ICLR.cc/2017/workshop/paper38/AnonReviewer1" ], [ "~Yanghao_Li1" ], [ "ICLR.cc/2017/pcs" ] ], "structured_content_str": [ "{\"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"I previously reviewed the paper and am attaching my review below.\\n\\nI still have concern regarding the incorrect arguments raised in the paper (such as the nature of matching the source/target domain distributions), and it seems that the authors simply reorganized the sections down to the appendix section. As a result I am keeping my original recommendation, but will not argue if the AC decides to accept the paper.\\n\\n*** Original review ***\\nOverall I think this is an interesting paper which shows empirical performance improvement over baselines. However, my main concern with the paper is regarding its technical depth, as the gist of the paper can be summarized as the following: instead of keeping the batch norm mean and bias estimation over the whole model, estimate them on a per-domain basis. I am not sure if this is novel, as this is a natural extension of the original batch normalization paper. Overall I think this paper is more fit as a short workshop presentation rather than a full conference paper.\", \"detailed_comments\": \"Section 3.1: I respectfully disagree that the core idea of BN is to align the distribution of training data. It does this as a side effect, but the major purpose of BN is to properly control the scale of the gradient so we can train very deep models without the problem of vanishing gradients. It is plausible that intermediate features from different datasets naturally show as different groups in a t-SNE embedding. This is not the particular feature of batch normalization: visualizing a set of intermediate features with AlexNet and one gets the same results. So the premise in section 3.1 is not accurate.\\n\\nSection 3.3: I have the same concern as the other reviewer. It seems to be quite detatched from the general idea of AdaBN. Equation 2 presents an obvious argument that the combined BN-fully_connected layer forms a linear transform, which is true in the original BN case and in this case as well. I do not think it adds much theoretical depth to the paper. (In general the novelty of this paper seems low)\", \"experiments\": [\"section 4.3.1 is not an accurate measure of the \\\"effectiveness\\\" of the proposed method, but a verification of a simple fact: previously, we normalize the source domain features into a Gaussian distribution. the proposed method is explicitly normalizing the target domain features into the same Gaussian distribution as well. So, it is obvious that the KL divergence between these two distributions are closer - in fact, one is *explicitly* making them close. However, this does not directly correlate to the effectiveness of the final classification performance.\", \"section 4.3.2: the sensitivity analysis is a very interesting read, as it suggests that only a very few number of images are needed to account for the domain shift in the AdaBN parameter estimation. 
This seems to suggest that a single \\\"whitening\\\" operation is already good enough to offset the domain bias (in both cases shown, a single batch is sufficient to recover about 80% of the performance gain, although I cannot get data for even smaller number of examples from the figure). It would thus be useful to have a comparison between these approaches, and also a detailed analysis of the effect from each layer of the model - the current analysis seems a bit thin.\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"simple but effective domain adaptation approach\", \"rating\": \"7: Good paper, accept\", \"review\": \"(I previously reviewed this paper for the main conference. I will copy most of my comments here, removing criticisms that have since been addressed.)\", \"pros\": \"The method is very simple and easy to understand and apply.\\n\\nThe experiments demonstrate that the method compares favorably with existing methods on standard domain adaptation tasks.\\n\\nThe analysis in section 5.3.3 shows that only a small number of target domain samples are needed for adaptation of the network.\\n\\nGood results for remote sensing domain adaptation included in appendix.\", \"cons\": \"There is little novelty -- the method is arguably too simple to be called a \\u201cmethod.\\u201d Rather, it\\u2019s the most straightforward/intuitive approach when using a network with batch normalization for domain adaptation. (The alternative -- using the BN statistics from the source domain for target domain examples -- is less natural, to me.)\\n\\n\\nOverall, there\\u2019s not much novelty here, but the paper includes sufficient experimentation and interesting analysis, and it\\u2019s hard to argue that simplicity is a bad thing when the method is clearly competitive with or outperforming prior work on the standard benchmarks (in a domain adaptation tradition that started with \\u201cFrustratingly Easy Domain Adaptation\\u201d).\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Response to Reviewer2\", \"comment\": \"Thanks a lot for your comments and suggestions of our work. Actually, we have already updated the paper in this workshop submission to address your comments. Our responses are shown as follows (some of them are the same as our previous comment):\\n\\n1. About section 3.1 (section 2.1 in this version)\\nWe have updated our writing. However, we still think aligning the distribution of training data is not just a side effect. It is the key way to achieve the purpose of BN which is to avoid the problem of vanishing gradients and help optimization. In original BN paper, the authors\\u2019 motivation is to address the problem of \\u201cinternal covariate shift\\u201d, which means \\u201cthe change in the distributions of layers\\u2019 inputs\\u201d. Thus, BN is proposed to \\u201creduce internal covariate shift\\u201d and make \\u201cthe distribution of nonlinearity inputs more stable\\u201d.\\n\\n(2) We also directly visualize the intermediate features with Inception-BN network instead of our BN features. The figure can be seen at this link (https://s30.postimg.org/fdamc2l1t/a2d_feature_tsne.png). Red circles are features of samples from training domain (Amazon) while blue ones are testing features (DSLR). It blends much more than that in Figure 1. This demonstrates the statistics of BN layer indeed contain the traits of the data domain. 
The intermediate features of the CNN cannot be separated directly in terms of different domains.\\n\\n2. About section 3.3, section 4.3.1\\nWe have revised section 3.3 to make it clearer and we have removed the previous section 4.3.1. \\n\\n3. About section 4.3.2 (section 5.3.3 in this version)\\nWe have added additional experimental results in the workshop submission. \\n(1) We ran experiments with smaller numbers of samples and found that the performance drops further (e.g. 0.652 with 16 samples, 0.661 with 32 samples). We have updated the results in the section \\u201cSensitivity to target domain size\\u201d.\\n(2) In the section \\u201cAdaptation Effect for Different BN Layers\\u201d (section 5.3.4), we add a detailed analysis of the adaptation effect for different BN layers of our AdaBN method.\"}", "{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}" ] }
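The reviewer summarizes AdaBN as estimating the BatchNorm mean and variance on a per-domain basis. The PyTorch sketch below illustrates that summarized idea, re-estimating the running statistics on unlabeled target-domain batches while leaving all learned weights untouched. It is not the authors' implementation; the thread suggests even a single target batch may recover most of the gain.

```python
import torch
from torch import nn

@torch.no_grad()
def adapt_bn_statistics(model, target_loader, device="cpu"):
    """Re-estimate BatchNorm running statistics on unlabeled target data.

    Weights, including the BN affine parameters gamma and beta, are left
    untouched; only the per-domain mean/variance are recomputed, which is
    the per-domain statistic estimation the review above describes.
    """
    bn_types = (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)
    for m in model.modules():
        if isinstance(m, bn_types):
            m.reset_running_stats()
            m.momentum = None  # None means a cumulative (equal-weight) average
    model.train()  # BN layers only update running stats in train mode
    for batch in target_loader:
        x = batch[0] if isinstance(batch, (list, tuple)) else batch
        model(x.to(device))
    model.eval()
    return model
```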
ByXrfaGFe
Transferring Knowledge to Smaller Network with Class-Distance Loss
[ "Seung Wook Kim", "Hyo-Eun Kim" ]
Training a network with small capacity that can perform as well as a larger capacity network is an important problem that needs to be solved in real life applications which require fast inference time and small memory requirement. Previous approaches that transfer knowledge from a bigger network to a smaller network show little benefit when applied to state-of-the-art convolutional neural network architectures such as Residual Network trained with batch normalization. We propose class-distance loss that helps teacher networks to form densely clustered vector space to make it easy for the student network to learn from it. We show that a small network with half the size of the original network trained with the proposed strategy can perform close to the original network on CIFAR-10 dataset.
[ "knowledge", "loss", "smaller network", "original network", "network", "small capacity", "larger capacity network", "important problem", "real life applications", "fast inference time" ]
https://openreview.net/pdf?id=ByXrfaGFe
https://openreview.net/forum?id=ByXrfaGFe
ICLR.cc/2017/workshop
2017
{ "note_id": [ "Byc9s1U5e", "S13f_tTjl", "H14hj1Zox", "S1Ph1Zr5l", "BkvUXIlse", "H1lqJpgqg" ], "note_type": [ "official_comment", "comment", "comment", "comment", "official_review", "official_review" ], "note_created": [ 1488481170048, 1490028563775, 1489202092428, 1488420783031, 1489163086754, 1488142216259 ], "note_signatures": [ [ "ICLR.cc/2017/workshop/paper36/AnonReviewer2" ], [ "ICLR.cc/2017/pcs" ], [ "~Seung_Wook_Kim1" ], [ "~Seung_Wook_Kim1" ], [ "ICLR.cc/2017/workshop/paper36/AnonReviewer1" ], [ "ICLR.cc/2017/workshop/paper36/AnonReviewer2" ] ], "structured_content_str": [ "{\"title\": \"thanks\", \"comment\": \"Thanks for the response. It might be worth having the cross-entropy results in there as well for reference.\\nThe proposed method seems even better in light of the fact that the usual cross entropy knowledge distillation does not work in this case.\"}", "{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}", "{\"title\": \"Reply to AnonReviewer1\", \"comment\": \"Thank you for your review.\", \"questions\": \"- What is the performance of the Teacher on CIFAR10?\\n 110-layer 'Baseline' and 'Class-distance loss' resnets in the table 1 refers to the performances of teacher models.\\n\\n- Did you compare with knowledge distillation baseline (that matches softmax logits of teacher and student networks) ?\\n Yes. We couldn't get the error rate go down below 9% by training a student model with traditional cross-entropy transfer\\n \\n- Do you use Batch Normalization in your residual networks?\\n Yes. All resnets are trained with batch normalization.\"}", "{\"title\": \"reply to AnonReviewer2\", \"comment\": \"Thank you for your comment.\\n\\nRegarding your question, TF-baseline refers to student model trained with feature vector transfer. We couldn't get the error rate go down below 9% by training a student model with traditional cross-entropy transfer (stated in your comment). This is expected as previous works (Srivastava et al. (2015) and Chen et al. (2016)) indicated that the cross-entropy transfer strategy did not outperform baseline networks trained from scratch where baseline networks are sufficiently deep neural networks with strong regularizers such as batch-norm.\\n\\nWe'll add suggested citations as well.\"}", "{\"title\": \"Promising Work\", \"rating\": \"7: Good paper, accept\", \"review\": \"This paper investigates Knowledge Distillation for network compression. In their approach, the authors propose to match feature vector of the sofmax preactivation. In addition, they introduce a new loss function for training the teacher, i.e. they add a regularisation term so that class-wise clusters of feature vectors are more dense.\\n\\nAuthors evaluate their approach on the CIFAR10 dataset using Resnet for both teacher and student. 
Contrary to previous approaches using Knowledge Distillation, they show that their approach is able to leverage the Teacher to improve the Student's performance with such network architectures.\", \"questions\": [\"What is the performance of the Teacher on CIFAR10?\", \"Did you compare with a knowledge distillation baseline (that matches softmax logits of teacher and student networks)?\", \"Do you use Batch Normalization in your residual networks?\"], \"pros\": [\"The paper is clear and easy to follow\", \"Authors show that Knowledge Distillation is useful for recent network architectures (Resnets).\"], \"con\": \"- Experiments on only one dataset.\\n\\n\\nI recommend acceptance.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"good paper\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"review\": \"In this work, the authors propose to transfer knowledge from a teacher model to a smaller student model with two variations:\\n1) knowledge is transferred by matching the feature vector before the softmax \\n2) the teacher is trained with an additional regularization term to make the feature vectors more dense within the same class.\\n\\nThis is a solid piece of work that should be accepted. One question:\\n- Does the TF-baseline refer to a student model trained with traditional cross-entropy knowledge transfer, or the feature vector transfer? If the latter, can you please add additional baseline numbers for student models trained with (standard) cross-entropy loss transfer?\", \"minor_comments\": [\"Citations: Should really cite Bucila et al. 2006 for knowledge distillation and LeCun et al. 1990 (Optimal Brain Damage) for model compression, as these predate some of the more recent work (Hinton 2015, Han 2016, Jaderberg 2014, etc.)\", \"\\\"Mimic learning\\\": probably best just to stick to \\\"knowledge distillation\\\"\"], \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
HyDt5XMKg
Non-Associative Learning Representation in the Nervous System of the Nematode Caenorhabditis elegans
[ "Ramin M. Hasani", "Magdalena Fuchs", "Victoria Beneder", "Radu Grosu" ]
Caenorhabditis elegans (C. elegans) exhibits remarkable behavioral plasticity, including complex non-associative and associative learning. Understanding the principles of such mechanisms may offer constructive inspiration for the design of efficient learning algorithms. In the present study, we propose a novel approach to modeling single neurons and synapses to study the mechanisms underlying learning in the C. elegans nervous system. In this regard, we construct a precise mathematical model of sensory neurons that includes multi-scale details from genes, ion channels and ion pumps, together with a dynamic model of synapses comprising neurotransmitter and receptor kinetics. We recapitulate the mechanosensory habituation mechanism, a non-associative learning process in which elements of the neural network tune their parameters as a result of repeated input stimuli. Accordingly, we quantitatively demonstrate the roots of such plasticity in the neuronal- and synaptic-level representations. Our findings can potentially give rise to the development of new bio-inspired learning algorithms.
[ "Theory" ]
https://openreview.net/pdf?id=HyDt5XMKg
https://openreview.net/forum?id=HyDt5XMKg
ICLR.cc/2017/workshop
2017
{ "note_id": [ "SyKtffn9g", "ByM0KwK5g", "S19b8vT5g", "rk_fOYTjl", "Bk0nOF45e" ], "note_type": [ "official_review", "comment", "comment", "comment", "official_review" ], "note_created": [ 1488884353150, 1488710089662, 1488971266488, 1490028559795, 1488390325871 ], "note_signatures": [ [ "ICLR.cc/2017/workshop/paper30/AnonReviewer1" ], [ "~Ramin_M._Hasani1" ], [ "~Ramin_M._Hasani1" ], [ "ICLR.cc/2017/pcs" ], [ "ICLR.cc/2017/workshop/paper30/AnonReviewer2" ] ], "structured_content_str": [ "{\"title\": \"Difficult to understand the primary contribution here\", \"rating\": \"3: Clear rejection\", \"review\": \"This submission is difficult to review, since it leaves the reader in suspense as to what the specific contributions it makes are.\\n\\nThe authors model the circuity involved in C. elegans mechanosensory habituation. However, they don't appear to provide many specifics of their model, rather almost all the space is taken with background information. The key findings do not appear to be novel (neurons can have state that is modified by history of the neuron's experience), and are assumptions of their model (based on known experimental results), so it is unclear that they are significant new contributions (given the paucity of details about their specific approach, it is difficult to judge).\", \"reasons_to_accept\": [\"The authors promise that their approach may give rise to new \\\"bio-inspired learning algorithms.\\\"\", \"They provide a good background regarding C elegans and habituation.\"], \"reasons_to_reject\": [\"Submission is unclear about their model or specific contributions.\", \"Although the authors make a reference to this work leading to \\\"better learning algorithms,\\\" no specifics are provided. This paper doesn't appear to have much connection with representation learning and may be more suited for a different venue.\", \"Abstract advertises insights that give rise to \\\"new bio-inspired learning algorithms\\\" but doesn't appear to provide any general insights into learning.\"], \"minor_issue\": \"there are a number of grammatical errors.\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}", "{\"title\": \"Biological Learning Principles Abstract\", \"comment\": \"Thank you for your review and comments.\", \"i_would_like_to_add_some_comments\": \"-\\tReviewer is fully aware of the fact that explaining the details of the equations used for synaptic connectivity and neural dynamics, requires way more that 3 single-column pages. Authors attempted to provide a compact clear picture over the idea of sources of biological learning and simultaneously pointed out several notes and findings which can build up well-founded understanding for readers with any areas of expertise. \\n\\n-\\tAuthors intentionally structured the paper to make it possible for the ICLR readers which are mostly computer scientists, get connected to the overall picture of the non-associative learning mechanism in C. elegans, within the extended abstract, and provide the details of the equations in the poster and discuss it interactively during the workshop.\\n\\n-\\tWe believe that the reviewer is totally right about including the equations. Accordingly, I mention some of them here and would demand the reviewer\\u2019s opinion about how to integrate them inside the text with their proper explanation? 
\\n\\nWe have mentioned within the text of the abstract where we can modify in the neuron model in order to create the effect of gene modifications in a sensory neuron habituation. examples include:\\n\\n1)\\tConductance of K-Channel decreases over time, due to the gene functions described in [1]. We then proposed to have the following expression for the maximum conductance of the potassium channel, G_K:\\n\\nG_K, is set to a dynamic variable expressed as follows:\\nG_K = 10 exp(-0.02*t) +3, \\nwhere parameters are determined empirically.\\n\\nWe also hypothesised that the calcium pump plays a key-role in the suppression of the calcium level in the sensory neuron. For the Calcium pump, its maximum conductance has been set to a dynamic variable in a sigmoid-like function: \\nG_pump = 10 / (exp(-0.01(t+200)) + 1)\\n\\nFurthermore, We hypothesised that an inactivation-calcium gate should play a role in the learning mechanism. \\nTherefore, we design an inactivation gate, h, as follows: \\n\\ndh/dt = (h_inf \\u2013 h) / tau_h,\\nwhere h_inf = 1 / 1 + H * exp((v-v_half)/k_h),\\nwhere the h_inf is the steady state value of the inactivation gate, with gate rate parameters H, v_half and k_h. \\nv_half = -45 mV, H = 1/, tau_h = 2 s, k_h =1 1/mV.\\nFigure 1C in the paper is generated by including all the three dynamics described above, within the model of the neuron.\\n\\n2)\\tWe have also mentioned that considering S(t) and G_max of a synapse and their modifications, result in similar habituation and dishabituation behavior on a postsynaptic cell observed in the experimental results:\", \"the_overall_synaptic_current\": \"I_syn = G_max G(V_pre) S(t) (E_Syn \\u2013 V_Post)\\nWhere G(V_pre) = m(t) \\nAnd dm/dt = (m_inf \\u2013 m) / tau_m\\nAnd m_inf = 1/(exp((V_shift \\u2013 V_pre)/V_range) +1)\\nAnd S(t) = n(t). s(t)\\nAnd dn/dt = (n_inf \\u2013 n). k_n \\u2013 n.k_r \\u2013 n. m(t)\\n\\nn(t) describes the amount of available neurotransmitter vesicles. With each firing of the neuron, n \\u00b7 m vesicles are removed. Vesicles are refilled from a reserve pool with a rate k_n and move to the reserve-pool with a rate k_r. This type of model is described in [2]. With the right choice of parameters (Below), this leads to a decrease in the postsynaptic signal after a series of pulses. Without stimulation, the signal strength recovers over time.\\ns(t) is modelled in the following and provides the probability of the neurotransmitters arriving at the postsynaptic receptors [3]:\\nds/dt = -s/tau_F + h\\ndh/dt = -h/tau_R \\u2013 h0 . delta(t-t0)\\nwhere t0 is the time of the beginning of neurotransmitter release. \\n\\u2022\\tParameters of m(t): tau_m = 5 ms, V_range = 4 mV, V_shift = -30 mV.\\n\\u2022\\tParameters of S(t): n_inf = 10000, k_r = 0.01 1/ms and k_n = 0.08 1/ms, tau_R = 2.5 ms, tau_F = 5ms, h0 = 10, t0 = recorded at V_pre > 59mV, \\n\\nThis is part of the analyses we have conducted fully quantitative. We kept the paper in a high-level description for the readability. Accordingly, we targeted to include all these mathematical descriptions within the poster we provide there at the workshop.\\n \\nI would sincerely ask the reviewer to reevaluate our work, given our recent comments.\\nThank you very much for your kind consideration.\\n\\n\\nReferences\\n\\n[1] Shi-Qing Cai, Yi Wang, Ki Ho Park, Xin Tong, Zui Pan, and Federico Sesti. Auto-phosphorylation of a voltage-gated k+ channel controls non-associative learning. The EMBO journal, 28(11):1601\\u20131611, 2009. 
\\n[2] David Sterratt, Bruce Graham, Andrew Gillies, and David Willshaw. Principles of computational modelling in neuroscience. Cambridge University Press, 2011. \\n[3] Erik De Schutter. Computational modeling methods for neuroscientists. The MIT Press, 2009.\"}", "{\"title\": \"Re: Difficult to understand the primary contribution here\", \"comment\": \"Thank you very much for your comments and review.\", \"i_would_like_to_add_some_key_comments_in_defense_of_our_work\": \"- I will highlight the key contributions of our work and stress on the fact that our presentation well-suits the ICLR venue: \\n\\n-\\tWe provided a clear-compact overview on the mechanisms of the non-associative learning within the nervous system of the C. elegans. Within the first part of the paper, we built up several insights towards understanding the mechanism of learning by provided key notes on the structure and dynamics of the nervous system during the learning process. \\n\\n-\\tWe have constructed a detailed mathematical simulation platform for our analyses and tried to draw a general overview on the principles we found by using our simulator, in such a compact report. However, as the reviewer is fully aware, explaining the details of a neuronal model and synaptic connectivity models is extremely difficult in such a compact version. We therefore, structured our work to be understandable for a larger audience who are not much familiar with the field. We also planned to include the details of the equations and analyses, within the poster-session at the ICLR workshop in order to establish a clear picture on our novel work, interactively. \\n\\n-\\tThe first Key finding states the novel fact that an additional layer of input neurons can be placed in a network and their properties and state can depend on the structure of the input features and data. The second key finding states that synapses can have states and that can significantly changes the behavior of the entire network. We have precisely followed the effects of the gene modifications on the global behavior of the worm in several biological experiments and correspondingly added suitable dynamics variables in the model (which were noted within the text). Figures illustrates a small example of such experiments and comparison. \\n\\n-\\tWithin the paper, we stated that our findings may lead to new learning algorithms. This is explained in the text within the concepts introduced in the first part of the paper as well as the key findings. The reader can potentially get inspired to include our findings in the existing learning algorithms and correspondingly improve the quality of the learning. This is of course on the priority-list of our future works.\\n\\n-\\tThe workshop track of ICLR this year \\u201cwill focus and favor\\u00a0late-breaking developments and very novel ideas.\\u201d I believe that designing a simulation platform with which one can easily turn attractive behavioural features such as learning various representations, to mathematical equations and useful conclusions, can be extremely interesting for the well-regarded ICLR Audience.\\n\\nFurthermore, our extended abstract tries to introduce a novel principle on modelling the sources of learning in the brain of C. elegans considerably compact. The topic can easily get sophisticated to comprehend for the majority of the ICLR audience without providing proper background information. 
We therefore attempted to provide a high-level background description on the topic, while including novel findings even within the introductory part. examples include:\\n\\n \\\"Sensory (input) neurons within the network are subjected to a mediation during the non- associative training process (repeated tap stimulation) (Kindt et al., 2007). \\\" \\nThis indicates that one can set a layer of input-neurons which their threshold of activation is tunable depending on the type of the input features to be learned. \\n\\n\\u201cWithin a neural circuit, only some of the interneurons are proposed to be the substrate of memory (Sugi et al., 2014).\\u201d\\nThis implies that only some neurons within the nervous system can have states and some of them are stateless. That makes the process of learning faster and more efficient. Like other biological sources of optimal networks such as beta cell hubs in islet functional architecture [1], C. elegans\\u2019 brain network consists of hubs (neurons with states) which are actively involved in learning.\\n\\n\\nI would like to sincerely ask the reviewer to reevaluate our short paper, given our recent comments. \\n\\nThank you very much for your consideration.\", \"references\": \"[1] Johnston, Natalie R., et al. \\\"Beta cell hubs dictate pancreatic islet responses to glucose.\\\"\\u00a0Cell Metabolism\\u00a024.3 (2016): 389-401.\"}", "{\"decision\": \"Reject\", \"title\": \"ICLR committee final decision\"}", "{\"title\": \"Learning without learning equation\", \"rating\": \"3: Clear rejection\", \"review\": \"It is nice to see some discussion of the biological underpinning of learning, and C.elegans is indeed a great model system. It is also very tempting to hear about modeling genetic mechanisms and I was really intrigued by the abstract. Unfortunately, the basic think that of explaining the way S(t) depends on the experiments is completely omitted. It is clear if the synaptic current are modified by a variable that the resulting behaviour of the postsynaptic neuron can be modified, but not showing this model instead of the well known conductance model itself is disappointing.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
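The equations quoted in the author response above (vesicle pool n(t), release gate m(t), and their rate constants) can be integrated directly; the sketch below does so with forward Euler for the synaptic-depression part only. The square-pulse stimulus V_pre, the fixed postsynaptic voltage, and the normalization of n are simplifications of this sketch, not part of the authors' model, and the full multi-scale conductance model is omitted.

```python
# Minimal Euler integration of the vesicle-depletion synapse described above.
import numpy as np

dt = 0.1                                              # ms
t = np.arange(0.0, 4000.0, dt)
V_pre = np.where((t % 500.0) < 20.0, 0.0, -70.0)      # assumed repeated "tap" pulses

tau_m, V_range, V_shift = 5.0, 4.0, -30.0             # parameters quoted in the comment
n_inf, k_n, k_r = 10000.0, 0.08, 0.01
G_max, E_syn, V_post = 1.0, 0.0, -65.0                # V_post fixed for simplicity

m, n = 0.0, n_inf
I_syn = np.zeros_like(t)
for i, v in enumerate(V_pre):
    m_inf = 1.0 / (np.exp((V_shift - v) / V_range) + 1.0)
    m += dt * (m_inf - m) / tau_m
    n += dt * ((n_inf - n) * k_n - n * k_r - n * m)   # vesicles deplete with each release
    I_syn[i] = G_max * m * (n / n_inf) * (E_syn - V_post)

# Whether the per-pulse peak of I_syn decays across stimuli (habituation) depends on
# how the inter-stimulus interval compares with the refill rate k_n.
```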
rJT7bB4Kx
Multimodal Compact Bilinear Pooling for Multimodal Neural Machine Translation
[ "Jean-Benoit Delbrouck", "Stephane Dupont" ]
In state-of-the-art Neural Machine Translation, an attention mechanism is used during decoding to enhance the translation. At every step, the decoder uses this mechanism to focus on different parts of the source sentence to gather the most useful information before outputting its target word. Recently, the effectiveness of the attention mechanism has also been explored for multimodal tasks, where it becomes possible to focus both on sentence parts and image regions. Approaches to pool two modalities usually include element-wise product, sum or concatenation. In this paper, we evaluate the more advanced Multimodal Compact Bilinear pooling method, which takes the outer product of two vectors to combine the attention features for the two modalities. This has been previously investigated for visual question answering. We try out this approach for multimodal image caption translation and show improvements compared to basic combination methods.
[ "Natural language processing" ]
https://openreview.net/pdf?id=rJT7bB4Kx
https://openreview.net/forum?id=rJT7bB4Kx
ICLR.cc/2017/workshop
2017
{ "note_id": [ "ByJvq0kcx", "r1xVdF6je", "SJN0KBKox", "SJu_HBtox", "rkNm6SEsl", "B17g-Uesg" ], "note_type": [ "official_review", "comment", "comment", "comment", "comment", "official_review" ], "note_created": [ 1488083542614, 1490028583877, 1489750476056, 1489749360098, 1489423643650, 1489162474784 ], "note_signatures": [ [ "ICLR.cc/2017/workshop/paper75/AnonReviewer2" ], [ "ICLR.cc/2017/pcs" ], [ "~Jean-Benoit_Delbrouck1" ], [ "~Jean-Benoit_Delbrouck1" ], [ "~Desmond_Elliott1" ], [ "ICLR.cc/2017/workshop/paper75/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Sensible idea, but comparison to related work insufficient\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"For the task of translating a sentence describing an image from one language to another language with the image as additional input for the translation task, the paper uses multimodal attention. For the multimodal attention, the paper explores to use Multimodal Compact Bilinear Pooling (MCB) [Fukui 2016].\", \"strength\": [\"Using MCB for this task seems to makes sense has not previously explored to my knowledge and slightly improves performances.\", \"The paper evaluates the task and ablations on the Multi30k dataset.\"], \"main_weaknesses\": \"\", \"discussion_and_comparison_to_related_work\": \"1.\\tThere has been a large number of works looking at the multimodal translation problem, e.g. [Elliott 2015], [Iacer 2016], but the paper reads like, it is the first work looking at this problem. Specifically, the model from [Iacer 2016] is very similar to this work, apart from MCB.\\n2.\\tPlease cite prior work more precisely: The work misses the citation for tensor sketch algorithm from [Pham and Pagh 2013]; specifically also in Figure 1, where the visualization and algorithm seems to be based on Fukui 2016.\\n3.\\t [Iacer 2016] also reports the results of using Moses, a statistical machine translation pipeline, which does not use the image an achieves 52. Meteor, higher than any result reported in this work.\\n4.\\tSee for https://staff.fnwi.uva.nl/s.c.frank/mmt_wmt_slides.pdf for many more results on the same dataset and task, many approaches achieving > 50 METEOR.\", \"further_weaknesses\": \"1.\\tPlease cite the actual publications not the arXives, whenever available.\\n\\nWhile the paper integrates MCB [Fukui] in multimodal translation, which has not been done before to my knowledge, the paper significantly lacks coverage and comparison to related work, making it not acceptable in this form. Most notably, the approach is very similar to [Iacer 2016], apart from using MCB, but the paper does not cite [Iacer 2016].\", \"references\": \"[Pham and Pagh 2013] Ninh Pham and Rasmus Pagh. 2013. Fast and scalable polynomial kernels via explicit feature maps. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD \\u201913, pages 239\\u2013247, New York, NY, USA. ACM. \\n[Elliott 2015] Elliott, Desmond, Stella Frank, and Eva Hasler. \\\"Multilingual Image Description with Neural Sequence Models.\\\"\\u00a0arXiv preprint arXiv:1510.04709\\u00a0(2015).\\n\\n[Iacer 2016] Calixto, Iacer, Desmond Elliott, and Stella Frank. \\\"Dcu-uva multimodal mt system report.\\\"\\u00a0Proceedings of the First Conference on Machine Translation, Berlin, Germany. 
2016.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"decision\": \"Reject\", \"title\": \"ICLR committee final decision\"}", "{\"title\": \"No title\", \"comment\": \"Dear reviewer,\\n\\nThank you for your comment.\\n\\nWe agree that our statement you quote is incorrect and has been taken out of our draft. The \\\"previous work\\\" has also been updated according to your comments.\\n\\nAs stated in our first review's response (see below), our main focus wasn't to compare our work to any monomodal or multimodal baseline but rather to show that more complex combination techniques help a system to translate better. Yet, we agree that to compare our work to others, and therefore to give it a more significant impact, we should have used state of the art models like [Iacer 2016].\\n\\nAlso, we'll try to enhance the explanation of the proposed attention models in this paper by reducing the basic model section, which could be easily shortened.\\n\\nBest,\"}", "{\"title\": \"No title\", \"comment\": \"Dear reviewer,\\n\\nThank you for your helpful comment. \\n\\nThe missing references you pointed out has been added to the paper. \\n\\nI understand that a weakness is the low performance (Bleu scores) reported in our work. The main difference is that we dont use dropout which seems to significantly improve the translations. \\nOriginally, our main focus wasn't to propose a state of the art model, but rather to show that combining multimodal attention vectors with more complex techniques actually improves the scores, wether or not they are state of the art. Yet, we agree that to compare to previous work, a similar model like [Iacer 2016] should have been used.\\n\\nAll your further comments, like the lack of precision whilst citing previous work has been taken into account. Our workshop draft has been updated to address these points.\"}", "{\"title\": \"Experimental protocol and a suggestion\", \"comment\": \"I like this approach to training a multimodal translation model but the results are difficult to interpret, given the details in the paper.\\n\\nI encourage you to follow the Shared Task evaluation procedure for measuring the BLEU scores on the test data. This procedure is described on the Shared Task web page (http://www.statmt.org/wmt17/multimodal-task.html) with hyperlinks to the processing scripts. If you follow this procedure, it will make it easier to compare your against other papers.\", \"i_also_have_a_suggestion\": \"you may want to use a decompounder on the German vocabulary. 19,000 types is quite high for the German dataset, and this could be reduced to ~ 15,000 by following the exact preprocessing steps described in Caglayan et al. (WMT 2016). A reduced German vocabulary should give you better BLEU scores because the model will be easier to train. You could also think about using the Moses compounder (Koehn et al. (2007)), Byte Pair Encodings (Sennrich et al. (2016)), or a pretrained neural decompounder (Daiber et al. (2016)).\\n\\nDaiber et al. (2015) http://jodaiber.github.io/doc/compound_analogy.pdf\\nKoehn et al .(2007) http://www.aclweb.org/anthology/P07-2045\\nCaglayan et al. (2016) https://arxiv.org/abs/1605.09186\\nSennrich et al. 
(2016) https://arxiv.org/abs/1508.07909\"}", "{\"title\": \"results are not consistent with prior work\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"The paper investigates the problem of combining variable-length information from two different modalities. The specific method considered in the paper is compact bilinear pooling, which is compared against simpler methods in the context of multimodal machine translation. Two versions of the algorithm are considered, which differ in whether the information extracted from text influences the computation of attention weights for the elements of the representation of the image.\\n\\nAs mentioned by another review, a major issue of the paper is that that the prior work by [Calixto 2016] is not mentioned. Besides the statement \\\"To our knowledge, there is currently no multimodal translation architecture that convincingly surpass [sic] a monomodal attention baseline\\\" contradicts the results reported in [Calixto 2016]. They do report an improvement over the text-only NMT. This makes it hard to trust the results of this paper.\\n\\nThe writing of the paper could be improved. A lot of space is used to explain the basic model, but the proposed methods are explained extremely briefly. A few sentences explaining the compact bilinear pooling could help. The Algorithm 1 is not very helpful without any explanation. Most importantly, the explanation of the pre-attention mechanism, which is perhaps is the main novelty, is very vague.\", \"typos_and_minor_writing_issues\": [\"bottom of page 2: c_t^t is rather confusing notation\", \"bottom of page 2: \\\"in a multiplicative but\\\" - a word is missing\", \"beginning of Section 3: I believe it should be \\\\alpha instead of \\\\epsilon, and it makes sense to say \\\"learning rate \\\\alpha\\\" to prevent confusion\"], \"pros\": \"the idea of pre-attention seems novel\", \"cons\": \"results are not consistent with the prior work (which has not been mentioned), writing is not clear\\n\\n[Calixto 2016] Calixto, Iacer, Desmond Elliott, and Stella Frank. \\\"Dcu-uva multimodal mt system report.\\\" Proceedings of the First Conference on Machine Translation, Berlin, Germany. 2016.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
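For reference, Multimodal Compact Bilinear pooling as described in the abstract (and in Fukui et al. 2016, building on Pham & Pagh 2013) count-sketches each modality vector and convolves the sketches in the frequency domain, approximating a projection of their outer product. Below is a small self-contained version; the dimensions and the fixed random seed are arbitrary choices of this sketch, not the paper's settings.

```python
# Compact bilinear pooling via Tensor Sketch: FFT-domain convolution of count sketches.
import numpy as np

rng = np.random.default_rng(0)
n_txt, n_img, d = 620, 2048, 16000                    # illustrative sizes only

def make_sketch_params(n, d):
    h = rng.integers(0, d, size=n)                    # random bucket per input index
    s = rng.choice([-1.0, 1.0], size=n)               # random sign per input index
    return h, s

def count_sketch(x, h, s, d):
    out = np.zeros(d)
    np.add.at(out, h, s * x)                          # scatter-add signed entries
    return out

h1, s1 = make_sketch_params(n_txt, d)
h2, s2 = make_sketch_params(n_img, d)

def mcb(x, y):
    fx = np.fft.fft(count_sketch(x, h1, s1, d))
    fy = np.fft.fft(count_sketch(y, h2, s2, d))
    return np.real(np.fft.ifft(fx * fy))              # ~ count sketch of outer(x, y)

z = mcb(rng.normal(size=n_txt), rng.normal(size=n_img))
```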
Syw2ZgrFx
Reinterpreting Importance-Weighted Autoencoders
[ "Chris Cremer", "Quaid Morris", "David Duvenaud" ]
The standard interpretation of importance-weighted autoencoders is that they maximize a tighter lower bound on the marginal likelihood. We give an alternate interpretation of this procedure: that it optimizes the standard variational lower bound, but using a more complex distribution. We formally derive this result, and visualize the implicit importance-weighted approximate posterior.
[ "Unsupervised Learning" ]
https://openreview.net/pdf?id=Syw2ZgrFx
https://openreview.net/forum?id=Syw2ZgrFx
ICLR.cc/2017/workshop
2017
{ "note_id": [ "r1Smkgfoe", "SytDdtpjl", "rJ77QL8ix" ], "note_type": [ "official_review", "comment", "official_review" ], "note_created": [ 1489268509028, 1490028640789, 1489556250633 ], "note_signatures": [ [ "ICLR.cc/2017/workshop/paper160/AnonReviewer2" ], [ "ICLR.cc/2017/pcs" ], [ "ICLR.cc/2017/workshop/paper160/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Simple and clear result, useful insights\", \"rating\": \"7: Good paper, accept\", \"review\": \"The authors describe a simple and clear reinterpretation of importance\\nweighted autoencoders (Burda et al., 2016). I recommend this paper for\\nacceptance. It connects to much recent work on expressive variational\\napproximations, especially in those leveraging truncated Markov chains\\nas variational approximations. Further, it brings interesting ideas to\\nthe table following this simple derivation.\\n\\nA cool result is that this interpretation relaxes the idea of IWAEs to\\nbe more broadly applicable to any divergence measure. Perhaps a key\\nexperiment would not be so much comparing IWAEs with itself, but in\\nwhat this perspective allows, such as IWAE-based variational families\\nwith alpha-divergences or operator variational objectives. Or alternatively,\\ncombining the SIR approach with other rich posterior approximations.\\n\\nWith this perspective in mind, it's not necessarily clear if one\\nshould even use IWAEs over other expressive variational\\napproximations. From my understanding of the field, most benchmarks\\ndisplay IWAEs performing worse (in terms of held-out log-likelihood)\\nto others such as the variational Gaussian process (Tran et al.,\\n2016) and inverse autoregressive flows (Kingma et al., 2016).\\nThis isn't a fault of this paper\\u2014it's great that the casting brings\\nthese questions to bear\\u2014but I think it's something the paper should\\ncertainly address if it aims to be more substantial in an extended\\npaper in the future.\\n\\nThe notation is not described in the paper; while experts in the field\\ncan understand this, the work would benefit from properly laying out\\ndefinitions and properties.\\n\\nReferences\\n+ Dinh, L., Sohl-Dickstein, J., & Bengio, S. (2017). Density estimation using Real NVP. Presented at the International Conference on Learning Representations.\\n+ Kingma, D. P., Salimans, T., & Welling, M. (2016). Improving Variational Inference with Inverse Autoregressive Flow. Presented at the Neural Information Processing Systems.\\n+ Li, Y., & Turner, R. E. (2016). R\\u00e9nyi Divergence Variational Inference. Presented at the Neural Information Processing Systems.\\n+ Ranganath, R., Altosaar, J., Tran, D., & Blei, D. M. (2016). Operator Variational Inference. Presented at the Neural Information Processing Systems.\\n+ Tran, D., Ranganath, R., & Blei, D. M. (2016). The Variational Gaussian Process. Presented at the International Conference on Learning Representations.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}", "{\"title\": \"IWAE bound derived as VAE bound with particular implicit distribution q_IW\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"review\": \"This paper introduces a new perspective on IWAE.\\nIt is shown that the IWAE bound can be interpreted as a VAE bound with a particular implicit inference model q_IW. 
This implicit posterior distribution is a function of both the variational parameters, and the generative model parameters.\\n\\nThe derivation seems novel, and adds an interesting new link to IWAE and VAE objectives.\\n\\nA potential drawback of the IWAE posterior, in comparison to alternative methods for building complex posteriors, is that it is relatively expensive; you may require a large number of samples to converge to the true distribution, probabily especially so in high dimensional space. However, that's besides the point of this paper, and I think it still proposes a valid and interesting perspective on IWAE.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
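The reinterpretation discussed above has a direct procedural reading: the importance-weighted bound is a log-mean of importance weights, and a sample from the implicit posterior q_IW can be drawn by sampling-importance-resampling over the k proposals. The sketch below shows both on a toy Gaussian model; the callables and the deliberately mismatched q are placeholders for a trained VAE's decoder and encoder, not the paper's model.

```python
# IWAE bound and an (approximate) draw from the implicit posterior q_IW via SIR.
import numpy as np

rng = np.random.default_rng(1)

def iwae_bound_and_qiw_sample(x, sample_q, log_q, log_joint, k=50):
    z = np.stack([sample_q(x) for _ in range(k)])                   # z_1..z_k ~ q(z|x)
    log_w = np.array([log_joint(x, zi) - log_q(zi, x) for zi in z]) # importance weights
    w = np.exp(log_w - np.max(log_w))
    iwae = np.log(np.mean(w)) + np.max(log_w)                       # log (1/k) sum_i w_i
    idx = rng.choice(k, p=w / w.sum())                              # resample prop. to w
    return iwae, z[idx]

# Toy example: p(z)=N(0,1), p(x|z)=N(z,1), deliberately mismatched q(z|x)=N(0,1).
log_norm = lambda v, m, s: -0.5 * np.log(2 * np.pi * s**2) - 0.5 * ((v - m) / s) ** 2
bound, z_iw = iwae_bound_and_qiw_sample(
    x=1.5,
    sample_q=lambda x: rng.normal(0.0, 1.0),
    log_q=lambda z, x: log_norm(z, 0.0, 1.0),
    log_joint=lambda x, z: log_norm(z, 0.0, 1.0) + log_norm(x, z, 1.0),
)
```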
BkyScySKl
Joint Embeddings of Scene Graphs and Images
[ "Eugene Belilovsky", "Matthew Blaschko", "Jamie Ryan Kiros", "Raquel Urtasun", "Richard Zemel" ]
Multimodal representations of text and images have become popular in recent years. Text however has inherent ambiguities when describing visual scenes, leading to the recent development of datasets with detailed graphical descriptions in the form of scene graphs. We consider the task of joint representation of semantically precise scene graphs and images. We propose models for representing scene graphs and aligning them with images. We investigate methods based on bag-of-words, subpath representations, as well as neural networks. Our investigation proposes and contrasts several models which can address this task and highlights some unique challenges in both designing models and evaluation.
[ "scene graphs", "images", "text", "task", "models", "joint embeddings", "images joint embeddings", "representations", "popular", "recent years" ]
https://openreview.net/pdf?id=BkyScySKl
https://openreview.net/forum?id=BkyScySKl
ICLR.cc/2017/workshop
2017
{ "note_id": [ "S1y76deie", "BynU_tTog", "S1VYt7lcl" ], "note_type": [ "official_review", "comment", "official_review" ], "note_created": [ 1489173782988, 1490028627677, 1488103804323 ], "note_signatures": [ [ "ICLR.cc/2017/workshop/paper143/AnonReviewer1" ], [ "ICLR.cc/2017/pcs" ], [ "ICLR.cc/2017/workshop/paper143/AnonReviewer2" ] ], "structured_content_str": [ "{\"title\": \"Review\", \"rating\": \"7: Good paper, accept\", \"review\": \"This paper investigates a set of simple models for generating scene and image graph embeddings. Scene graphs are represented either with count features on their constituent nodes, count features on their constituent nodes and short paths, or convolutionally. A margin objective is then used to learn projections from the space of image features and graph representations into a joint embedding space. This paper finds that on a dataset of scene graphs, the representation based on path counts outperforms the other two approaches both in identifying similar images to the one for which the graph was annotated, and in retrieving the target image.\\n\\nThis is a clean, focused, and well presented contribution. It's an interesting result that the approach based on path counts outperforms the convolutional / GraphNN approach---it seems like count-based models have generally been on the way out (at least in machine translation and language modeling). Presumably it's the relatively small size of the training data that makes them still useful here. It might be useful to mention how you think this approach might scale to larger datasets. How big is the object vocabulary? How many subpaths occur in the test set but not the training set? Are you doing anything else (e.g. smoothing, backoff) to deal with count sparsity?\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}", "{\"title\": \"Strong baselines for scene graph prediction\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"review\": \"The submission studies scene graph prediction via joint embeddings of images and graphs. It evaluates two different embeddings (one of which is a simple baseline). The graph embeddings are evaluated in ranking experiments. Interestingly, a representation that is essentially a \\\"bag of small subgraphs\\\" performs very competitively; it substantially outperforms graph network representations.\\n\\nScene graph prediction will likely become an increasingly important topic in computer vision, and this submission presents some string baselines for the problem. Having such baselines is extremely important: so, even though the paper does not introduce new algorithms, I would recommend that this submission is accepted.\\n\\nIt would be interesting to see to what extent these results generalize to larger datasets that have a long tail of relationship and node types, such as VisualGenome; I encourage the authors to perform such experiments for a future full version of this paper.\", \"the_submission_should_probably_cite_this_related_work\": \"https://arxiv.org/pdf/1701.02426.pdf\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
BkDDM04Ke
Conditional Image Synthesis With Auxiliary Classifier GANs
[ "Augustus Odena", "Christopher Olah & Jonathon Shlens" ]
Synthesizing high resolution photorealistic images has been a long-standing challenge in machine learning. In this paper we introduce new methods for the improved training of generative adversarial networks (GANs) for image synthesis. We construct a variant of GANs employing label conditioning that results in 128 × 128 resolution image samples exhibiting global coherence. We expand on previous work for image quality assessment to provide two new analyses for assessing the discriminability and diversity of samples from class-conditional image synthesis models. These analyses demonstrate that high resolution samples provide class information not present in low resolution samples. Across 1000 ImageNet classes, 128 × 128 samples are more than twice as discriminable as artificially resized 32 × 32 samples. In addition, 84.7% of the classes have samples exhibiting diversity comparable to real ImageNet data.
[ "samples", "gans", "conditional image synthesis", "auxiliary classifier gans", "challenge", "machine learning", "new methods", "improved training" ]
https://openreview.net/pdf?id=BkDDM04Ke
https://openreview.net/forum?id=BkDDM04Ke
ICLR.cc/2017/workshop
2017
{ "note_id": [ "rkafBhgsg", "BykxQ0C9g", "r1tHOKpje" ], "note_type": [ "official_review", "official_review", "comment" ], "note_created": [ 1489188117339, 1489064678637, 1490028609241 ], "note_signatures": [ [ "ICLR.cc/2017/workshop/paper116/AnonReviewer1" ], [ "ICLR.cc/2017/workshop/paper116/AnonReviewer2" ], [ "ICLR.cc/2017/pcs" ] ], "structured_content_str": [ "{\"title\": \"Not much different\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"This work proposes to add a class label to both the generator and discriminator of the GAN network. This is intuitive, but is NOT novel. Conditioning the posterior distribution on the class label is an old idea. I also agree with the other reviewer that filling the appendix with a lot of new and relevant content is poor form.\\n\\nThe presentation is a bit sloppy. The curves in Figure 4 are missing a legend.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Uninsightful and in need of much more work\", \"rating\": \"2: Strong rejection\", \"review\": \"Let me preface my review by saying that I didn\\u2019t read the appendix because I think it is bad form to add a paper worth of additional material to what is supposed to be an extended abstract, and the main text unfortunately did not inspire me to read further either.\", \"the_authors_propose_to_combine_two_ideas_for_improving_generative_modeling_with_gans\": \"conditioning the generator on class labels and training the discriminator to reconstruct the labels.\\n\\nGiven that both ideas are simple and have been used in isolation, the project has little to offer conceptually. This could still be an interesting abstract if it evaluated well the effect of combing both ideas. Unfortunately, this does not seem to be the case. \\n\\nAny evaluation based on samples is necessarily very limited, as a model which simply stores the training data will score as well as the true distribution of natural images. A more useful comparison would have been to compare samples of two generators with the same architecture and trained on the same data, one trained with the proposed changes and one without.\\n\\nThe value of the analysis in Figure 2 is not at all clear to me. Showing the effect of throwing away high-spatial frequency information tells me that the classifier is using that information, and that the generator is not merely interpolating low-resolution images. But it tells me very little about the effectiveness of the proposed changes to GAN training.\\n\\nThe paper also seems sloppily written. E.g., in the introduction the authors claim that Balle et al. (2015) describe an advance in the state of the art in image denoising. Looking at the paper I couldn\\u2019t find such a claim or a comparison to the state of the art (non-parametric methods such as BM3D and discriminative methods such as feed-forwardly trained neural nets). The authors cite Toderici et al. (2016) as an example of image models advancing compression, but to my knowledge this paper uses binarized hidden states of a recurrent neural network and no image model.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"decision\": \"Reject\", \"title\": \"ICLR committee final decision\"}" ] }
BJdmMd4Yg
Who Said What: Modeling individual labelers improves classification
[ "Melody Y. Guan", "Varun Gulshan", "Andrew M. Dai", "Geoffrey E. Hinton" ]
Data are often labeled by many different experts, with each expert labeling a small fraction of the data and each sample receiving multiple labels. When experts disagree, the standard approaches are to treat the majority opinion as the truth or to model the truth as a distribution, but these do not make any use of potentially valuable information about which expert produced which label. We propose modeling the experts individually and then learning averaging weights for combining them, possibly in sample-specific ways. This allows us to give more weight to more reliable experts and take advantage of the unique strengths of individual experts at classifying certain types of data. We show that our approach performs better than three competing methods in computer-aided diagnosis of diabetic retinopathy.
[ "Computer vision", "Deep learning", "Supervised Learning" ]
https://openreview.net/pdf?id=BJdmMd4Yg
https://openreview.net/forum?id=BJdmMd4Yg
ICLR.cc/2017/workshop
2017
{ "note_id": [ "B1sFvSbie", "ByBNOYase", "S17AN1Vil", "HkotYbjil", "rkhWNJ-jg" ], "note_type": [ "comment", "comment", "official_review", "comment", "official_review" ], "note_created": [ 1489225603147, 1490028588700, 1489396939463, 1489865090557, 1489200131965 ], "note_signatures": [ [ "~Melody_Yun_Jia_Guan1" ], [ "ICLR.cc/2017/pcs" ], [ "ICLR.cc/2017/workshop/paper83/AnonReviewer2" ], [ "~Melody_Yun_Jia_Guan1" ], [ "ICLR.cc/2017/workshop/paper83/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Response to AnonReviewer1\", \"comment\": \"Thanks so much to the reviewer for their time and comments! Responses to the three details you mentioned are below:\\n\\n1. \\n\\n\\\"how could this be detected, since they may have provided a disproportionate number of labels in the test set as well?\\\"\\n\\nThey do not provide a disproportionate number of labels in the test set because the doctors used in training/validation are disjoint from the doctors used in the test set.\\n- \\\"3 retina specialists graded all images in the test dataset, and any disagreements were discussed until a consensus label was obtained\\\" (Appendix C)\\n- \\\"we remove grades of doctors who graded test set images from training and validation sets to reduce the chance that the model is overfitting on certain experts.\\\" (Appendix E)\\nFor additional clarity we updated the paper to include this second point from Appendix in page 2 paragraph 1 as well (see revision).\\n\\n\\\"Is there any rebalancing between doctors (as opposed to classes)?\\\"\\n\\nWe do not rebalance between doctors. In a sense this is an implementation choice (i.e. it is reasonable to try rebalancing the doctors) but we also felt that it was better to allow doctors who labelled more examples to have more say. This is because we can create better models for doctors with more data, which means that a) all else being equal, their models will have better predictions, and b) their models' reliabilities can be more confidently estimated so if a doctor is bad this will be reflected in its weight. Also note that the baseline of using the average labeler opinion favors more frequent labelers as well so this is not a phenomenon limited to our approach.\\n\\n2. \\n\\nPlease note that the test distribution is not assumed to be known (it would indeed be a questionable assumption)! Rather, \\\"Our assumed test class distribution for computing the log prior correction was the mean distribution of all known images (those of the training and validation sets)\\\" (page 8, paragraph 2). Also yes, all baselines in comparisons use this adjustment. \\n\\n3. \\n\\nWe have updated page 3 paragraph 1 to include the formula for additional clarity (see revision).\\n\\nThanks again! We hope that this resolves all your concerns!\"}", "{\"decision\": \"Reject\", \"title\": \"ICLR committee final decision\"}", "{\"title\": \"Empirical results on an unique dataset using mixing of experts\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"Despite the interesting results, the paper is a very empirical paper on mixing of experts. Mixing of experts is a very old issue. Authors should cite some references around mixing of experts that would help the reader to understand the problem at hand and to asses the contributions.\\n\\nHonestly i do not understand all the nets proposed (Figure 4, doesn't help me). In general section 3 should be improved. 
\\n\\nAround the idea of mixing of experts i remember a paper that was published in Nature (i think so) where the authors proposed a very interesting idea for mixing experts beyond the typical weight associated to the expert reliability. The authors propose to ask an additional question to the experts about what they think the other experts are going to answer. And use this additional question to detect where an expert is a good expert. For instance, when one expert is sure about his decision but at the same time he knows that the problem is hard he thinks that the other experts (or some group) are going to fail and then he is going to claim that his answer is A but others experts answer is going to be B. \\n\\nIn my opinion, these are the things that would be interesting to explore, beyond weighting opinion. Perhaps a NN could help to solve this problem without that additional question. Perhaps this paper is in that direction but sorry i couldn't understand it.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Response to AnonReviewer2\", \"comment\": \"Thank you very much for your time and review! We have tried our best to to incorporate your feedback and clarify things for you.\\n\\nWe do think it would be a good idea to add references; we have revised our paper to discuss prior literature in crowdsourcing which deals with the same problem space tackled by our paper. Refer to the the \\\"Estimating doctor reliability with EM\\\" paragraph of section 3.\\n\\nWe would also like to clarify that our work is distinct from the usual \\\"mixture of experts\\\" papers (MoE concept introduced by [1,2]). **Usual \\\"mixture of experts\\\" models are not about modeling individual experts, but rather training latent experts on the same data.** (In more detail: These latent experts are trained using a training set where each data point has a single label and there is also no information on the origin of the label. Our paper concerns datasets labelled by multiple *observed* experts where each data point has multiple overlapping labels from a subset of the experts. In this context we are combining experts in a way not explored before, learning from the identity of individual experts by modeling them (with each modeled expert trained on a restricted subset of the data), learning their specialties, and learning how to combine them.)\\n\\nThe Nature paper referenced is probably \\\"A solution to the single-question crowd wisdom problem\\\" [3]. We find this paper extremely interesting as well, but like the reviewer pointed out, it involves asking extra questions (what each expert thinks the popular opinion would be) and that is not feasible for existing large datasets which have already been labeled by several experts (often with huge expenses). Our goal was to develop a method that could be applied to existing labeled datasets, as is the case with the vast majority of real world datasets.\\n \\nTo help readers better understand the nets, we rewrote section 3. We also moved the paragraph on binary loss in section 4 to the appendix (Appendix J) in order to provide more space for section 3. But due to the 3-page limit for workshop papers, there was only so much more we could add, so Appendix D we also added 3 additional paragraphs of detailed explanation of the net with references to parts of Figure 4. We also provided the loss inputs in tabular form (previously this information was only provided in text from). 
We hope that these changes are helpful, and if the reviewer has any specific points of confusion we would be very happy to address those in further comments!\\n\\nHopefully our response helps the reviewer better understand the context and content of the paper. We believe our approach to be novel, simple and useful, and thank the reviewer for their helpful comments.\\n\\n[1] R. A. Jacobs, M. I. Jordan, S. J. Nowlan, and G. E. Hinton. 1991. Adaptive mixtures of local experts. Neural Computing. 3, 1 (February 1991), 79-87\\n\\n[2] M. I. Jordan and R. A. Jacobs. 1994. Hierarchical mixtures of experts and the EM algorithm. Neural Computing. 6, 2 (March 1994), 181-214\\n\\n[3] D. Prelec, H. S. Seung, and J. McCoy. 2017. A solution to the single-question crowd wisdom problem. Nature. 541 (January 2017), 532\\u2013535\"}", "{\"rating\": \"6: Marginally above acceptance threshold\", \"review\": [\"This work aims to improve classification accuracy in cases where there is high disagreement among labelers, some of which may be due to systemic differences between labelers. The general approach is simple and interesting, making separate predictions for each labeler individually, and averaging at test time. Weights to make this a weighted average are also learned, and two additional conditionings for the model are explored. A single dataset, to classify diabetic retinopathy, is explored.\", \"Overall I feel this is an interesting approach, though a few details could be better explained and justified, in my opinion:\", \"If a single doctor does more labeling than any other doctor, the majority vote may tend to favor this labeler (they have more chances to be in the majority). Would the learned weights then mostly just favor the most frequent doctor, and how could this be detected, since they may have provided a disproportionate number of labels in the test set as well? Is there any rebalancing between doctors (as opposed to classes)?\", \"The appendix mentions a step where the biases are adjusted to account for class frequencies in the test set. IMO this is a slightly questionable step, assuming that the test distribution is known, but this indeed may be the case in many situations. Also I'm supposing that all baselines in comparisons also used this adjustment -- is this the case?\", \"I feel the summary of the loss theta_ll' could be a bit clearer: What is the final loss exactly?\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
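A minimal sketch of the "model each labeler, then learn averaging weights" idea follows; it is an illustration, not the authors' architecture. The global (non image-specific) mixing weights and the layer sizes are assumptions, and `trunk` stands in for whatever CNN feature extractor is used.

```python
# Shared trunk, one softmax head per doctor, learned weights to mix their predictions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PerLabelerHeads(nn.Module):
    def __init__(self, trunk, feat_dim, n_classes, n_doctors):
        super().__init__()
        self.trunk = trunk
        self.heads = nn.ModuleList([nn.Linear(feat_dim, n_classes) for _ in range(n_doctors)])
        self.mix_logits = nn.Parameter(torch.zeros(n_doctors))

    def forward(self, x):
        h = self.trunk(x)
        per_doc = torch.stack([head(h) for head in self.heads], dim=1)  # B x D x C
        probs = F.softmax(per_doc, dim=-1)
        w = F.softmax(self.mix_logits, dim=0)                           # reliability weights
        return (w[None, :, None] * probs).sum(dim=1)                    # averaged prediction

# Training: each head would receive a cross-entropy loss only on the images its
# doctor actually labeled; the weighted average is used at test time.
```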
ByWhxeHtx
Bottom Up or Top Down? Dynamics of Deep Representations via Canonical Correlation Analysis
[ "Maithra Raghu", "Jason Yosinski", "Jascha Sohl-Dickstein" ]
We present a versatile quantitative framework for comparing representations in deep neural networks, based on Canonical Correlation Analysis, and use it to analyze the dynamics of representation learning during the training process of a deep network. We find that layers converge to their final representation from the bottom-up, but that the representations themselves migrate downwards in the network over the course of learning.
[ "Theory", "Deep learning" ]
https://openreview.net/pdf?id=ByWhxeHtx
https://openreview.net/forum?id=ByWhxeHtx
ICLR.cc/2017/workshop
2017
{ "note_id": [ "HJRpSdese", "HyR_KUgjx", "Skrvdtpog" ], "note_type": [ "official_review", "official_review", "comment" ], "note_created": [ 1489171909924, 1489164662448, 1490028636780 ], "note_signatures": [ [ "ICLR.cc/2017/workshop/paper155/AnonReviewer2" ], [ "ICLR.cc/2017/workshop/paper155/AnonReviewer3" ], [ "ICLR.cc/2017/pcs" ] ], "structured_content_str": [ "{\"title\": \"Interesting approach to study dynamics of DNNs with a rather incomplete presentation\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"This work studies similarities between data representations of DNNs during training using canonical correlation analysis (CCA). Authors present two conclusions based on this analysis framework. First, during training, the lower layers converge faster to the final distribution (up to affine transformation) compared to the upper layers. Then, authors observe that The final layer correlates more with the lower layers during the early stages of training compared to the final stages.\\n\\nThe observed properties are rather interesting and especially the second observation would be a quite surprising observation. It is known that for DNN training, the neural networks need to be over-parametrised, but little is known about the reasons why. The proposed explanation of a low-level image representations crawling down from upper layers sounds like an intriguing explanation, however it is not clear whether the observed effect is not only an artifact of the non-linear operation of the logit layer (as it seems from the Figure 1).\\n\\nFrom the technical perspective, the paper is really brief and unfortunately is missing out some important details (what final layer is used in the Figure 3 experiment, how are convolutional features handled, reason for non-symmetry of the tensors in Figure 1). The structure of the manuscript is rather unusual as it does not contain final discussion/conclusions.\\n\\nIn general, it is a quite interesting idea, however feels a bit unfinished. Furthermore, considering the goals of the ICLR Workshop, it does not seem to fall to any of the \\\"late-breaking developments, very novel ideas and position papers\\\" categories. If these requirements were relaxed and the work was a bit extended, I believe it would be an interesting workshop submission paper.\", \"pros\": [\"Neat and simple idea how to study properties of image representations during training\", \"Interesting perspective on the hidden units as vectors in function space which nicely fits to the CCA analysis\"], \"cons\": [\"Seems to be unfinished, missing some important details\", \"Unfortunately, does not fit the requirements of the ICLR 2017 Workshops\"], \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Interesting direction for future research, currently too preliminary for ICLR workshop focus areas\", \"rating\": \"3: Clear rejection\", \"review\": \"Thanks to the authors for sharing this technique and the direction they're heading with their research.\\n\\nThis work applies canonical-correlation analysis between layers of a deep neural network to its own intermediate layers, in a FCN and CNN setting. Through this visualization, the authors observe a bottom-up convergence pattern in two networks trained for classification, an FCN for MNIST and CNN for CIFAR-10. 
This is interpreted as the network learning converging to low-level representations quickly, and building upwards toward higher-level representations more slowly during training.\\n\\nThe authors also make an observation about what they describe as the \\\"1% rows / higher layers of the network\\\" being similar to their final representations. This is interpreted as the network learning final representations most quickly which are then \\\"squeezed from the top down\\\" to fit into lower layers through training.\\n\\nThis point is unclear, as there is no label corresponding to 1% rows on the diagrams, but it likely refers to the stage at 3% in the training where the \\\"out\\\" layer has correlation between 0.7 and 0.9 with all layers for the MNIST example, and 0.1- 0.65 in the CIFAR-10 example.\\n\\nSince the gradient signal is strongest at the top layer, the phenomenon may be simply a characteristic of gradient descent rather than a feature of representation learning by deep networks. Moreover, initialization and training algorithm will heavily influence this pattern in the visualization. These points are not explored in the current version of the paper, weakening the conjectures about representation learning by the network.\\n\\nCCA as a method of studying correlation patterns among layers in a deep network is interesting, and I look forward to seeing more work from the authors in this area. For the purposes of the ICLR workshop track, which seeks to emphasize late-breaking developments, very novel ideas and position papers, I assess this as not appropriate.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"decision\": \"Reject\", \"title\": \"ICLR committee final decision\"}" ] }
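The analysis described here reduces to computing canonical correlations between two (examples × neurons) activation matrices, e.g. a layer at training step t versus the same or another layer at convergence. A small sketch using the SVD route is below; the random matrices are placeholders for real activations, and the truncation tolerance is an arbitrary choice.

```python
# Canonical correlations between two activation matrices via orthonormal bases.
import numpy as np

def mean_cca_similarity(A, B, tol=1e-6):
    A = A - A.mean(axis=0)
    B = B - B.mean(axis=0)
    Ua, sa, _ = np.linalg.svd(A, full_matrices=False)
    Ub, sb, _ = np.linalg.svd(B, full_matrices=False)
    Ua = Ua[:, sa > tol * sa.max()]                    # drop numerically null directions
    Ub = Ub[:, sb > tol * sb.max()]
    rho = np.linalg.svd(Ua.T @ Ub, compute_uv=False)   # canonical correlations
    return rho.mean()

rng = np.random.default_rng(0)
layer_early = rng.normal(size=(1000, 256))             # placeholder: activations at step t
layer_final = rng.normal(size=(1000, 256))             # placeholder: converged activations
print(mean_cca_similarity(layer_early, layer_final))
```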
rJnjwsYde
Variational Reference Priors
[ "Eric Nalisnick", "Padhraic Smyth" ]
In modern probabilistic learning, we often wish to perform automatic inference for Bayesian models. However, informative priors are often costly to elicit, and in consequence, flat priors are chosen with the hopes that they are reasonably uninformative. Yet, objective priors such as the Jeffreys and Reference would often be preferred over flat priors if deriving them was generally tractable. We overcome this problem by proposing a black-box learning algorithm for Reference prior approximations. We derive a lower bound on the mutual information between data and parameters and describe how its optimization can be made derivation-free and scalable via differentiable Monte Carlo expectations. We experimentally demonstrate the method's effectiveness by recovering Jeffreys priors and learning the Variational Autoencoder's Reference prior.
[ "flat priors", "reference", "modern probabilistic learning", "automatic inference", "bayesian models", "informative priors", "costly", "consequence", "hopes" ]
https://openreview.net/pdf?id=rJnjwsYde
https://openreview.net/forum?id=rJnjwsYde
ICLR.cc/2017/workshop
2017
{ "note_id": [ "r1ZMuFaog", "rk0lV27il", "HJkyXKNjx", "SJG51Nfsl", "rkEy9ugix" ], "note_type": [ "comment", "comment", "comment", "official_review", "official_review" ], "note_created": [ 1490028552615, 1489384437793, 1489437398715, 1489285002328, 1489172956366 ], "note_signatures": [ [ "ICLR.cc/2017/pcs" ], [ "~Eric_Nalisnick1" ], [ "~Eric_Nalisnick1" ], [ "ICLR.cc/2017/workshop/paper12/AnonReviewer2" ], [ "ICLR.cc/2017/workshop/paper12/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}", "{\"title\": \"Author Response\", \"comment\": \"Thanks for your thoughtful comments, Reviewer #1. We, in general, agree with your assessment. Below are a few responses and comments.\\n\\n1. Indeed, the step from the 1d models to the VAE is large. We left out discussion of some intermediate models (ex: Gaussian mixtures) because we wanted to include the VAE result, which we thought would be of more interest to the ICLR community.\\n\\n2. On whether the VAE result is trustworthy: assuming a euclidean latent space, the VAE's true reference prior is a function that approaches infinity at the domain's extremes. Our reference prior approximation clearly exhibits these characteristics, and therefore we think it's extremely unlikely that optimization is finding some pathological or unrepresentative solution.\\n\\n3. On scale-invariance: reference priors are usually identifiable only up to proportionality; so yes, they are scale invariant. Actually, our method allows the user to sidestep these problems with the reference prior (ex: inability to be normalized) because we can learn an approximation that is well-behaved, a proper distribution, etc. \\n\\nThanks again,\\nEric\"}", "{\"title\": \"Author Response\", \"comment\": \"Thanks for your attentive comments, Reviewer #2.\", \"on_estimator_variance\": \"the estimator does have high variance, but it is not as bad as the harmonic mean estimator's, to which I believe you're referring. When using a finite negative value for alpha, the estimator becomes very similar to the harmonic mean (but exponentiated), and this is why we use the VR-max estimate instead. We found learning to be stable when in less that 20 dimensions.\\n\\nDoes the reference prior yield a better density model than the spherical Gaussian one?: preliminary experiments were inconclusive. The reference prior resulted in a better model for 25d but worse in 2d, but in each case the difference was slight, <0.1 .\\n\\nIn the VAE experiment, what keeps the prior from expanding to be infinitely broad?: firstly, the neural network sampler must have finite weights, resulting in the prior having finite domain. Secondly, if the decoder network uses units that can saturate, the prior will stop expanding once the downstream activations becomes sufficiently large.\\n\\nIs the true reference prior guaranteed to be proper? What happens if it is improper?: it most likely won't be proper, which is a benefit of our methodology since it allows us to find an approximation that is proper (or has other properties the user desires). 
Yet, MCMC usually still works for improper posteriors though: http://stats.stackexchange.com/questions/211917/sampling-from-an-improper-distribution-using-mcmc-and-otherwise\\n\\nThanks again,\\nEric\"}", "{\"title\": \"Interesting approach to learning priors for generative models\", \"rating\": \"7: Good paper, accept\", \"review\": \"This extended abstract proposes an interesting method to learn a reference prior distribution using a variational formulation with the reparameterization trick. There is a need for this sort of work, since the generic prior distributions commonly used in VAEs and GANs are somewhat unsatisfying. The idea of learning a reference prior is interesting, and I haven\\u2019t seen it discussed in the context of deep generative models.\\n\\nThe contributions and experiments seem sufficient for a workshop paper, so I would recommend acceptance. \\n\\nI\\u2019m a little concerned about the variance of the estimates in Eqn. (3); this resembles the likelihood weighting based estimate of p(D), which can have extremely large, or even infinite variance. Are the estimates stable?\\n\\nThe VAE example is interesting. What can we learn from the shape of the learned prior? Does the bimodal structure imply the distribution is multimodal? Does the reference prior yield a better density model than the spherical Gaussian one?\\n\\nIn the VAE experiment, what keeps the prior from expanding to be infinitely broad? Is the true reference prior guaranteed to be proper? What happens if it is improper?\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"An interesting and original proposal to bring (uninformative) reference priors to deep generative models. This work might contribute an important and original argument for the discussion of how to choose priors for deep generative models.\", \"rating\": \"7: Good paper, accept\", \"review\": \"The paper is well written and presents a novel variational approach to find approximate reference priors for arbitrary models. The derivation of their method is clear and easy to follow. In the experimental section, the authors first show that their method recovers the well known Jeffreys prior for 1-dimensional toy models with high accuracy. They then show that they can also find an approximate reference prior for a VAE model with 2-dimensional latent space: This prior is significantly different from the widely used isotropic Gaussian, e.g., is is multimodal. This could be a significant result and might contribute important arguments for the discussion of how to choose priors and how to choose the model structure for deep generative models. Unfortunately, and probably due to the 3 page constraint for this workshop, I\\u2019m not convinced that these results are 100% trustworthy: The step from 1d models, where the proposed method works as expected, to VAE-style latent variable models seems rather big and I can imagine various ways how the optimization might fail and produce misleading results. Additional results for models of intermediate complexity and more details/diagnostics could greatly enhance this paper (but would probably break the 3 page limit). I\\u2019m also wondering whether there is a scale-invariance / degeneracy in the model: Scaling the mean/stddev. 
of the prior and posterior by a constant factor should result in an equivalent model.\\n\\n\\n\\nNevertheless, I think this is very interesting work which has the potential to initiate a new discussion about priors for generative models.\", \"pro\": [\"original approach; well motivated\", \"experiments show the method works on 1d toy models\", \"the result for the 2-dimensional VAE is surprising and might form some kind of argument for future work on latent variable models -> potential high impact.\"], \"con\": [\"weak experimental section: I\\u2019m not convinced that the result for the 2d VAE is trustworthy.\"], \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
SJlj8CNYl
Unsupervised Motion Flow estimation by Generative Adversarial Networks
[ "Stefano Alletto", "Luca Rigazio" ]
In this paper we address the challenging problem of unsupervised motion flow estimation. Under the assumption that image reconstruction is a super-set of the motion flow estimation problem, we train a convolutional neural network to interpolate adjacent video frames and then compute the motion flow via region-based sensitivity analysis by backpropagation. We postulate that better interpolations should result in better motion flow estimation. We then leverage the modeling power of energy-based generative adversarial networks (EbGAN's) to improve interpolations over standard L2 loss. Preliminary experiments on the KITTI database confirm that better interpolations from EbGAN's significantly improve motion flow estimation compared to both hand-crafted features and deep networks relying on standard losses such as L2.
[ "Computer vision", "Deep learning", "Unsupervised Learning" ]
https://openreview.net/pdf?id=SJlj8CNYl
https://openreview.net/forum?id=SJlj8CNYl
ICLR.cc/2017/workshop
2017
{ "note_id": [ "ryKBGigsg", "H1yB628ig", "r1g3rdY6sx" ], "note_type": [ "official_review", "official_review", "comment" ], "note_created": [ 1489183297015, 1489583414714, 1490028612379 ], "note_signatures": [ [ "ICLR.cc/2017/workshop/paper121/AnonReviewer1" ], [ "ICLR.cc/2017/workshop/paper121/AnonReviewer2" ], [ "ICLR.cc/2017/pcs" ] ], "structured_content_str": [ "{\"title\": \"Official review\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"The paper proposes an unsupervised learning approach to image matching. The authors train a deep network for video frame interpolation, and use the trained model to infer the correspondences between frames with backpropagation-based sensitivity analysis. The authors show that adversarial training of the interpolation network improves the accuracy of the predicted matches.\\n\\nThis general approach to learning to match images has been introduced by (Long et al., ECCV 2016). The contribution of the paper is in adding adversarial loss to the method and showing it improves the quality of the predicted matches.\\n\\nThe paper is written clearly, contains novel and fairly interesting results.\", \"pros\": [\"The fact that adversarial training on image interpolation indirectly improves the quality of the matches (~10% relative improvement in accuracy@5, ~20% relative decrease in EPE) is interesting.\", \"The method is using a somewhat non-standard GAN formulation based on EbGAN. It is not clear if this formulation is advantageous, though\", \"The method is compared to relevant baselines\"], \"cons\": [\"Limited novelty: \\\"take an existing method and add a GAN\\\" is not a very original approach\", \"The results of the method are on par with (Long et al., ECCV 2016) and worse than another unsupervised method by (Yu et al., arxiv 2016)\", \"I am not sure how to calibrate my score for the workshop track, so please take the rating with a grain of salt. This is not a bad paper, but I don't see \\\"very novel ideas\\\" in it.\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"The paper proposes a GAN architecture that given frames t,t+2 interpolates to find t+1, building upon the method of Long et al for optical flow estimation through frame interpolation, by adding a discriminator to the output image. However, it does not compare against Long et al., so we do not know at the end, if adding the adversarial network helps. If the authors could clarify that, it would be important for the paper. My other note would be for them to provide one paragraph describing the method of Long et al a bit more in detail, as now someone needs to read Long's paper to get the full picture.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"decision\": \"Reject\", \"title\": \"ICLR committee final decision\"}" ] }
rknkNR7Ke
Trace Norm Regularised Deep Multi-Task Learning
[ "Yongxin Yang", "Timothy M. Hospedales" ]
We propose a framework for training multiple neural networks simultaneously. The parameters from all models are regularised by the tensor trace norm, so that each neural network is encouraged to reuse others' parameters if possible -- this is the main motivation behind multi-task learning. In contrast to many deep multi-task learning models, we do not predefine a parameter sharing strategy by specifying which layers have tied parameters. Instead, our framework considers sharing for all shareable layers, and the sharing strategy is learned in a data-driven way.
[ "parameters", "trace norm", "deep", "learning", "framework", "multiple neural networks", "models", "tensor trace norm", "neural network", "others" ]
https://openreview.net/pdf?id=rknkNR7Ke
https://openreview.net/forum?id=rknkNR7Ke
ICLR.cc/2017/workshop
2017
{ "note_id": [ "rJSX_K6ox", "S14afRlix", "rkJy3vwsx" ], "note_type": [ "comment", "official_review", "official_review" ], "note_created": [ 1490028572685, 1489195707882, 1489628118951 ], "note_signatures": [ [ "ICLR.cc/2017/pcs" ], [ "ICLR.cc/2017/workshop/paper54/AnonReviewer1" ], [ "ICLR.cc/2017/workshop/paper54/AnonReviewer2" ] ], "structured_content_str": [ "{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}", "{\"title\": \"Interesting idea but weak results (for now)\", \"rating\": \"7: Good paper, accept\", \"review\": \"The authors introduce the idea to train multi-task networks by constructing separate networks for different tasks and then putting a limit on the tensor-norm on shareable layers. In this ways it's not needed to explicitly design sharing (but different tasks still need to share the architecture). The proposed tensor losses are not differentiable, so to optimize them with SGD during training the author use sub-gradient descent. These are certainly interesting ideas which warrant acceptance. The presented results, improving accuracy on Omniglot from about 34% to about 36% are very weak though, considering that SOTA for deep learning (but using metric learning) is above 90% (e.g. from matching networks). Or is this not a fair comparison? In any case, the paper certainly warrants workshop acceptance.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Review\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"## Quality:\\nThis is an interesting paper presenting a cute idea. It reads a bit like a \\\"this didn't work as well\\\"-alternative approach to its sister paper implementing the same idea with Tensor Factorisation, which is on ICLR main conference: https://openreview.net/pdf?id=SkhU2fcll\\n## Clarity:\\nVery clear, well-written.\\n## Significance:\\nThis seems the biggest problem. The results are quite weak, clearly inferior to the sister paper.\\nMost importantly, why is the baseline omniglot STL accuracy around 0.34 while in the sister paper the accuracy for the same baseline appears to be around 0.65 in Fig4 top left? Am I missing something here?\\nIn any case I believe that there should be a comparison against normal explicit sharing of the weights as baseline, which is easy to add in the plots.\", \"apart_from_that_there_is_some_smaller_remarks_that_impact_signficance\": \"+ there must be a lot of computational overhead to compute an SVD on each weight layer, which I assume needs to be computed after every weight update? What was the additional compute time?\\n+ The number of parameters is still the same as if these networks were trained independently, so parameter reduction is one advantage of hard explicit sharing which falls away here.\\n\\n## Other remarks:\\nA relevant application of MTL is multilingual acoustic model training in speech.\\nSee eg Scanzio et al 2008 (https://scholar.google.com/scholar?cites=2941155962830961778), which has all but the last layers shared, and Sercu et al 2015 (https://arxiv.org/abs/1509.08967) which is a CNN-based model and has multiple FC layers split.\\n\\nOverall, PRO: cute idea, novel, well-written paper. CON: a bit too similar to sister paper on main track, weak results (and please clarify the difference in baseline?)\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
r1QXQkSYg
Out-of-class novelty generation: an experimental foundation
[ "Mehdi Cherti", "Balázs Kégl", "Akın Kazakçı" ]
Recent advances in machine learning have brought the field closer to computational creativity research. From a creativity research point of view, this offers the potential to study creativity in relationship with knowledge acquisition. From a machine learning perspective, however, several aspects of creativity need to be better defined to allow the machine learning community to develop and test hypotheses in a systematic way. We propose an actionable definition of creativity as the generation of out-of-distribution novelty. We assess several metrics designed for evaluating the quality of generative models on this new task. We also propose a new experimental setup. Inspired by the usual held-out validation, we hold out entire classes for evaluating the generative potential of models. The goal of the novelty generator is then to use training classes to build a model that can generate objects from future (hold-out) classes, unknown at training time - and thus, are novel with respect to the knowledge the model incorporates. Through extensive experiments on various types of generative models, we are able to find architectures and hyperparameter combinations which lead to out-of-distribution novelty.
[ "Deep learning", "Unsupervised Learning" ]
https://openreview.net/pdf?id=r1QXQkSYg
https://openreview.net/forum?id=r1QXQkSYg
ICLR.cc/2017/workshop
2017
{ "note_id": [ "SkUcM-bjl", "HJIEbENog", "H1w-b4Esg", "rJf7vaeie", "SySL_Kpjl" ], "note_type": [ "official_review", "comment", "comment", "official_review", "comment" ], "note_created": [ 1489207949881, 1489416494485, 1489416446938, 1489192729762, 1490028621029 ], "note_signatures": [ [ "ICLR.cc/2017/workshop/paper134/AnonReviewer1" ], [ "~mehdi_cherti1" ], [ "~mehdi_cherti1" ], [ "ICLR.cc/2017/workshop/paper134/AnonReviewer2" ], [ "ICLR.cc/2017/pcs" ] ], "structured_content_str": [ "{\"rating\": \"7: Good paper, accept\", \"review\": \"This paper attempts to formalize a notion of creativity in generative models. The idea is to see if a generative model trained on one dataset can be used to generate novel samples that resemble elements of another dataset. In this case, it is examined whether a generative model trained on digits could be used to generate samples that look like alphabetical characters. Several metrics for determining the alphabetical nature of the generated samples are given; this is used as a proxy for novelty. It is shown that these can be useful in choosing models that generate novel samples outside of the classes the model was initially trained on.\\n\\nI can agree with the premise that when it comes to out-of-class novelty, likelihood is probably not a good measure since it will penalize models that generate samples that are too far outside of the data distribution. However, I'm not yet convinced that the conclusions drawn here would generalize beyond the specific examples given in the paper. It would be good in a future iteration to see this same analysis on another dataset, or perhaps even to reverse the existing experiment (train on alphabetical characters, evaluate on digits). Another possibility would be to test on several different alphabets, like those found in Omniglot.\\n\\nAlthough I think this particular analysis is limited (it is a workshop submission), I do think it proposes an interesting direction for measuring the novelty of samples from a generative model. I could see this being a potentially useful direction for measuring interesting properties of generative models in terms of creativity.\\n\\nHow are the panagrams (a)-(d) generated? Are letters chosen based on Euclidean distance to some reference characters?\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Answer\", \"comment\": \"Thank you for your comments and suggestions.\\nWe are working on a more detailed analysis to understand under \\nwhich conditions we obtain a model that generates novelty.\"}", "{\"title\": \"Answer\", \"comment\": \"Thank you for your comments and suggestions. We definitely want\\nto redo the same experiments and analysis on other settings or \\ndatasets like Omniglot for which the availability of a large\\nnumber of classes will be helpful.\\nRegarding your question about how the pangrams were generated,\\nwe took the set of images generated by a given model, then\\nwe selected manually one character from the top 16 in every letter, \\nwhere the top 16 was selected automatically according to the predicted\\nprobability of the letter according to the discriminator which was \\ntrained on digits and letters.\"}", "{\"title\": \"Review\", \"rating\": \"7: Good paper, accept\", \"review\": \"This paper attempts to formalize the notion of 'computational creativity' from a machine learning perspective, in order for machine learning researchers to make better progress on this problem. 
In particular, the authors propose measuring the 'computational creativity' of a model by several metrics intending to capture whether the model can generate new objects from classes unseen during training.\\n\\nI think this is an interesting paper and a good first step in this area. Indeed, absent proper definitions and metrics for vague concepts such as 'creativity', it is difficult to make progress on related computational problems. While the proposed metrics are not perfect,* they seem reasonable enough to warrant future investigation, and thus I think this paper is worthy of acceptance as an ICLR workshop paper.\\n\\n*Further thoughts: I'm not convinced that these metrics are selecting for the \\\"right\\\" models from a creativity point of view. If Figure 1 is really a random sample of digits generated by one of the 'most creative' models according to the proposed metrics, it seems like it is mostly just good at capturing lower-level correlations in the data, while generating random high-level details. Thus it seems like a 'creative' model is one that has been artificially limited in order to poorly model high-level features of the data. This seems intuitively to contrast with creativity as we perceive it in humans -- creative humans are still capable of modeling the world around them, they are just able to combine what they've learned in new and interesting ways. Perhaps 'true creativity' is out of the reach of current generative models? (Or, perhaps the word 'creativity' is not really meaningful from a computational perspective?) However, I'm not an expert in this area, and I still think the idea is worthwhile presenting, as it may generate interesting discussions. In future work, I'd like to see a more thorough analysis of what model settings lead to the most 'creative behaviour' according to these metrics.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}" ] }
rkdF0ZNKl
Fast Generation for Convolutional Autoregressive Models
[ "Prajit Ramachandran", "Tom Le Paine", "Pooya Khorrami", "Mohammad Babaeizadeh", "Shiyu Chang", "Yang Zhang", "Mark A. Hasegawa-Johnson", "Roy H. Campbell", "Thomas S. Huang" ]
Convolutional autoregressive models have recently demonstrated state-of-the-art performance on a number of generation tasks. While fast, parallel training methods have been crucial for their success, generation is typically implemented in a naive fashion where redundant computations are unnecessarily repeated. This results in slow generation, making such models infeasible for production environments. In this work, we describe a method to speed up generation in convolutional autoregressive models. The key idea is to cache hidden states to avoid redundant computation. We apply our fast generation method to the Wavenet and PixelCNN++ models and achieve up to 21x and 183x speedups respectively.
[ "Deep learning", "Applications" ]
https://openreview.net/pdf?id=rkdF0ZNKl
https://openreview.net/forum?id=rkdF0ZNKl
ICLR.cc/2017/workshop
2017
{ "note_id": [ "B1C8tvgil", "rkO7_Ypjx", "rkpqv3xoe" ], "note_type": [ "official_review", "comment", "official_review" ], "note_created": [ 1489168726329, 1490028575959, 1489188757381 ], "note_signatures": [ [ "ICLR.cc/2017/workshop/paper62/AnonReviewer2" ], [ "ICLR.cc/2017/pcs" ], [ "ICLR.cc/2017/workshop/paper62/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Very simple idea, but likely to be used\", \"rating\": \"7: Good paper, accept\", \"review\": \"This paper proposes a simple technique for speeding up generation from Convolutional Autoregressive Models (e.g., WaveNet and PixelCNN). The key observation is that if one naively generates each output from scratch without re-using any computation, then it is wasteful. The paper instead proposes to cache hidden state values across the generation of all the outputs that share the intermediate results. Experimentally the paper shows large speedups over the naive approach when the depth of a WaveNet is increased to 13+ layers and PixelCNN++ when the batch size is large.\\n\\nOverall the paper is clear, and the approach is a clear improvement over the naive version. One question I have, though, is if it wouldn't be simpler to just build a TensorFlow model that generates an entire output at once. That is, instead of building a TensorFlow model that generates the next pixel and then calling this model repeatedly, would it be possible to define a TensorFlow model that outputs a full image? (To deal with having to sample output values, the Gumbel-max trick could be used with all of the randomness needed supplied as an input). Then presumably the TensorFlow execution model would take care of all the necessary caching.\\n\\nA second question is about the relevance of the technique in the WaveNet experiments. The headline improvement of 21x doesn't happen until there are 15 layers in the WaveNet. Is this a useful parameter regime for the model?\", \"pros\": [\"This is a clearly better method than the naive approach, and the naive approach does appear to have been used before\", \"The idea is simple and clearly explained\", \"The authors are open-sourcing their implementation, which will likely be used by a number of people in the ICLR audience\"], \"cons\": [\"It's not obvious to me that this is the simplest way to implement the idea\", \"The idea is very simple, effectively being \\\"cache in the obvious way\\\"\", \"Overall I'd lean towards accepting, but I wouldn't fight strongly for it.\"], \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}", "{\"title\": \"simple and good\", \"rating\": \"7: Good paper, accept\", \"review\": \"This is a nice workshop paper. its a simple idea but people will be interested in it. If nothing else, the released code is valuable, and having the poster to advertise it is a good use of workshop poster space.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
SJgabgBFl
A Quantitative Measure of Generative Adversarial Network Distributions
[ "Dan Hendrycks*", "Steven Basart*" ]
We introduce a new measure for evaluating the quality of distributions learned by Generative Adversarial Networks (GANs). This measure computes the Kullback-Leibler divergence from a GAN-generated image set to a real image set. Since our measure utilizes a GAN's whole distribution, our measure penalizes outputs lacking in diversity, and it contrasts with evaluating GANs based upon a few cherry-picked examples. We demonstrate the measure's efficacy on the MNIST, SVHN, and CIFAR-10 datasets.
[ "measure", "quantitative measure", "gans", "new measure", "quality", "distributions", "generative adversarial networks", "divergence", "image" ]
https://openreview.net/pdf?id=SJgabgBFl
https://openreview.net/forum?id=SJgabgBFl
ICLR.cc/2017/workshop
2017
{ "note_id": [ "BJ4-Knvox", "r1HN1EQox", "r1RvY2vol", "HJx5P_Kpjl", "B1E6xogig", "SkOrK3Psx" ], "note_type": [ "comment", "official_review", "comment", "comment", "official_review", "comment" ], "note_created": [ 1489647868095, 1489350444578, 1489647974486, 1490028642289, 1489182908558, 1489647935874 ], "note_signatures": [ [ "~Steven_Basart1" ], [ "ICLR.cc/2017/workshop/paper162/AnonReviewer1" ], [ "~Steven_Basart1" ], [ "ICLR.cc/2017/pcs" ], [ "ICLR.cc/2017/workshop/paper162/AnonReviewer2" ], [ "~Steven_Basart1" ] ], "structured_content_str": [ "{\"title\": \"Update\", \"comment\": \"Thanks to the reviewers\\u2019 comments, we have updated our draft.\\nNow we include Parzen window estimates, and we show that reducing the image to the primary PCA coefficients does not fix Parzen window estimates. We hope that this added analysis addresses much of our reviewers\\u2019 concerns.\"}", "{\"title\": \"Review\", \"rating\": \"3: Clear rejection\", \"review\": \"This paper is addressing the important problem of evaluating the distributions learnt by GANs. In the proposed approach, first a PCA is applied to the samples of real and generated images and then the distributions are approximated by GMMs on the principal components. The KL between these two GMMs does not have a closed form, so a nearest neighboring approximation is used.\\n\\nIn general, I find this approach very similar to Parzen window estimate. Given that Parzen window estimate is flawed in evaluating the likelihood of generative models (see Theis et al., 2015), I don't see why a simple linear transformation before GMM approximation would solve this problem. Especially, the KL divergence approximation of GMMs does not seem like a good approximation, and is essentially computing the nearest neighbor within the two sets of images.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for your careful analysis of our paper. We initially left out a comparison to Parzen window estimates as we believed it was known to be a poor measure in high dimensions (Theis et al.), but due to its prevalence, you are right in saying it should be included.\\n\\nTo that end, we have updated our paper by running Parzen window estimates on the CIFAR-10 dataset which, by way of its high-dimensionality, most closely reflects real world data. Parzen windows did not track CIFAR-10 image quality. Moreover, we now show that it is not PCA compression that allows our method to work. We demonstrate this by embedding each sample with its the primary PCA components, and then we use Parzen window estimates on this compressed PCA coefficient embedding. Here Parzen windows estimation still fails as a measure of image quality.\\n\\nWe do not think that it is a trivial task to construct a metric that corresponds to image quality and correlates with diversity of samples, otherwise the measure would already exist and be used in GAN research.\\n\\nThank you again for reviewing our paper. 
We hope to have addressed your primary concern, and please let us know if there are any other concerns or suggestions for improving our paper.\"}", "{\"decision\": \"Reject\", \"title\": \"ICLR committee final decision\"}", "{\"title\": \"failure to relate to parzen window and other generative modeling evaluation metrics\", \"rating\": \"3: Clear rejection\", \"review\": \"This paper addresses the problem of quantitatively evaluating GANs (or any generative model for which samples can be drawn). They propose to build a gaussian mixture model approx. to the generators distribution and the empirical data distribution by fitting a single gaussian to each point of the respective distributions (and having equal mixing proportions). Rather than using each image as the gaussian mean, they use the vector of the first k principle components. To compare the generative and empirical distributions they compute the min KL distribution between each pair of gaussians and same the expectation over all mixture components.\", \"pros\": \"The main advantage I see of this approach over other nearest neighbor based approaches (namely parzen window estimates) is that by running PCA on the image and then computing distance between images in this reduced dimensional space, some of the issue of high dimensional spaces can be alleviated.\", \"cons\": \"This paper does not mention parzen window estimates, which have long been used as a measure of generative modeling quality (specifically an approximation to the likelihood of held out data under the generative models distribution). Parzen window estimates are also known to be very ineffective in high dimensional image spaces and the authors do not mention this at all. Potentially their approach is better because of the dimensionality reduction of PCA but the authors do no mention this. More generally, the authors don't compare their metric against anything else! Their only experiment show that their metric correlates with image quality on a couple datasets. This is not very interesting since one could construct a number of metrics based on nearest neighbor based approaches that correlate with image samples but that doesn't mean they are better than anything that exists.\", \"summary\": \"Overall, I think the idea is potentially useful, but the authors have not shown either empirically or theoretically that this metric is any better than existing approaches. Furthermore, they don't even mention related approaches, namely the closely related parzen window estimate method. As such, I think there is potential if revised, but don't think the paper can be accepted in its current form.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for your review of our paper. We now realize that we should have made a clearer distinction between our work and that Parzen window estimation, and we have updated the paper to delineate between them. The main difference is that we compare distributions rather than tally the average quality of generated examples. This is reflected in our choice of the KL divergence rather than the average log-likelihood. We show that Parzen windows does not track image quality in CIFAR-10. Moreover, we have also updated our paper to show that doing \\u201ca simple linear transformation before GMM approximation \\u201d does not solve the problem either.\\nWe appreciate your feedback, as it made us compare our measure with a prominent technique. 
If there are any other concerns or criticisms, let us know. Thank you again for reviewing our paper.\"}" ] }
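A rough reconstruction, from the abstract and the discussion above, of the kind of distribution-level measure being proposed; the PCA dimensionality, the shared isotropic bandwidth sigma, and the nearest-neighbour matching between mixture components are assumptions, and this is not the authors' code.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

def gan_distribution_measure(generated, real, n_components=50, sigma=1.0):
    # Project both image sets with a PCA fit on the real images, place an
    # isotropic Gaussian on every projected point, and approximate
    # KL(generated mixture || real mixture) by matching each generated component
    # to its nearest real component; with a shared isotropic covariance the
    # per-pair KL reduces to a scaled squared Euclidean distance.
    g_flat = generated.reshape(len(generated), -1)
    r_flat = real.reshape(len(real), -1)
    pca = PCA(n_components=n_components).fit(r_flat)
    g, r = pca.transform(g_flat), pca.transform(r_flat)
    dists, _ = NearestNeighbors(n_neighbors=1).fit(r).kneighbors(g)
    return float(np.mean(dists ** 2) / (2.0 * sigma ** 2))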
S1-6egSFl
Unsupervised and Scalable Algorithm for Learning Node Representations
[ "Tiago Pimentel", "Adriano Veloso", "Nivio Ziviani" ]
Representation learning is one of the foundations of Deep Learning and has allowed big improvements in several Machine Learning fields, such as Neural Machine Translation, Question Answering and Speech Recognition. Recent works have proposed new methods for learning representations for nodes and edges in graphs. In this work, we propose a new unsupervised and efficient method, called here Neighborhood Based Node Embeddings (NBNE), capable of generating node embeddings for very large graphs. This method is based on SkipGram and uses nodes' neighborhoods as contexts to generate representations. NBNE achieves results comparable to or better than the state-of-the-art on three different datasets.
[ "Unsupervised Learning" ]
https://openreview.net/pdf?id=S1-6egSFl
https://openreview.net/forum?id=S1-6egSFl
ICLR.cc/2017/workshop
2017
{ "note_id": [ "Hk6bxt4ie", "rywV8Flie", "ryDPuF6ig", "S1uM0LVjx", "BJY1_1Vse" ], "note_type": [ "comment", "official_review", "comment", "comment", "official_review" ], "note_created": [ 1489436676817, 1489176110741, 1490028639144, 1489427983626, 1489397729102 ], "note_signatures": [ [ "~Tiago_Pimentel1" ], [ "ICLR.cc/2017/workshop/paper158/AnonReviewer1" ], [ "ICLR.cc/2017/pcs" ], [ "~Tiago_Pimentel1" ], [ "ICLR.cc/2017/workshop/paper158/AnonReviewer2" ] ], "structured_content_str": [ "{\"title\": \"Updated paper and clarifications\", \"comment\": \"Thank you for your feedback. I think this ('state-of-the-art link prediction') was indeed stated poorly from our part, I've updated the paper's abstract, now saying that NBNE achieves results comparable or better than state-of-the-art feature learning algorithms. Instead of specifically stating state-of-the-art at the tasks themselves.\\n\\nWe compare our algorithm to the baselines in these two problems, i.e. node classification and link prediction, because it\\u2019s the usual benchmark when comparing node embedding algorithms. These problems are used for comparisons in Node2Vec (Grover & Leskovec, 2016) and SBNE (Wang et al., 2016), while DeepWalk (Perozzi et al., 2014) and LINE (Tang et al., 2015) evaluate using node classification only.\\n\\nAll these four methods, and NBNE, are supposed to generate general purpose embeddings, so they are not, nor should be, explicitly optimized for any such test. These chosen tests in tasks with different properties and in different datasets are mainly supposed to 'benchmark' the algorithm.\\n\\nREFERENCES\\n\\nAditya Grover and Jure Leskovec. Node2vec: Scalable feature learning for networks. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 855\\u2013864, 2016.\\n\\nBryan Perozzi, Rami Al-Rfou, and Steven Skiena. Deepwalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 701\\u2013710, 2014.\\n\\nJian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, and Qiaozhu Mei. Line: Large-scale information network embedding. In Proceedings of the 24th International Conference on World Wide Web, pp. 1067\\u20131077, 2015.\\n\\nDaixin Wang, Peng Cui, and Wenwu Zhu. Structural deep network embedding. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1225\\u20131234, 2016.\"}", "{\"rating\": \"7: Good paper, accept\", \"review\": \"Essentially the goal of the contribution is to adapt ideas from Word2Vec to learn node embeddings. I.e., like Node2Vec but borrowing ideas from SkipGrams rather than random walks. This is claimed to lead to faster training times and more general-purpose embeddings.\\n\\nThe basic idea is to form \\\"sentences\\\" based on random permutations of neighbors around some node, so that the ideas from Word2Vec can be adopted. This idea is relatively straightforward and perhaps a little ad-hoc, but makes sense.\\n\\nThe experiments on a few graphs show improvements on link prediction tasks. These are fine though it's not clear to me whether state-of-the-art link prediction methods are in fact similar to what's being shown, nor is this the task the methods being compared are optimized for. 
Some more thoroughness would be useful here, though what's shown is sufficient for a workshop paper.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}", "{\"title\": \"Updated paper and clarifications\", \"comment\": \"Thank you for your feedback, I\\u2019m uploading a revised version of the paper which, I think, better describes the way sentences are generated. We would like to point out that, besides having a lower training time, our method is completely unsupervised, while node2vec is semi-supervised. Our method also only depends on a single parameter \\u2018n\\u2018, which is easier to understand and choose and which can be selected dynamically, by increasing its value until the embeddings start to overfit.\\n\\nAnother point we would like to make is that choosing how sentences/context are generated in a graph is a fairly complex problem, due to the changing dimensionality in its structure. Unlike text or image, there\\u2019s no straight forward way to \\u2018read\\u2018 it. Also, differences like the one between SkipGram and CBOW are \\u2018simple\\u2018, since they only change how one word is predicted from the others in already constructed sentences, but create fairly different representations and results, being more efficient when applied to different datasets.\\n\\nThere was no space to fully state the differences in training time between our method and the baselines, but it was about 100 to 1000x faster than node2vec, when using n=1, n=5 or n=10 on the three datasets (respectively: Astro, Facebook and Blog).\\n\\nAbout testing against different baselines, to the best of our knowledge, there\\u2019s no supervised method for learning representations specific for neither link prediction or node classification. Grover & Leskovec (2016) state that \\u201dnone of feature learning algorithms have been previously used for link prediction\\u201d. In it, they additionally test their algorithm against common heuristics of the problem, like Common Neighbours and Adamir Adar, strongly beating those baselines. Due to the lack of space in this workshop paper version, we found it was not necessary to compare against these weak baselines.\\n\\nMost supervised learning algorithms for node classification/link prediction we found use, besides structural knowledge from the graph, node attributes, like sex, age, etc, which we do not use. We compare our algorithm to theirs in these two problems, i.e. node classification and link prediction, because it\\u2019s the usual benchmark when comparing node embedding algorithms. These problems are used for comparisons in Node2Vec (Grover & Leskovec, 2016) and SBNE (Wang et al., 2016), while DeepWalk (Perozzi et al., 2014) and LINE (Tang et al., 2015) evaluate using node classification only.\\n\\nREFERENCES\\n\\nAditya Grover and Jure Leskovec. Node2vec: Scalable feature learning for networks. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 855\\u2013864, 2016.\\n\\nBryan Perozzi, Rami Al-Rfou, and Steven Skiena. Deepwalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 701\\u2013710, 2014.\\n\\nJian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, and Qiaozhu Mei. Line: Large-scale information network embedding. 
In Proceedings of the 24th International Conference on World Wide Web, pp. 1067\\u20131077, 2015.\\n\\nDaixin Wang, Peng Cui, and Wenwu Zhu. Structural deep network embedding. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1225\\u20131234, 2016.\"}", "{\"title\": \"Review\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"The paper proposes a new method for computing nodes representations in large graphs. The idea is very close to the ideas of other existing papers and consists in transforming nodes+neighbors into sentences, and then to learn a word2vec model on the generated sentences. The originality of the paper is in the way these sentences are generated, using random permutations of nodes. Experimental results are made on both link prediction and node classification problems and show competitive results w.r.t baselines.\\n\\nThe originality of the approach is quite limited since the only new thing is how the sentences are generated. Moreover, due to the lack of details, I am not sure to exactly understand how the sentes are generated. Adding an example would be nice. The model seems competitive with other unsupervised methods and with a lower training time which is interesting. But comparisons could be done with supervised methods that have been already proposed, particularly for learning representations for node classification.\", \"pros\": \"\\u2022\\tSimple idea\\n\\u2022\\tLow training time\", \"cons\": \"\\u2022\\tNot a string contribution\\n\\u2022\\tIncomplete Experimental setting\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
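A minimal sketch of the neighbourhood-sentence idea described in the abstract and the authors' comments above, assuming networkx and the gensim (>= 4) Word2Vec API; the permutation count and the chunking of neighbours into fixed-length sentences are illustrative parameters, not the paper's exact settings.

import random
import networkx as nx
from gensim.models import Word2Vec

def nbne_sentences(graph, num_permutations=5, max_neighbors=9):
    # For each node, shuffle its neighbours and emit short "sentences" made of the
    # node followed by chunks of that permutation, so that SkipGram treats
    # neighbourhoods as contexts.
    sentences = []
    for _ in range(num_permutations):
        for node in graph.nodes():
            neighbors = [str(v) for v in graph.neighbors(node)]
            random.shuffle(neighbors)
            for i in range(0, len(neighbors), max_neighbors):
                sentences.append([str(node)] + neighbors[i:i + max_neighbors])
    return sentences

graph = nx.karate_club_graph()
model = Word2Vec(nbne_sentences(graph), vector_size=64, sg=1, window=10, min_count=1)
node_vector = model.wv["0"]  # learned embedding of node 0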
H1PMaa1Yg
Exploring loss function topology with cyclical learning rates
[ "Leslie N. Smith", "Nicholay Topin" ]
We present observations and discussion of previously unreported phenomena discovered while training residual networks. The goal of this work is to better understand the nature of neural networks through the examination of these new empirical results. These behaviors were identified through the application of Cyclical Learning Rates (CLR) and linear network interpolation. Among these behaviors are counterintuitive increases and decreases in training loss and instances of rapid training. For example, we demonstrate how CLR can produce greater testing accuracy than traditional training despite using large learning rates.
[ "Deep learning" ]
https://openreview.net/pdf?id=H1PMaa1Yg
https://openreview.net/forum?id=H1PMaa1Yg
ICLR.cc/2017/workshop
2017
{ "note_id": [ "S1i9J52cx", "Hku-l_lse", "HJ_Rv84jx", "B1XMuFpjl" ], "note_type": [ "official_review", "official_review", "comment", "comment" ], "note_created": [ 1488916370729, 1489170432487, 1489426384527, 1490028555013 ], "note_signatures": [ [ "ICLR.cc/2017/workshop/paper19/AnonReviewer2" ], [ "ICLR.cc/2017/workshop/paper19/AnonReviewer1" ], [ "~Leslie_N_Smith1" ], [ "ICLR.cc/2017/pcs" ] ], "structured_content_str": [ "{\"title\": \"Official Review\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"This work presents a series of observations gleaned from training a ResNet at different learning rates and schedules. While in general this sort of empirical analysis is a good thing, the paper does not put forward any novel explanation or theory based on these observations. Overall the paper is reasonably well written but lacks clear motivation or take-aways. The techniques in this paper are not novel but the analysis is interesting. I would recommend rejection at this time but encourage the authors to see if they can further explore possible insights their experiments may have uncovered.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Interesting phenomena, but more experiments needed to rule out less interesting explanations\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"This paper discusses several interesting phenomena regarding the training and testing error curves over the course of training deep network models on image classification tasks. Among the findings are that test error performance can be nonmonotonic with certain learning rates, and imposing a cyclic alternation between low and high learning rates can speed learning.\\n\\n-While these results may point to something deeper, additional control experiments would greatly strengthen the paper. The finding that a cyclic learning schedule can speed learning would be potentially of practical interest, but the experiments compare just one particular cyclic scheme to one particular fixed learning rate. Does a carefully optimized fixed learning rate match the cyclic performance? Is a cycle really necessary, or can the learning rate just decrease monotonically over the course of learning?\\n\\n-There may be simple standard explanations of these phenomena. The test error spikes up on each cycle as the learning rate crosses some threshold, which seems a straightforward case of SGD becoming unstable and diverging when the learning rate is made too high. After taking a giant bad step, higher learning rates can make progress because the network is terrible and fine adjustments are not necessary. More is necessary to back up the claim that these results provide insight into the \\\"loss function topology.\\\"\\n\\n+The finding of faster convergence with cyclic learning rate schedules, if it remains faster than the optimal fixed or monotonically decreasing schedule, would be very interesting and merits more investigation.\\n\\n+The suggestion of interpolating many models to yield higher generalization performance is also a potentially interesting direction.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Reply to AnonReviewers\", \"comment\": \"We believe the intent of the ICLR workshop is to provide a forum for late-breaking results, even if a paper hasn't been fully developed into what one expects for a conference paper. 
Our workshop paper is such a paper, providing experimental results that have not been seen before, though it isn't as fully developed as a conference paper.\\n\\nUnfortunately, the 3-page limit meant not showing the control experiments and many of the other results we've obtained. We ran experiments with a variety of learning rate schedules, architectures, solvers, and hyper-parameters. We mentioned some of the other results very briefly in our conclusion but could not include them fully due to space limitations.\\n\\nThe original cyclical learning rate paper (Smith 2015, Smith 2017) discusses that the current scheme was compared with many other cyclic methods and the linear scheme was chosen because the more complex methods provided no additional benefit. Please skim the earlier paper for more details. The purpose of this current paper was not to introduce cyclical learning rate as a practical tool but to show it is also an experimental tool that demonstrates the new phenomena described.\\n\\nRegarding Figure 2a, some simple explanations are possible, and there are certainly many examples in the literature where SGD becomes unstable and diverges. However, to our knowledge, the literature does not show examples where SGD becomes unstable, diverges, and then starts converging (note that the test accuracy falls slightly and recovers quickly), especially while the learning rate continues to increase. This is why we include this as a novel phenomenon. Furthermore, from a geometric perspective, one can imagine that the increasing learning rate causes the solution to jump out of a local minimum and hence the sudden jump but, if so, why would it continue to converge while learning rate increases? We believe these phenomena are unusual and are providing some insight into the loss function topology.\\n\\nIn addition, Figure 1 shows the plots that started our investigation and we don't think your explanation holds for this example. These plots show test accuracy during regular training (not using cyclical learning rates), so the learning rate is monotonically decreasing. Furthermore, the dip in test accuracy happens for an initial learning rate of 0.14 but not for 0.24 or 0.35.\\n\\nRegarding Figure 2b, it does show the cyclical learning rate result compared to an optimal monotonically decreasing schedule. The point is that within 20,000 iterations it produced a better solution than the optimal schedule could in 80,000 - 100,000 iterations. We also feel it is interesting that such high performance is possible when the smallest value used for the learning rate is 0.1, which is commonly considered large.\\n\\nAs we say in the Conclusions, we are actively searching for a collaborator who can provide a theoretical analysis for a full follow-up paper. We welcome any readers who feel they understand the theoretical causes for these phenomena to please contact me.\"}", "{\"decision\": \"Reject\", \"title\": \"ICLR committee final decision\"}" ] }
Sk1OOnNFx
Restricted Boltzmann Machines provide an accurate metric for retinal responses to visual stimuli
[ "Christophe Gardella", "Olivier Marre", "Thierry Mora" ]
How to discriminate visual stimuli based on the activity they evoke in sensory neurons is still an open challenge. To measure discriminability power, we search for a neural metric that preserves distances in stimulus space, so that responses to different stimuli are far apart and responses to the same stimulus are close. Here, we show that Restricted Boltzmann Machines (RBMs) provide such a distance-preserving neural metric. Even when learned in an unsupervised way, the RBM-based metric can discriminate stimuli with higher resolution than classical metrics.
[ "boltzmann machines", "accurate metric", "retinal responses", "visual stimuli", "neural metric", "visual", "activity", "sensory neurons", "open challenge", "discriminability power" ]
https://openreview.net/pdf?id=Sk1OOnNFx
https://openreview.net/forum?id=Sk1OOnNFx
ICLR.cc/2017/workshop
2017
{ "note_id": [ "rJapli7sx", "BJcohY1jg", "SJxHuKTig" ], "note_type": [ "official_review", "official_review", "comment" ], "note_created": [ 1489379524614, 1489112226391, 1490028599719 ], "note_signatures": [ [ "ICLR.cc/2017/workshop/paper101/AnonReviewer1" ], [ "ICLR.cc/2017/workshop/paper101/AnonReviewer2" ], [ "ICLR.cc/2017/pcs" ] ], "structured_content_str": [ "{\"title\": \"nice method and analysis\", \"rating\": \"9: Top 15% of accepted papers, strong accept\", \"review\": \"This paper proposes using the hidden units of an RBM to compute a metric of the similarity of neural responses to different stimuli. It seems like a sensible idea - to compare population activity in a latent space defined by the statistics rather than in the raw spike data - but it could use more explicit motivation rather than relying on the reader to come up with this for themselves. The proposed method exhibits good performance compared to other methods in discriminating spike trains in a meaningful way that is related to stimulus changes.\\n\\nSome related work using RBM's to model spike trains:\\nK\\u00f6ster, Urs, et al. \\\"Modeling higher-order correlations within cortical microcolumns.\\\" PLoS Comput Biol 10.7 (2014): e1003684.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"ok use of rbm for spike train metrics\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": [\"pro:\", \"overall it\\u2019s a sensible approach and seems to be a reasonable first step towards a deep learning spike train metric\", \"spike train metrics are a topic of some interest in the neuroscience community\", \"the authors understand the relevant literature and have cited it.\"], \"con\": [\"there is not much here in the way of novelty that will be of interest to the ICLR community\", \"the writing would benefit from a thorough edit for grammar and style.\", \"the layout of the experiments is not entirely clear; specifically, is there any training/validation/test data split, or are all the results training data only?\", \"In short, a sensible idea and something that should at some point grow into a published work. I am marginally positive.\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}" ] }
Syh_o0pPx
CommAI: Evaluating the first steps towards a useful general AI
[ "Marco Baroni", "Armand Joulin", "Allan Jabri", "Germàn Kruszewski", "Angeliki Lazaridou", "Klemen Simonic", "Tomas Mikolov" ]
With machine learning successfully applied to new daunting problems almost every day, general AI starts looking like an attainable goal. However, most current research focuses instead on important but narrow applications, such as image classification or machine translation. We believe this to be largely due to the lack of objective ways to measure progress towards broad machine intelligence. In order to fill this gap, we propose here a set of concrete desiderata for general AI, together with a platform to test machines on how well they satisfy such desiderata, while keeping all further complexities to a minimum.
[ "Theory", "Natural language processing", "Reinforcement Learning", "Transfer Learning" ]
https://openreview.net/pdf?id=Syh_o0pPx
https://openreview.net/forum?id=Syh_o0pPx
ICLR.cc/2017/workshop
2017
{ "note_id": [ "Hk9M9Stix", "Byewaz2cx", "rkl9cjs9e", "H1pG2k-sg", "HJJUXrwie", "SyP8eGwje", "BkDRV09qx", "Hy4pclAce", "ryoRsTh9l", "SJjGPlbje", "rJKbBmYsg", "rJrqi2gig", "Hya-dKTsx", "HkY_ubTqx" ], "note_type": [ "comment", "comment", "comment", "comment", "comment", "official_review", "comment", "comment", "comment", "comment", "comment", "official_review", "comment", "comment" ], "note_created": [ 1489750546248, 1488887128101, 1488857736042, 1489202196764, 1489617734862, 1489604687252, 1488803022763, 1489009339621, 1488931794624, 1489205011448, 1489741056682, 1489189773265, 1490028548624, 1488947313262 ], "note_signatures": [ [ "~Marco_Baroni1" ], [ "~Marcus_Abundis1" ], [ "~Marco_Baroni1" ], [ "~Marco_Baroni1" ], [ "~Marco_Baroni1" ], [ "ICLR.cc/2017/workshop/paper2/AnonReviewer1" ], [ "~Marcus_Abundis1" ], [ "~Marco_Baroni1" ], [ "~Marco_Baroni1" ], [ "~Marco_Baroni1" ], [ "~Marcus_Abundis1" ], [ "ICLR.cc/2017/workshop/paper2/AnonReviewer2" ], [ "ICLR.cc/2017/pcs" ], [ "~Marcus_Abundis1" ] ], "structured_content_str": [ "{\"title\": \"thanks\", \"comment\": \"Thanks for your further comments and the pointer. I agree that it's difficult to continue this conversation on the forum: let's hope well' have chances to discuss these topics in person!\"}", "{\"title\": \"Intuition . . .\", \"comment\": \"Dear Marco, Thank you for your note.\\n\\nI am struck by your reply of an \\u2018intuitive notion of useful AI\\u2019, as intuition must precede firm models. Still, can you offer more detail on that intuition? I reread the material and I am left mostly with (I paraphrase) \\u2018learning the use of language is important\\u2019. I ask for more detail as I wonder how precise, well-formed, or extensible that intuition is \\u2013 my own interpretation here feels a bit superficial.\\n\\nAn intriguing part of the CommAI environment is that it may target what I call a \\u2018universal grammar\\u2019 for machine learning. This, by itself, is interesting. Similarly, Shannon\\u2019s signal entropy gave objective structure our sense of \\u2018information\\u2019, and still underlies many modern advances. But this also led to \\u2018bizarre and unsatisfying\\u2019 (Shannon & Weaver, 1949) views of information. It would be sad to see \\u2018bizarre and unsatisfying\\u2019 aspects perpetuated \\u2013 with that mistake now made for \\u2018intelligence\\u2019, as occurred with \\u2018information\\u2019.\\n\\nFor example, the CommAI environment is essentially semiotic, focusing on syntactical tasks but excluding semantic aspects (the *functional value* one might ascribe to a banana versus an apple, or to bananas and apples of different types). The later would need to be included if one wishes to call the system truly intelligent, no? I assume we both target a *true* general intelligence, so this seems like a critical matter. Do you have thoughts on treating syntactic/semantic differences?\\n\\nOther parts of the proposal I find similarly bothersome . . . \\n\\u2018. . . important to instruct the machine in new domains\\u2019 (sec 1). \\nWhy is the system (ultimately) not set to posit, reveal, or articulate new domains, in an intelligent manner. For example, in a backward looking way, we might ask \\u2018How might a wheel be invented, from a set of previously existing elements?\\u2019 The developmental steps involved could then be mapped, as a bottoms-up model. Discrete domain mapping (top down) is needed, but at the risk of leaving explanatory gaps between domains? 
Enough 'local domains' may eventually be mapped that a synthesizing (general) view can be attempted, but at some far distant point. Bottoms-up modeling \\u2018forces the gap issue\\u2019 by requiring first principles (where possible) that close said gaps. \\n\\nAlso (perhaps trivial?), a \\u2018. . . common bit-level interface\\u2019 is variously referenced in the material, which puzzles me. A bit-to-bit role seems to imply something coded in machine language, rather than programs that pass through a conversion (compiling) process. The later is innately indirect. Even in processor design, subtle bit-to-bit differences are known to exist across architectures that affect processor outputs. This leads me to wonder if you mean something specific (or figurative) in pointing to bit-to-bit relationships?\\n\\nLastly, I see the proposal as circumspect in any claims on what is possible with a CommAI environment, so I do not wish to force a defensive position. I merely hope to better grasp the group\\u2019s thinking on this challenging topic. Thank you, in advance, for your reply.\\n\\nShannon, C., & Weaver, W. (1949). Advances in \\u2018a mathematical theory of communication\\u2019. Urbana, IL: University of Illinois Press.\"}", "{\"title\": \"Thanks for your thoughts\", \"comment\": \"We are proposing only one possible approach to General AI, and we would like to see other proposals that take alternative routes, such as the \\\"bottom-up\\\" one you are presenting (thanks for the pointer).\\n\\nIt's also true that CommAI is a \\\"top down\\\" approach, but based on an intuitive notion of useful AI, rather than general mathematical or psychological considerations.\"}", "{\"title\": \"thanks for your review\", \"comment\": \"Thanks for your review and your open-mindedness regarding top-down vs bottom-up approaches.\\n\\nWe respectfully think that you are seriously underestimating the difficulty of our tasks. Consider that in our setup the learner is only getting one bit at a time, and thus simply discovering that there are recurrent, re-usable patterns of 8-bit sequences that are playing a meaningful role in the definition of the tasks is a big challenge for it. Moreover, the algorithm cannot learn the regexps by example, as it will be exposed to each regexp only once. What the learner needs to do, after it has discovered how to parse the environment strings into their component parts (regexp, order, target string...), is to learn a general way to \\\"compile\\\" the regular expression in order to analyze the string at hand (or to produce one or multiple strings). This is an enormously more difficult task than generalizing a stringset based on a number of examples. All this, with no explicit task segmentation, and very sparse reward, as the learner is only getting reward when it produces the right solution for a task. Note also that, unlike in the Weston paper, our learner is free to generate any sequence of bits (with thus a huge space to explore), rather than having to pick a fixed answer from a list.\\n\\nCompositionality should play a crucial role in the solution. In order to solve what is essentially a continuous stream of 0-shot tasks, the learner must learn to re-use components such as the ability to parse bits into bytes, the ability to parse the instructions into parts, the ability to process and apply regular expressions, and the ability to parse an increasingly richer regexp syntax. 
Similar abilities would doubtlessly also be learned by, say, natural language parsing, in a setup in which the learner is provided no supervision about what it needs to do in order to get reward, but that would be even more complex.\\n\\nWe are currently experimenting with a set of tasks that are much simpler than the ones presented in the position abstract, using a RNN trained with RL. We are finding that this approach does not go anywhere even in the simplified scenario. We are however not reporting such results in the paper since it's hard to definitely prove a negative result.\"}", "{\"title\": \"language and general AI\", \"comment\": \"Thanks for your interesting comments. We welcome other views on what are the first skills to focus on in the development of general AI, and we hope our position paper, if published, will stir further discussion of this kind.\\n\\nOur reason to focus on language is two-fold. First, we find it hard to conceive that an AI could be useful to human beings if we were not able to communicate with it (to give it instructions and teach it new skills). Language is by far the easiest and most powerful communication tool that humans can use. Second, while our tasks are superficially linguistic in nature, for a system to learn how to handle them from scratch would require very powerful learning to learn capabilities (discovering that certain recurrent sequences are meaningful and thus they should be memorized even in the absence of specific reward, the ability to combine skills learned in simpler tasks in order to address more advanced tasks, the ability to find systematic correspondences between signs--the regexps--and their denotation--the strings, etc.). The minimal setup we are considering should allow researchers to focus on such challenges, rather than on large-scale/noisy data processing issues.\\n\\nWe are definitely not claiming that a system trained on the specific set of CommAI-mini tasks would then be ready to go out in the world and tackle all sorts of advanced tasks, but we realistically think that a learner that was able to solve these tasks without any ad-hoc hand-coded knowledge would be so general that it should be possible to train it, e.g., to have more general conversations with humans. Next, the conversational and linguistic skills could be exploited to teach the machine about the domains of interest (e.g., by instructing the machine to study the Wikipedia), and so on and so forth. We recognize, of course, that we are not there, yet, but we believe that this is an avenue that is worth exploring.\\n\\nGoodAI recently announced a challenge based on our CommAI-mini tasks. We will thus soon be able to ascertain whether there are systems that can solve them, and whether such systems can then scale up to tasks in other domains.\"}", "{\"title\": \"Review\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"The paper proposes a new evaluation platform for what they define 'useful' general AI and the desired characteristics for this kind of system.\", \"pro\": \"(Attempts to) Tackle a very important problem, that has yet to be properly formalized or agreed on by the general community. \\nIt's in line with other efforts, such as openai.com/blog/universe, gvgai.net, github.com/deepmind/lab, to bring forward new and diverse tasks for the community to play with. This ultimately pushes us to develop more general learning algorithms that indeed need to \\\"learn to learn\\\" or learn to adapt to different, but related tasks. 
I think that's something that has been more and more important.\", \"cons\": \"The major problem I have with paper and framework, stems not necessarily from the tasks themselves, but from the identified desiderata. It seems to be <<very>> natural language/text focused. This is an on-going debate whether or not that is a crucial component in the development of general AI and how we will interact with AI. It seems to me, that most of the effort -- at least computationally -- would be spent modelling the particular structure present in text-like inputs/data and that automatically shifts the focus away from what we should be doing, or what the AIs should be trying to figure out which is more complex tasks, more planning and optimization challenging scenarios.\\nWhich brings me to the second point. Say you believe in the desiderata outline, the tasks seem to match what was highlighted in the agenda, but the level of complexity it's relatively low. That's not to say that these are simple tasks for our learning algorithms to pick up, just the complexity doesn't lie in the task per se, but trying to model the language/syntax. I fail to see what succeeding on these tasks, say, in general, about now taking this system and applying it to optimizing energy consumption, or recognizing emotions, or even something domain-related like dialogue systems.\", \"to_sum\": \"I think the paper addresses a very real problem and, as I said previously, I don't think we don't have the/a right answer or even a satisfactory answer at this point in time. I do think though the framework proposed is too limited in scope to claim generality. That being said, it might still be 'useful' -- if you agree with the proposed desiderata, it seems like a sensible set of tasks to try out.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"A Few Thoughts . . .\", \"comment\": \"I wish to offer thoughts on your CommAI proposal as a \\u2018top-down view, deriving their requirements from psychological or mathematical considerations\\u2019 (sec. 3). The gist of my note asserts your worthy project is not sufficient to describe or explain \\u2018general intelligence\\u2019. For example, a criticism is that I see no balanced consideration of bottoms-up facets. A purely *top-down* CommAI seems not only essentially anthropocentric, but also largely symbol based (semiotic), Anglocentric, and narrowly denotative (versus connotative). Each *qualifier* lessens the range of what might be seen as \\u2018general\\u2019 \\u2013 which, you may then agree, is general only in a very narrow sense(?).\\n\\nThis leaves me wondering about \\u2018requirements derived from psychological or mathematical (or even \\u2018simple physics tasks\\u2019 [sec 2]) considerations\\u2019. This seems like a large leap from foundational traits (physics, etc.) to more semiotic roles (CommAI environment), entailing many unexamined assumptions/details. I see your effort to address some that innate ambiguity in the appendix, but it (again) seems largely semiotic in nature, and is thus unsatisfying. The key issue I see here lies in how narrowly or widely defined the \\u2018environment\\u2019 is, within which proposals are tested. For example, game environments are too narrow to compare with \\u2018general uncontrolled variables\\u2019 that typify much of our daily environs. 
Yes, practical limits are needed as a starting point (a bounded rationale), but is the essentially semiotic view you offer the best starting point?\\n\\nAlternatively, I advocate for a functionalist foundation (true bottoms-up) as a needed complement, or even a precursor, to framing a true general intelligence. As I understand ICLR is not the correct venue for exploring general intelligence, I would appreciate thoughts you may wish to share (e.g., re appropriate venues) on this matter, and/or on the bottoms-up proposal offered in \\u2018A Priori Modeling of Information and Intelligence\\u2019. Regardless, I wish you the best with this worthy project!\"}", "{\"title\": \"further comments\", \"comment\": \"More semantic tasks: I'd say e.g., some of the association and navigation tasks already implemented in our CommAI-env environment might be more \\\"semantic\\\" in the sense you mean. However, for me the CommAI-mini tasks are already semantic, in the sense that there are symbols (the regexps) referring to sets (the string sets).\\n\\nWe try to stick to bits as they allow a maximally agnostic interface (and to define tasks with no added complexity whatsoever), but system developers could certainly implement a BIOS into their system. One thing is the input/output channel, another the constraints one might impose on the \\\"perceptual system\\\" of the learner, so to speak.\\n\\nFinally, we do hope ICLR will accept papers addressing general intelligence, especially as they encourage position papers for the workshop track... Representation learning seems like a core component of any general AI, and, conversely, moving towards more general AIs is a core reason to develop better representation learning methods.\"}", "{\"title\": \"thanks for the further comments\", \"comment\": \"I'll briefly answer to some of the further points you raise...\\n\\n* More details on the leading intuition\\n\\nWe would like to develop an AI that could be helpful to humans by receiving instructions through natural language interaction, and being able to perform them, even if they require them new skills that they did not encounter before.\\n\\n* Syntax vs semantics\\n\\nWe do not agree that the CommAI-mini tasks are purely syntactic. You can see the regular expressions as words denoting stringsets, and the corresponding stringsets as their denotations. It would also be interesting to extend a similar approach to other domains where there is more explicit grounding, e.g., reasoning about simple geometric shapes.\\n\\n* Domains\\n\\nWe fully agree that domains should not be established in a top-down way but they should, if useful, implicitly learned by the machine (if this was your point).\\n\\n* Bit-to-bit\\n\\nWe simply mean that, even if we graphically display ASCII characters as such, the real input to the machine will be one bit at the time, and the same for the machine output (with no assumption that the machine will already know about ASCII or other encoding systems).\"}", "{\"title\": \"PS\", \"comment\": \"Interestingly, GoodAI has now implemented our tasks as part of their general AI challenge: https://www.general-ai-challenge.org/ This should soon tell us whether the tasks are indeed solvable with existing techniques.\"}", "{\"title\": \"Semantics redux\", \"comment\": \"> . . . CommAI-mini tasks are already semantic [with} . . . 
symbols (the regexps) referring to sets (the string sets)<\\n\\u2022 As *symbols* are involved, for me, this means meaningful interpretation of symbols is needed at some point \\u2013 where the *interpreter* is, in fact, the intelligent agent (programmer/pre-ascribed values, in this case[?]). This means I would say your method is NOT innately semantic or conveying 'an intelligence' beyond what is programmed. This is a tricky area as the syntactic and semantic become entangled across simple-to-complex roles. Thus, trying to debate/parse this matter in a forum like this is pointless. Yes, *some* innately semantic aspects are always inherently entailed, but . . . (further depending on the level of analysis used and the project's ultimate aims . . .)\\n\\n> . . . maximally agnostic interface <\\n\\u2022 Yes, a worthy aim. But in any case, as you point out, always limited by the platform's innate architecture and capacities. (i.e., *not* truly general). As such, I take a more purely informational approach to minimize innate platform issues. But still, at some point such limits must always be seen as somehow present . . . \\n\\n> . . . papers addressing general intelligence <\\n\\u2022 For your information I came across this, with a May 5 submission deadline.\", \"http\": \"//users.dsic.upv.es/~flip/EGPAI2017/#call4papers\\n\\n> Representation learning seems like a core component of any general AI . . .<\\n\\u2022 Easily agreed! In fact, it is THE central defining characteristic I think.\\n\\n\\u2022 I too saw the AI Roadmap project (and the CommAI inclusion) and thought it quite interesting. But the effort also seems very early in its formation. I will explore it further.\\n\\nBest of luck with your project!\"}", "{\"title\": \"Review\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"This paper proposes a method for evaluating the capacity of a learning algorithm to function as a \\\"general AI\\\". The proposal consists of two pieces:\\n\\n- high-level desiderata for a general AI: namely, the ability to efficiently learn multiple tasks with shared structure from natural language guidance provided as a generic bit stream.\\n\\n- a concrete task for investigating these desiderata: namely, membership and sampling queries for regular languages specified by regular expressions.\\n\\nI'm honestly not sure what to make of this paper. It's very much in the same spirit as the earlier bAbI tasks, which, while well-intentioned, I think have done active harm to the AI research community by making it socially acceptable to develop methods for toy problems without ever verifying that they actually scale-up to real-world tasks. To use the dichotomy the authors introduce in section 3, the problem is that essentially all meaningful progress in the field has come from \\\"bottom-up\\\" approaches: \\\"top-down\\\" approaches have a poor track record of scaling up, while the set of challenging reasoning problems solved by \\\"bottom-up\\\" approaches continues to grow.\\n\\nOn the other hand, I recognize that mine may no longer be a mainstream position, and that in any case it's unfair to downgrade a position paper because I disagree with the position. It's probably healthy for the community to have this discussion, and the ICLR crowd perhaps needs it most of all.\\n\\nBut I do think the position could be better defended. In particular: we already know that:\\n\\n1. 
It's easy for RNNs to sample from and query regular languages (learned by example---I don't actually know of work on starting from symbolic REs as done here)\\n\\n2. It's certainly possible to learn from this mixed RL / text-based supervision condition (e.g. Weston 2016).\\n\\nThese two things together make up the whole task. So it's not obvious that we can't solve it by throwing generic RNN machinery at it, and the burden is on the authors of a new task to show that it can't already be solved using state-of-the-art methods. The paper's claim that \\\"We hope the CommAI-mini challenge is at the right level of complexity to stimulate researchers to develop genuinely new models\\\" would be much stronger if it already demonstrated that genuinely new models are required; right now that demonstration is missing.\\n\\nNow suppose we do solve this task (with whatever model). What have we learned? Just that it's possible to quickly learn how to work with regular languages? What notion of compositionality is present in this task and not, e.g. natural language parsing? What have we learned about \\\"learning to learn\\\" that's different from learning good gradient descent algorithms, initializers, RL with natural language instructions, etc.?\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}", "{\"title\": \"syntax vs. semantics . . .\", \"comment\": \"'reasoning about simple geometric shapes' \\u2013 yes, this seems like a good initial base, also emphasized in my own modeling. Could you point to specific CommAI tasks that you feel are most plainly semantic in nature? I hope to better grasp your view of semantic tasks.\\n\\n'no assumption that the machine will already know about ASCII or other encoding systems' \\u2013 This implies a platform with no BIOS(?). Again, this is puzzling and I am unsure of how/why it applies to, or carries weight in, your larger project.\\n\\nLastly, few proposals here (ICLR) address *general intelligence*. I would appreciate your thoughts on more useful venues/forums for directly addressing this topic.\\n\\nThank you,\"}" ] }
rJeYrsEYg
Unsupervised Feature Learning for Audio Analysis
[ "Matthias Meyer", "Jan Beutel", "Lothar Thiele" ]
Identifying acoustic events from a continuously streaming audio source is of interest for many applications, including environmental monitoring for basic research. In this scenario, neither the different event classes nor what distinguishes one class from another is known in advance. Therefore, an unsupervised feature learning method for the exploration of audio data is presented in this paper. It incorporates the following two novel contributions: First, an audio frame predictor based on a Convolutional LSTM autoencoder is demonstrated, which is used for unsupervised feature extraction. Second, a training method for autoencoders is presented, which leads to distinct features by amplifying event similarities. In comparison to standard approaches, the features extracted from the audio frame predictor trained with the novel approach show 13 % better results when used with a classifier and 36 % better results when used for clustering.
[ "unsupervised feature", "audio frame predictor", "features", "better results", "audio analysis", "acoustic events", "audio source", "interest", "many applications" ]
https://openreview.net/pdf?id=rJeYrsEYg
https://openreview.net/forum?id=rJeYrsEYg
ICLR.cc/2017/workshop
2017
{ "note_id": [ "rJ8yzarjg", "Hk2VdYTje", "S1Lv_ZMjx", "ry_pSceie", "S1gMz6rjl" ], "note_type": [ "comment", "comment", "official_review", "official_review", "comment" ], "note_created": [ 1489519069696, 1490028595829, 1489274974046, 1489180095608, 1489519111930 ], "note_signatures": [ [ "~Matthias_Meyer1" ], [ "ICLR.cc/2017/pcs" ], [ "ICLR.cc/2017/workshop/paper96/AnonReviewer1" ], [ "ICLR.cc/2017/workshop/paper96/AnonReviewer2" ], [ "~Matthias_Meyer1" ] ], "structured_content_str": [ "{\"title\": \"Response\", \"comment\": \"Thank you very much for the valuable feedback.\\n\\nIn contrast to images or videos, acoustic events are almost solely characterized by temporal changes. Considering this temporal change is necessary for a good classification (see references in the paper) whereas much information about a video can already be identified from still images. The predictive autoencoder was used instead of a normal autoencoder to exploit this time dependency and has shown better results. These experiments are not part of the current version of the paper due to the page limit. \\n\\nThe submitted paper reflects the state of the work and its core idea and thus has been submitted to the Workshop Track. Therefore the comparison to other methods is missing but indeed necessary for a full evaluation of the proposed method. This is being worked on at the moment. However, we see a fundamental difference to the mentioned approaches (VAE, ladder networks). Due to the pairwise loss an inter-sample comparison is achieved, while the mentioned methods only optimize for the current input sample. From the paper it can be seen that due to this inter-sample comparison we can not only extract features but can make them distinct, which helps for the intended exploration of a dataset. Having said this, a comparison to these other approaches will definitely strengthen the paper.\\n\\nThe dataset has been chosen to be close to the designated application. Therefore key aspects of the presented work rely on the specific application scenario (e.g. time-dependency, variety of sound sources). Available references like for the TIMIT dataset are not suitable for a fair comparison, since the applied algorithms are optimized for speech/phonetic classification while our proposed approach is designed to work for general audio analysis without prior knowledge.\\n\\nDespite its relation to the application the used AED dataset has been chosen because it contains a large number of samples per category (around 20 minutes/category), which is beneficial to train the network, whereas ESC-50 ( https://github.com/karoldvl/ESC-50 ) and DCASE2016 ( http://www.cs.tut.fi/sgn/arg/dcase2016/ ) contain less training samples (3 minutes/category and <1 minute/category, respectively). Therefore a meaningful comparison between the different datasets with the settings from the current paper was not possible. However, the recently released AudioSet ( https://research.google.com/audioset/ ) can fill this gap.\"}", "{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}", "{\"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"This paper presents a convLSTM based audio frame prediction approach as a method of unsupervised learning of representative audio features. Proposed model is trained using a combination of mean squared error as well as a pair wise similarity measure. 
Model and the training approach are evaluated on the task of audio event classification.\\n\\nWhile the combination of the ideas is novel, the individual elements model and training approach are previously known. It is also not intuitively clear why a predictive auto-encoding would be a good unsupervised feature learning approach for the task of audio event classification, thus I\\u2019d like to see comparisons with some well known basic unsupervised feature learning approaches (e.g. VAE, ladder networks, etc.).\\n\\nResults are presented on an audio event detection dataset which is relatively new and not many reference comparisons are available. To make the paper stronger I\\u2019d also advise authors provide comparisons with other known results on this task, and also to apply their feature learning approach to other well established sound classification tasks (e.g. phone classification in TIMIT).\\n\\nOverall I feel the paper is not strong enough in current shape for ICLR.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"review\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"This paper combines several ideas, ConvLSTM autoencoders and a pairwise lose. The idea is to do sound classification/clustering.\\n\\nI feel this paper is more suited towards the signal processing community (i.e., ICASSP/INTERSPEECH). The main problem I have with this paper/task it seems too specific and there isn't enough core-ML contributions for this round of ICLR workshop acceptance. Sequence autoencoders (see Dai et al.,) and ConvLSTM (as cited by authors Zhang et al.,) and pair wise losses (see SIGIR) are not new. Merging all these ideas together is a contribution, but I am not sure it would generate a lot of interest in the ICLR community.\", \"note\": \"This reviewer is unfamiliar w/ the \\\"acoustic event dataset (AED) from Takahashi et al. (2016)\\\" used in evaluation.\", \"citations_missing\": \"\", \"https\": \"//arxiv.org/pdf/1511.01432.pdf (for sequence autoencoders which this model is quite similar).\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for your review and your feedback. The missing citation will be corrected in the next paper revision.\\n\\nWe understand your point that the paper is quite specific in its application, which might not be the preferred application for some people at ICLR, but we submitted the paper to ICLR Workshop Track because the conference topics in the Call for Abstracts include\\n\\n+ Unsupervised, semi-supervised, and supervised representation learning\\n+ Applications in vision, audio, speech, natural language processing, robotics, neuroscience, or any other field\"}" ] }
rJNa3C4Yg
Performance guarantees for transferring representations
[ "Daniel McNamara", "Maria-Florina Balcan" ]
A popular machine learning strategy is the transfer of a representation (i.e. a feature extraction function) learned on a source task to a target task. Examples include the re-use of neural network weights or word embeddings. Our work proposes novel and general sufficient conditions for the success of this approach. If the representation learned from the source task is fixed, we identify conditions on how the tasks relate to obtain an upper bound on target task risk via a VC dimension-based argument. We then consider using the representation from the source task to construct a prior, which is fine-tuned using target task data. We give a PAC-Bayes target task risk bound in this setting under suitable conditions. We show examples of our bounds using feedforward neural networks. Our results motivate a practical approach to weight sharing, which we validate with experiments.
[ "Theory", "Transfer Learning" ]
https://openreview.net/pdf?id=rJNa3C4Yg
https://openreview.net/forum?id=rJNa3C4Yg
ICLR.cc/2017/workshop
2017
{ "note_id": [ "SJWBkqmie", "r1RO6jBse", "rywlTsSol", "ByVRBv7jl", "SyRBdFTjx" ], "note_type": [ "official_review", "comment", "comment", "official_review", "comment" ], "note_created": [ 1489375032756, 1489513846334, 1489513710972, 1489364428235, 1490028613912 ], "note_signatures": [ [ "ICLR.cc/2017/workshop/paper124/AnonReviewer1" ], [ "~Daniel_McNamara1" ], [ "~Daniel_McNamara1" ], [ "ICLR.cc/2017/workshop/paper124/AnonReviewer2" ], [ "ICLR.cc/2017/pcs" ] ], "structured_content_str": [ "{\"title\": \"reasonable first step for an important problem\", \"rating\": \"7: Good paper, accept\", \"review\": \"The paper provides generalization bounds for a common practice in transfer learning with deep neural nets, where the representation learned on a source task (having lot of labeled data) is transferred to a target task. It analyzes two settings: (i) when representation learned on the source is kept fixed and a new classifier for the target task is learned on top of it, (ii) when the representation is also fine-tuned for the target task. To the best of my knowledge it seems to be the first work to analyze this setting.\", \"pros\": [\"considers an important problem\", \"well-written paper\"], \"cons\": [\"hard to say if the proof techniques used are novel -- not enough details\", \"doesn't give much intuition on when is fine tuning better than fixed representation\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Response to AnonReviewer2\", \"comment\": \"Thank you for the review and the thoughtful feedback. We have made amendments to the paper which address your suggestions.\\n\\nWe have made a few changes to the explanations given for each of the theorems to enhance the readability and clarity of the paper. We have also made a couple of refinements to the paper's notation.\\n\\nWhile the function \\\\omega in Theorem 1 is necessarily abstract, we have added wording describing the role it plays, explaining why it is a necessary assumption and pointing the reader to the example \\\\omega in Theorem 2.\\n\\nWe have provided greater explanation in Section 4 about why assuming lower level features are more transferrable is reasonable in relevant applications.\\n\\nWe have tightened the language describing Theorem 1 to remove the \\\"vague terms\\\" that you mentioned.\"}", "{\"title\": \"Response to AnonReviewer1\", \"comment\": \"Thank you for the review and the thoughtful feedback. We have made amendments to the paper which address your suggestions.\\n\\nWe have made a few amendments to the explanations given for each of the theorems to provide additional insights about them. We have also highlighted the novelty of the work, in particular the generality of the sufficient conditions (now mentioned in the abstract) and the arguments used in the neural network example proofs (now mentioned before statement of Theorem 2). Separate to this submission, we have also written a longer paper which includes all proofs. 
\\n\\nWe have also added a sentence to the third paragraph of the introduction to provide more comparison of the pros and cons of a fixed representation vs fine-tuning.\"}", "{\"title\": \"A new theoretical angle into transfer learning\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"This paper proposes sufficient conditions for success of transfer learning\", \"pros\": \"\", \"originality\": \"To the best of my knowledge, this work is original\", \"significance\": \"transfer learning has lead to considerable improvement in deep learning and a theoretical approach for formulating when and how it succeeds is very important and much needed\\n\\ncons.\", \"clarity\": \"the paper is not completely well-written and in places hard to follow\", \"quality\": [\"overall, I like this paper due to the problem it considers and its approach, however, the paper would improve significantly with filling in the gaps mentioned below:\", \"The authors provide no intuition or insight into how the bound is derived and what the different terms mean, e.g., how does function w look in practice for different datasets? what are the ways to measure or approximate it?\", \"The assumptions are given without any justification of why they are needed and what are the cases that the hold. e.g., The property that is assumed in Theorem 1. Section 4: assuming that lower level features are more transferrable.\", \"There are some vague terms used right before Theorem 1 that are not appropriate for a theory paper: If w does not grow too \\\"quickly\\\", \\\\hat{R} is \\\"small\\\", etc\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}" ] }
H1aKXVNKx
Predicting Surgery Duration with Neural Heteroscedastic Regression
[ "Nathan Ng", "Rodney A Gabriel", "Julian McAuley", "Charles Elkan", "Zachary C Lipton" ]
Scheduling surgeries is a challenging task due to the fundamental uncertainty of the clinical environment, as well as the risks and costs associated with under- and over-booking. We investigate neural regression algorithms to estimate the parameters of surgery case durations, focusing on the issue of heteroscedasticity. We seek to simultaneously estimate the duration of each surgery, as well as a surgery-specific notion of our uncertainty about its duration. Estimating this uncertainty can lead to more nuanced and effective scheduling strategies, as we are able to schedule surgeries more efficiently while allowing an informed and case-specific margin of error. Using surgery records from the UC San Diego Health System, we demonstrate potential improvements on the order of 18% (in terms of minutes overbooked) compared to current scheduling techniques, as well as strong baselines that do not account for heteroscedasticity.
[ "Deep learning", "Supervised Learning" ]
https://openreview.net/pdf?id=H1aKXVNKx
https://openreview.net/forum?id=H1aKXVNKx
ICLR.cc/2017/workshop
2017
{ "note_id": [ "rJ45F7usl", "rkpmOKpsg", "B1PJU9gix", "SkLCopvoe", "H1fdpGtoe" ], "note_type": [ "official_review", "comment", "official_review", "comment", "comment" ], "note_created": [ 1489676684515, 1490028580875, 1489180127449, 1489652686146, 1489739114375 ], "note_signatures": [ [ "ICLR.cc/2017/workshop/paper70/AnonReviewer3" ], [ "ICLR.cc/2017/pcs" ], [ "ICLR.cc/2017/workshop/paper70/AnonReviewer1" ], [ "~Zachary_Chase_Lipton1" ], [ "~Zachary_Chase_Lipton1" ] ], "structured_content_str": [ "{\"title\": \"good motivation but incremental improvements\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"This paper proposes the use of an MLP that predicts both mean and std for predicting the time of a surgical operation.\\nThey extend the method also to Laplace distribution.\\nThe method is simple, not novel, but the combination of the method and the application is novel.\\nWhat worries me is the marginal improvements reported in table 1. Most of the improvement comes from the use of an MLP, more than the prediction of the variance - see the difference between Gaussian and Gaussian HS, and Laplace and Laplace HS.\\nMy conclusion is that the choice of the distribution/loss in conjunction with the use of an MLP is more important than anything else, and in particular, it is more important than predicting variance (which is the main point of the abstract).\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"decision\": \"Reject\", \"title\": \"ICLR committee final decision\"}", "{\"title\": \"Review\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"Summary:\\n\\nThis work models the distribution of surgery durations using unimodal\\nparameteric distributions (viz., Gaussian and Laplace) by regressing their\\nparameters using multi-layer perceptrons based on patient and clinical\\nenvironment attributes.\\n\\nUsing the uncertainty (or standard-deviation) estimates, they report\\nimprovements of 18% in scheduling surgeries.\\n\\nThis is the first application of heteroscedastic neural regression to clinical\\nmedical data.\", \"comments\": \"1. In Table 1, it is not clear what \\\"Current Method\\\" corresponds to.\", \"assessment\": \"\", \"clarity\": \"The method has been presented clearly, with all the details to reproduce the\\nresults (although it is not clear if the medical data is publicly available).\\n\\nNovelty & Significance:\\n\\nThe method presented uses multi-layer perceptrons (MLPs) for regressing\\nparameters of univariate unimodal parametric distributions, which is not quite\\nnovel, and is a simple application MLP to this specific domain (clinical medical\\ndata).\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Thanks for the feedback\", \"comment\": \"Dear reviewer,\\n\\nThanks for the thoughtful feedback. We'd like to offer the following responses and clarifications:\\n\\nGood catch that we didn't define \\\"current method\\\" in the extended abstract. The current method is the ad-hoc times that are currently entered into the system to reserve the rooms. These are the actual \\\"human-expert\\\" times predicted by the surgeons and administrators.\\n\\nWe'd like to point out that this work is more novel than the reviewer acknowledges. 
While several papers have proposed neural heteroscedastic regression, we are to our knowledge one of only two papers to revisit the idea in the context of modern deep learning (multiple hidden layers, rectifier activations, dropout regularization). Moreover, our paper is the only one, to our knowledge, to demonstrate the efficacy of neural heteroscedastic regression on a dataset of real-world importance. The other paper only tested the idea on generic UCI dataset and the classic papers address synthetic & toy problems.\\n\\nWe'd also like to let the reviewer know that we've gone a step further and improved the results by using gamma distributions. These are especially suited to our problem because the distribution of surgery durations can only be long-tailed on only one side (no surgery can take less than 0 minutes). The gamma predictive distribution indeed gives lower NLL than both Gaussian and Laplace. We plan to update the draft with these numbers and the relevant empirical analysis in the next week.\"}", "{\"title\": \"Fixed mistake in NLL reporting fixed, please re-evaluate the draft\", \"comment\": \"Dear reviewer,\\n\\nThanks for taking the time to review our paper. The purpose of estimating the conditional variance is precisely to get *good estimates of the variance*. \\n\\nThe superiority (in this respect) of the predictions of the heteroscedastic models is born out by figures 1 and 2 which show just how strongly the predicted standard deviations correlate with observed errors. \\n\\nWe realize that some amount of the confusion owes to a bug in our initial reporting. The initial table 1 had a scaling bug in calculating NLL numbers. This reporting bug came to light when we were adding late-breaking results on a gamma predictive distribution. Thus the original table 1 didn't make it clear just how much the heteroscedastic models improve over their homoscedastic counterparts.\\n\\nWe've updated the draft with the fixed numbers and it's obvious that the heteroscedastic modeling fits the observed errors dramatically better.\\n\\nWe also added a line to the results table 1 showing results using a gamma predictive distribution, which slightly outperforms even the best heteroscedastic laplacian regression model. We hope you'll take the chance to re-assess the review.\"}" ] }
BJiMcB4Kl
Training Triplet Networks with GAN
[ "Maciej Zieba", "Lei Wang" ]
Triplet networks are widely used models that are characterized by good performance in classification and retrieval tasks. In this work we propose to train a triplet network by using it as the discriminator in Generative Adversarial Nets (GANs). We exploit the discriminator's strong representation-learning capability to increase the predictive quality of the model. We evaluated our approach on the Cifar10 and MNIST datasets and observed a significant improvement in classification performance using a simple k-nn method.
[ "triplet networks", "discriminator", "gan", "gan triplet networks", "models", "good performance", "classification", "retrieval tasks", "work", "triplet network" ]
https://openreview.net/pdf?id=BJiMcB4Kl
https://openreview.net/forum?id=BJiMcB4Kl
ICLR.cc/2017/workshop
2017
{ "note_id": [ "Hy_LWhvsx", "BJHbOCmox", "SyFks7Wig", "Sk3mayNpl", "ryxbVdFaig" ], "note_type": [ "comment", "official_review", "official_review", "comment", "comment" ], "note_created": [ 1489645903864, 1489393660845, 1489218273192, 1491496228423, 1490028585455 ], "note_signatures": [ [ "~Maciej_Mateusz_Zieba1" ], [ "ICLR.cc/2017/workshop/paper78/AnonReviewer2" ], [ "ICLR.cc/2017/workshop/paper78/AnonReviewer1" ], [ "~Maciej_Mateusz_Zieba1" ], [ "ICLR.cc/2017/pcs" ] ], "structured_content_str": [ "{\"title\": \"Answers to the stated questions\", \"comment\": \"Dear Reviewers,\\n\\nwe would like to thank you for your commends. \\n\\nBelow we present answers to the stated questions.\", \"q1\": \"Are your experimental results directly comparable to the semi-supervised experiments in (Salimans et al, 2016)? If the main point of this paper is to \\u201cincorporate discriminator in a metric learning task instead of involving it in classification\\u201d, we should have that direct comparison.\", \"a1\": \"A direction comparison of our method with the semi-supervised classification in (Salimans et al, 2016) shows the following result: In (Salimans et al, 2016) the classification accuracy for Cifar10 is 81.27% (4000 labeled examples) and 82.27% (8000 labeled examples). Our method takes the penultimate layer of our triplet network model and obtains the classification accuracy 81.59% (5000 labeled examples) with a 9-nearest-neighbour classifier.\\n\\nHowever, the major benefit of developing this Triplet-based approach is that it does not need to access class labels (as in (Salimans et al, 2016)). Instead, this approach only needs to access the relationship (similar or dissimilar) between some portion of training examples. In this kind of applications classification-based learning models will not work. In addition, our approach will produce a metric that could be applied to search, compare and rank data. This cannot be done effectively through a classifier as that learned in (Salimans et al, 2016). \\n\\nTo highlight the benefits of using our approach, we compared its retrieval performance with two alternatives via the criterion of mean average precision (mAP). Specifically, for Cifar10 our approach achieves mAP=0.6353. If only using triplet (without incorporating GAN), the result is mAP=0.5367; and if only using GAN, the result is mAP=0.2003. This comparison clearly shows the advantage of our approach on learning a better metric for search or retrieval tasks. By the way, we intended to compare with the classification model in (Salimans et al, 2016) in terms of mAP value. However, such result is not available in that work because it focuses on classification. We are now re-training their model to make this comparison.\", \"q2\": \"Is triplet network a well-defined term? \\u201cTriplet networks are one of the most commonly used techniques in deep learning metric (Yao et al., 2016; Zhuang et al., 2016). \\u201c It is not used in these two reference papers.\", \"a2\": \"As far as we have observed, the \\u201ctriplet network\\u201d term has been used by Hoffer & Ailon (2015) (See the title). In the revised version we will cite this paper immediately after the use of \\u201ctriplet network\\u201d to avoid confusion. 
On the other hand, by referring to (Yao et al., 2016; Zhuang et al., 2016), we just want to express that currently this kind of models have been widely applied in practical metric learning tasks in computer vision.\", \"q3\": \"I also would like to see how the accuracies are sensitive to #labeled examples and #features. It\\u2019d be desired to add more experimental results.\", \"a3\": \"We agree that this kind of additional evaluation would be beneficial. Below we present the classification and retrieval results obtained on MNIST data (m-number of features, N-number of examples, 9-NN is used as classification model). As seen, the performance of our approach is relatively stable, and it improves with the increasing number of labeled examples and features. In addition, we are working on additional experiments for Cifar10. However, it takes more time than on MNIST. We are going to report the results on Cifar10 (including the mAP result mentioned at the end of A1) in Appendix to the extended abstract.\\n\\n \\t\\t N=100 N=200 N=500 N=1000\\nm=16 accuracy\\t97.61% 98.50% 98.59% 98.86%\\n\\t mAP 0.8929 0.9244 0.9588 0.9700\\n \\n \\t\\t m=16 m=32 m=64 m=128 m=256\\nN=100 accuracy\\t 97.61% 98.26% 98.31% 98.69% 98.65%\\n\\t mAP 0.8929 0.9118 0.9056 0.9321 0.9414\"}", "{\"title\": \"review\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"This paper proposes to adopt a triplet network as the discriminator in the GANs.\\nSemi-supervised experiments on MNIST and CIFAR show that this approach can outperform either Triplet Network on labeled data or GANs on unlabeled data.\\n\\nI am probably not the best reviewer for this paper, but I think this proposed approach could be interesting to ICLR audience and incline to accept this paper.\", \"question\": \"is triplet network a well-defined term? \\u201cTriplet networks are one of the most commonly used techniques in deep learning metric (Yao et al., 2016; Zhuang et al., 2016). \\u201c It is not used in these two reference papers.\\n\\nI also would like to see how the accuracies are sensitive to #labeled examples and #features. It\\u2019d be desired to add more experimental results.\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}", "{\"rating\": \"7: Good paper, accept\", \"review\": \"This paper describes using triplet loss to train GAN, and obtained better feature, compared to original GAN and original triplet loss.\\n\\nThis work can be viewed as extension of Semi-supervised training with GAN by using triplet loss.\\n\\nThere is a clean gain through the CIFAR-10 experiment. So I recommend to accept.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Revision\", \"comment\": \"The revised version of the extended abstract was uploaded.\"}", "{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}" ] }
SkPxL0Vte
Deep Pyramidal Residual Networks with Stochastic Depth
[ "Yoshihiro Yamada", "Masakazu Iwamura", "Koichi Kise" ]
In generic object recognition tasks, ResNet and its improvements have broken the lowest error rate records. ResNet enables us to make a network deeper by introducing residual learning. Some ResNet improvements achieve higher accuracy by focusing on channels. Thus, the network depth and channels are thought to be important for high accuracy. In this paper, in addition to them, we pay attention to the use of multiple models in data-parallel learning, which we refer to as data-parallel multi-model learning. We observed that accuracy increased as the number of concurrently used models increased for some methods, particularly for the combination of PyramidNet and the stochastic depth proposed in the paper. As a result, we confirmed that the proposed methods outperformed the conventional methods; on CIFAR-100, they achieved error rates of 16.13\% and 16.18\%, in contrast to 18.29\% for PyramidNet and 17.18\% for the current state-of-the-art DenseNet-BC.
[ "methods", "stochastic depth", "resnet", "channels", "learning", "improvements" ]
https://openreview.net/pdf?id=SkPxL0Vte
https://openreview.net/forum?id=SkPxL0Vte
ICLR.cc/2017/workshop
2017
{ "note_id": [ "HJujaNkqx", "HyiHuFTil", "Bk7Nu3goe", "SkEHXoJcl" ], "note_type": [ "official_review", "comment", "official_review", "comment" ], "note_created": [ 1488043423691, 1490028610874, 1489188906660, 1488069435749 ], "note_signatures": [ [ "ICLR.cc/2017/workshop/paper118/AnonReviewer2" ], [ "ICLR.cc/2017/pcs" ], [ "ICLR.cc/2017/workshop/paper118/AnonReviewer1" ], [ "~Masakazu_Iwamura1" ] ], "structured_content_str": [ "{\"title\": \"Difficult to parse. Lacks details to be useful.\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"This paper is unfortunately very difficult to parse due to the language. It combines two methods of the literature and shows an improvement, but in the absence of a standalone detailed description of the models and/or of an open-source implementation reproducing the results, it's not particularly useful as-is. I'd recommend the authors to put together a more complete manuscript detailing the model, preferably with code.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"decision\": \"Reject\", \"title\": \"ICLR committee final decision\"}", "{\"title\": \"Combine Pyramid Nets and Networks with Stochastic Depth\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"Take two ideas that worked, combine them, see if it works better. If the results of the workshop submission are correct, the answer is yes. This paper is extremely light on details, but it is a workshop submission, and the workshop format is a poster, so there should be ample space to highlight the methodology and details of implementation.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Implementation of the proposed methods\", \"comment\": \"Implementation of the proposed methods are available here:\", \"https\": \"//github.com/AkTgWrNsKnKPP/PyramidNet_with_Stochastic_Depth\"}" ] }
B1lpelBYl
Accelerating SGD for Distributed Deep-Learning Using an Approximted Hessian Matrix
[ "Sebastien Arnold", "Chunming Wang" ]
We introduce a novel method to compute a rank $m$ approximation of the inverse of the Hessian matrix, in the distributed regime. By leveraging the differences in gradients and parameters of multiple Workers, we are able to efficiently implement a distributed approximation of the Newton-Raphson method. We also present preliminary results which underline advantages and challenges of second-order methods for large stochastic optimization problems. In particular, our work suggests that novel strategies for combining gradients will provide further information on the loss surface.
[ "Deep learning", "Optimization" ]
https://openreview.net/pdf?id=B1lpelBYl
https://openreview.net/forum?id=B1lpelBYl
ICLR.cc/2017/workshop
2017
{ "note_id": [ "SJqyBefjx", "ryn5ZXlse", "SJIPOKpjl" ], "note_type": [ "official_review", "official_review", "comment" ], "note_created": [ 1489269985807, 1489150355927, 1490028638407 ], "note_signatures": [ [ "ICLR.cc/2017/workshop/paper157/AnonReviewer2" ], [ "ICLR.cc/2017/workshop/paper157/AnonReviewer1" ], [ "ICLR.cc/2017/pcs" ] ], "structured_content_str": [ "{\"title\": \"Interesting approach\", \"rating\": \"7: Good paper, accept\", \"review\": \"Compared to the other reviewer I found the approach interesting. While I'm not so keen on exact time complexity, algorithmically the approach seems scalable. I agree that the experimental section is a bit disappointing, and that there might be real concerns on how this particular approximation of the curvature works in practice. But given that it is a workshop submission, I find the proposal very simple and elegant, and I wager that with a bit of care and dedication it could work surprisingly well in practice.\\n\\nMy score however rests heavily on the fact that this is just a workshop submission, I think a lot of work still needs to be done to convert this work in a proper paper.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Time complexity, baselines, hyperparameter selection\", \"rating\": \"3: Clear rejection\", \"review\": \"I am not quite sure about the time complexity of O(m^3 + n).\\n\\\" The algorithm does require computation of the eigenvalues and eigenvectors of the m \\u00d7 m matrix G^H \\u00d7 G\\\". Should not G^H \\\\times G first be computed? Given than G is in R^{m x n}, I would expect m x n somewhere in the complexity formula. To compute g as the average of gradients you would need m x n, right?\", \"the_experimental_results_are_disappointing\": \"a) SGD as the only baseline and no comparison with second-order methods and their approximates/alternatives\\nb) little networks of 16k parameters which raises the question of scalability \\nc) weird hyperparameter selection \\\"we keep most of our hyper-parameters constant, including learning rates (0.0003 and 0.01)\\\" given that \\\"several experiments diverged when using too large a learning rate, whereas this was beneficial to the convergence rate of SGD\\\" suggesting that the selection of the learning rate was in favor of the proposed approach.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}" ] }
r1Cy5yrKx
Tactics of Adversarial Attack on Deep Reinforcement Learning Agents
[ "Yen-Chen Lin", "Zhang-Wei Hong", "Yuan-Hong Liao", "Meng-Li Shih", "Ming-Yu Liu", "Min Sun" ]
We introduce two novel tactics for adversarial attack on deep reinforcement learning (RL) agents: strategically-timed and enchanting attack. For strategically-timed attack, our method selectively forces the deep RL agent to take the least likely action. For enchanting attack, our method lures the agent to a target state by staging a sequence of adversarial attacks. We show that both DQN and A3C agents are vulnerable to our proposed tactics of adversarial attack.
[ "Deep learning", "Reinforcement Learning" ]
https://openreview.net/pdf?id=r1Cy5yrKx
https://openreview.net/forum?id=r1Cy5yrKx
ICLR.cc/2017/workshop
2017
{ "note_id": [ "ByuHsCkix", "H1KJ5eBix", "SkSqiNbjg", "rJoUuY6jg", "SJ5o6kCqx", "HJgBOYvsl" ], "note_type": [ "comment", "official_review", "official_comment", "comment", "official_review", "comment" ], "note_created": [ 1489132351732, 1489467873496, 1489222541379, 1490028626748, 1489005985843, 1489635384162 ], "note_signatures": [ [ "~Yen-Chen_Lin1" ], [ "ICLR.cc/2017/workshop/paper142/AnonReviewer1" ], [ "ICLR.cc/2017/workshop/paper142/AnonReviewer2" ], [ "ICLR.cc/2017/pcs" ], [ "ICLR.cc/2017/workshop/paper142/AnonReviewer2" ], [ "~Yen-Chen_Lin1" ] ], "structured_content_str": [ "{\"title\": \"Re: Interesting and relevant topic, clearly work in progress\", \"comment\": \"We thank the reviewer for detail comments. Unfortunately, due to a strict 3 pages limit for ICLR workshop this year, we have to go straight to our method and results in this submission. Similarly due to space, we focus on the attack tactics for this submission.\\n\\nBased on the reviews, we added the following ideas about defending in section C of our Appendix: (1) train RL agent with adversarial example, (2) detect adversarial example first and then try to mitigate the effect. We hope to have enough interesting results on defending attacks to share in the future.\\n\\nWe are indeed working on a 6-8 pages conference submission which will include proper introduction and motivation, and a summary of the related work.\"}", "{\"title\": \"An application of existing NN attacks in an RL setting\", \"rating\": \"7: Good paper, accept\", \"review\": \"This paper explains the adaptation of a (Carlini & Wagner, 2016) (mis)classification attack to making the agent choose it's worse (lowest Q score or lower prob for \\\\pi) action instead of best. It also explains an extension of the single time step (s, a, r, s') version to a sequence version, through the use of a forward model (Oh et al., 2015). Side note: the \\\\delta (attack vectors) seem quite significant (the difference in frames in perceptible, e.g. Figure 2).\\n\\nIt is an interesting application of a \\\"classic\\\" attack, the comparison (in terms of performance) to (Huang et al., 2017) is unclear. The experimental evaluation is weak, but sufficient for a workshop.\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}", "{\"title\": \"Thank you for the clarification and update\", \"comment\": \"Sorry, I was not aware of the strict 3-page limit. I will update my review accordingly.\"}", "{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}", "{\"title\": \"Interesting and relevant topic, clearly work in progress\", \"rating\": \"7: Good paper, accept\", \"review\": \"Thank you for pointing me to this work; I was not aware of work in this area, and the topic is quite exciting.\\n\\nThe main problem with this paper is that it is still very clearly work in progress. The problem is not very well-motivated, and the authors rush right into the content without giving any context (it is almost as if they outright assume the reader has read Huang et al. 2016 immediately before reading the current paper or are very well familiar with it.) This work needs proper introduction and motivation, a summary of the related work it builds on, a smoother narrative, etc. I would reject this paper as a conference submission for these reasons: it is just not ready.\\n\\nDespite this, the authors have results and the topic is very interesting. 
This is the kind of paper that I think makes an ideal workshop paper: the topic is worth considering and relevant, and results are preliminary but interesting; it could stimulate some discussion which could (a) influence the direction of the work (b) lead to a broader interest in this type of work. So for these reasons, I am recommending accept.\", \"possible_discussion_point\": \"identifying the flaws/vulnerabilities with deep RL-trained policies is only the first step. How do we then modify our deep RL algorithms to produce policies that are robust to these type of attacks? Even some speculation on this point would be nice, as I expect it to be a major discussion point once work in this area matures.\\n\\n------ Edit post-response from authors:\\n\\nThe authors told me about the strict 3-page limit, which I was not aware of. With this limit in mind, I think the authors did a fairly good job of compressing the main ideas and results into the space they had. The page limit does unfortunately still detract from the smoothness of the intro/setup, but the description is still clear enough to understand what follows.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Re: An application of existing NN attacks in an RL setting\", \"comment\": \"Thanks a lot for your comments!\\n\\nI would love to clarify that the \\u201cstrategically-timed attack\\u201d we proposed in our paper also determines \\u201cwhen to attack\\u201d, i.e., it wants to reduce the total rewards gained by the agent by only attacking it at selective timesteps. Therefore, it goes beyond an adaption of misclassification attack to RL tasks. \\n\\nThe reason why the difference in frames is perceptible is that we enlarge the value of perturbation 250x for visualization, sorry for the confusion. We will clarify it in our future revision.\\n\\nAbout the comparison, as we mentioned in our abstract and experiment conclusion, our strategically-timed attack (attacking on average only 25% of timesteps) can reach the same effect as attacking the agent at every timesteps (i.e., Huang\\u2019s strategy).\"}" ] }
rk4fr1HYx
Cosegmentation Loss: Enhancing segmentation with a Few Training Samples by Transferring Region Knowledge to Unlabeled Images
[ "Wataru Shimoda", "Keiji Yanai" ]
We address semantic segmentation in the setting where a few pixel-wise labeled samples and a large number of unlabeled samples are available. For this situation we propose a cosegmentation loss, which enables us to transfer the knowledge of a few pixel-wise labeled samples to a large number of unlabeled images. In the experiments, we used human-part segmentation with a few pixel-wise labeled images and 1715 unlabeled images, and showed that the proposed cosegmentation loss helped make effective use of the unlabeled images.
[ "unlabeled images", "cosegmentation loss", "enhancing segmentation", "training samples", "region knowledge", "samples", "large number", "semantic segmentation", "unlabeled samples" ]
https://openreview.net/pdf?id=rk4fr1HYx
https://openreview.net/forum?id=rk4fr1HYx
ICLR.cc/2017/workshop
2017
{ "note_id": [ "ByMbYo8se", "B1wQbtrol", "rJPIuFTjg" ], "note_type": [ "official_review", "official_review", "comment" ], "note_created": [ 1489578234107, 1489502494566, 1490028622633 ], "note_signatures": [ [ "ICLR.cc/2017/workshop/paper136/AnonReviewer2" ], [ "ICLR.cc/2017/workshop/paper136/AnonReviewer3" ], [ "ICLR.cc/2017/pcs" ] ], "structured_content_str": [ "{\"title\": \"interesting idea, but not properly evaluated\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"The idea of using co-segmentation for semi-supervised segmentation training is potentially interesting - but the authors do not compare it to existing baselines for semi-supervised segmentation.\\nIn particular, the authors claim:\\n\\\" \\u2022 We propose a semi-supervised method for semantic segmentation which requires no imagelevel\\nclass labels for unlabeled samples.\\\"\\n\\nThis is misleading in my understanding - the authors do train on the pascal-parts datasets, where practically every image is known to contain a human (and potentially his parts) - so the class labels are practically there, but just do not need to be specified, since they are always the same.\", \"it_would_not_be_too_hard_to_apply_to_the_same_problem_existing_techniques_for_weakly__and_semi_supervised_learning\": \"Constrained Convolutional Neural Networks for Weakly Supervised Segmentation\\nDeepak Pathak, Philipp Kr\\u00e4henb\\u00fchl and Trevor Darrell\\nICCV 2015\\n\\nWeakly- and Semi-Supervised Learning of a DCNN for Semantic Image Segmentation\\nGeorge Papandreou, Liang-Chieh Chen, Kevin Murphy, Alan L. Yuille, ICCV 2015\", \"the_authors_also_mention\": \"\\\". The evaluation protocol is based on a simple mean intersection over union (IOU). In evaluation, we do not take care of the background class.\\\"\\nI do not see why the authors deviate from a standard evaluation pipeline.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Potentially interesting, but more work is needed. No \\\"exciting new ideas\\\".\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"The abstract presents a co-segmentation approach that is trained in a semi-supervised manner. As one would expect, the semi-supervised model works better than a model trained just on the small set of fully supervised data and worse than the model that would be obtained if the unlabeled data were labeled, also. From the abstract, however, it is not entirely clear what ingredients are essential to make the approach work: the proposed approach seems like a straightforward combination of prior work on co-segmentation and producing segmentation masks with image-classification convnets (Oquab & Bottou; Zhou, ..., & Torralba). It performs roughly on par with a prior approach by Papandreou et al. (the image-level labels are unlikely to help much on the dataset that is studied in the abstract).\\n\\nIt is also unclear how well the results compare with other co-segmentation approaches (Joulin, Bach, & Ponce) or with generic object proposal algorithms such as SharpMask (Pinheiro et al.). More in general, human body part segmentation doesn't seem like the right task to be studying segmentation approaches with limited supervision on: there exist many datasets with human and / or body part segmentations, so why not use those annotations? 
The proposed method seems more suitable for segmentation of infrequent object classes for which few annotated examples (not even image-level annotations) are available.\\n\\nOverall, the approach described here may be of interest, but a lot of additional work is needed to know for sure. Having said that, I think the submission does not meet the bar for the ICLR workshops, because it does not present any clear novel ideas --- it is mostly combining existing approaches in a slightly different learning setting. I would recommend the authors to submit a more detailed version of this study to a venue such as CVPR or ICCV.\", \"minor_comment\": \"Table 2 is very hard to read; the data should be presented in learning curves.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"decision\": \"Reject\", \"title\": \"ICLR committee final decision\"}" ] }
rkmU-pEFl
Disparity Map Prediction from Stereo Laparoscopic Images using a Parallel Deep Convolutional Neural Network
[ "Bálint Antal" ]
One of the main computational challenges in supporting minimally invasive surgery techniques is the efficient 3D reconstruction of stereo endoscopic or laparoscopic images. In this paper, a Convolutional Neural Network based approach is presented which does not require any prior knowledge of the image acquisition technique. We have evaluated the approach on a publicly available dataset and compared it to a previous deep neural network approach. The evaluation showed that the approach outperformed the previous method.
[ "Computer vision", "Deep learning", "Applications" ]
https://openreview.net/pdf?id=rkmU-pEFl
https://openreview.net/forum?id=rkmU-pEFl
ICLR.cc/2017/workshop
2017
{ "note_id": [ "B1EBOt6se", "B1UqyIlse", "HJhCmdeig" ], "note_type": [ "comment", "official_review", "official_review" ], "note_created": [ 1490028603819, 1489162125875, 1489171411803 ], "note_signatures": [ [ "ICLR.cc/2017/pcs" ], [ "ICLR.cc/2017/workshop/paper106/AnonReviewer2" ], [ "ICLR.cc/2017/workshop/paper106/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"title\": \"ICLR committee final decision\"}", "{\"title\": \"Novelty and evaluation lacking\", \"rating\": \"2: Strong rejection\", \"review\": \"This manuscript presents a seemingly simple method for doing disparity map prediction but compares only to a previous publication of the author's own. There are a dozen papers about this problem in other application domains, and so the methodology from them should be a point of comparison. Instead there is one citation to an obscure conference proceedings paper of the author's previous work, which is likely not competitive with state-of-the-art on this sort of problem.\\n\\nLittle motivation is given, and the model selection strategy is not discussed (it sounds as if early stopping is performed on the test set which is very worrying). Table 2 is essentially vacuous.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"No novelty\", \"rating\": \"3: Clear rejection\", \"review\": \"I completely agree with Reviewer2. The method isn't novel, and compares only to author's previous work.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
H1ZaRZVKg
On Improving the Numerical Stability of Winograd Convolutions
[ "Kevin Vincent", "Kevin Stephano", "Michael Frumkin", "Boris Ginsburg", "Julien Demouth" ]
Deep convolutional neural networks rely on heavily optimized convolution algorithms. Winograd convolutions provide an efficient approach to performing such convolutions. With larger Winograd convolution tiles, the convolution becomes more efficient but less numerically accurate. Here we provide some approaches to mitigating this numerical inaccuracy. We exemplify these approaches by working on a tile much larger than any previously documented: F(9x9, 5x5). Using these approaches, we show that such a tile can be used to train modern networks and provide performance benefits.
[ "Deep learning" ]
https://openreview.net/pdf?id=H1ZaRZVKg
https://openreview.net/forum?id=H1ZaRZVKg
ICLR.cc/2017/workshop
2017
{ "note_id": [ "SyBUg-Moe", "BkYmdY6ie", "ry_ADBy9l", "BksAtQeie" ], "note_type": [ "comment", "comment", "official_review", "official_review" ], "note_created": [ 1489272908774, 1490028576757, 1488046031958, 1489152467007 ], "note_signatures": [ [ "~Kevin_Vincent1" ], [ "ICLR.cc/2017/pcs" ], [ "ICLR.cc/2017/workshop/paper63/AnonReviewer2" ], [ "ICLR.cc/2017/workshop/paper63/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Clarification of terms\", \"comment\": \"\\u200bThank you for the review.\\n\\nTo clarify, we do not claim a 1.4x speedup for all of Inception-v3. We claim a 1.4x speedup for the single 5x5 convolution layer in Inception-v3. Our proposed F(9x9, 5x5) does not affect any other convolutions or layers in Inception-v3. \\n\\nYou are correct on our use of \\\"successfully trained\\\", we observe practically identical final error rates when using F(9x9, 5x5); compared with both the published network results and our own tests using direct convolutions.\"}", "{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}", "{\"title\": \"Useful data point on the potential of Winograd convolutions for wider filters.\", \"rating\": \"7: Good paper, accept\", \"review\": \"Good short note on how one might implement bigger support convolutions using the Winograd technique. The heuristics proposed might have wider applicability. It would be great if someone figured out more general principles for automatically designing these kernels in a numerically stable way.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Improved stability and speed for Winograd convs\", \"rating\": \"7: Good paper, accept\", \"review\": \"This paper shows how Winograd convolutions can be made more numerically stable for large tile-sizes, which are more efficient. The authors show significant reduction in numerical errors and a roughly 1.4x speed increase for inception-v3, which is quite meaningful.\\n\\nIt is stated that \\\"we have been able to successfully train Alexnet and Inception v3\\\" - does this mean that the final error rate is (almost) unchanged for the network using the new convolution routines?\\n\\nGiven the importance of efficient convolution routines for deep learning and the solid results, I think this paper should be accepted.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
ByKjYVEYl
Weak Adversarial Boosting
[ "Sreekalyan Deepakreddy", "Raghav Kulkarni" ]
The "adversarial training" methods have recently been emerging as a promising avenue of research. Broadly speaking these methods achieve efficient training as well as boosted performance via an adversarial choice of data, features, or models. However, since the inception of the Generative Adversarial Nets (GAN), much of the attention is focussed on adversarial "models", i.e., machines learning by pursuing competing goals. In this note we investigate the effectiveness of several (weak) sources of adversarial "data" and "features". In particular we demonstrate: (a) low precision classifiers can be used as a source of adversarial data-sample closer to the decision boundary (b) training on these adversarial data-sample can give significant boost to the precision and recall compared to the non-adversarial sample. We also document the use of these methods for improving the performance of classifiers when only limited (and sometimes no) labeled data is available.
[ "Semi-Supervised Learning" ]
https://openreview.net/pdf?id=ByKjYVEYl
https://openreview.net/forum?id=ByKjYVEYl
ICLR.cc/2017/workshop
2017
{ "note_id": [ "Hk0QuY6je", "HJ1hUXMcx", "ryxMdi7oe" ], "note_type": [ "comment", "official_review", "official_review" ], "note_created": [ 1490028581619, 1488234150730, 1489381384140 ], "note_signatures": [ [ "ICLR.cc/2017/pcs" ], [ "ICLR.cc/2017/workshop/paper71/AnonReviewer2" ], [ "ICLR.cc/2017/workshop/paper71/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"title\": \"ICLR committee final decision\"}", "{\"title\": \"Official review For Weak Adversarial Boosting\", \"rating\": \"2: Strong rejection\", \"review\": \"The authors describe recent work where employing adversarial sample data can give significant improvement.\\n\\nThe authors work is not a generative adversarial network (GAN) nor training with adversarial examples, hence I would have difficulty labeling this paper as 'weak adversarial boosting'. In particular, note that adversarial examples are examples generated via gradient propagation in the model that perceptually indistinguishable to humans but are misclassified by the machine learning system. The data the authors describe are instead more like 'hard negatives' as humans judge that the algorithm incorrectly classified these examples.\\n\\nThe authors show that by employing these hard negatives and committee of experts they could improve the quality of the classifier. Both techniques of employing hard negatives and a committee of classifiers are known to be useful for training all sorts of machine learning systems and I do not see what is new in this work.\", \"additional_issues\": [\"Authors have almost no references to prior work including but not limited to GAN's, adversarial examples, boosting, security issues, hard negative mining.\", \"Results based on non-publicly available data so not reproducible.\", \"Results are minimal on one data set and two experiments.\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Confusing description of methods and insufficient experiments\", \"rating\": \"2: Strong rejection\", \"review\": \"This work proposes using a weak classifier to produce data to augment a supervised classifier. The way it does is poorly explained and seems similar to existing work on hard negative mining.\\n\\nIn section 3.1, does 'randomly sample outside B' mean sampling from A - B? Does A \\u2229 B mean the set of examples which were positively labeled by A and B? If this is the case I don't see how that would raise the precision of B to that of A, especially since the negatives that are being used to train B come from the positive set of A. This method of training seems very close to that described in section 3.2.\\n\\nIn particular in section 3.2, it is very unclear why using A \\u2229 B as the positive labels would increase the recall of B. Using the intersection would reduce the number of positive labels and seems like it would reduce the recall compared to the original.\\n\\nThe experimental results are also very weak. There is no description of 'correlated classifier' and no clear definition of what it means to be correlated. The authors also describe training on a random 1000 samples but 1000 samples of what? Since there are only 1000 labels, what are the results of training directly on the 1000 labeled examples? Are the 1000 positives from the low-precision classifier just examples from unlabelled data?\\n\\nI also wouldn't describe these techniques as 'adversarial'. 
Adversarial is normally taken to mean something which intentionally exploits a weakness of the model. None of the described methods intentionally exploit any weakness.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
S1nFVFNYx
A Smooth Optimisation Perspective on Training Feedforward Neural Networks
[ "Hao Shen" ]
We present a smooth optimisation perspective on training multilayer Feedforward Neural Networks (FNNs) in the supervised learning setting. By characterising the critical point conditions of an FNN based optimisation problem, we identify the conditions to eliminate local optima of the cost function. By studying the Hessian structure of the cost function at the global minima, we develop an approximate Newton FNN algorithm, which demonstrates promising convergence properties.
[ "Theory", "Supervised Learning", "Optimization" ]
https://openreview.net/pdf?id=S1nFVFNYx
https://openreview.net/forum?id=S1nFVFNYx
ICLR.cc/2017/workshop
2017
{ "note_id": [ "BJBOzPa5e", "ryBCmxfjg", "HkBDqkp5g", "rk3vGCmse", "SkuVutTog" ], "note_type": [ "comment", "official_review", "official_review", "comment", "comment" ], "note_created": [ 1488970349419, 1489269709160, 1488939612857, 1489392228169, 1490028591933 ], "note_signatures": [ [ "~Hao_Shen1" ], [ "ICLR.cc/2017/workshop/paper89/AnonReviewer2" ], [ "ICLR.cc/2017/workshop/paper89/AnonReviewer1" ], [ "~Hao_Shen1" ], [ "ICLR.cc/2017/pcs" ] ], "structured_content_str": [ "{\"title\": \"Reply to AnonReviewer1\", \"comment\": \"1) I'd like to thank the reviewer for his/her interest in the proposed approach, as well as these constructive comments. In what follows, I address these points accordingly.\\n\\n2) The matrix P is a necessary condition to ensure local minima free in training FNNs. One simple case is the FNN architecture with only one hidden layer. If the number of processing units in the hidden layer is equal to the number of patterns, then the matrix P is guaranteed to be of full rank, i.e., $T \\\\cdot n_{L}$. However, in a general scenario, the rank of matrix P is dependent on the properties of the Khatri-Rao product of identically partitioned matrices. It is worth noticing that a form of column-wise Kronecker product of two matrices is also called the Khatri\\u2013Rao product, which is not the case here. How to ensure matrix P to have a full rank in a general setting is still an open question.\\n\\n3) The assumption of the global minimum being reachable is based on the universal approximation (UA) theorem of FNNs. The UA theorem only guarantees the existence of an FNN, but is unfortunately not constructive. \\n\\n4) I'd be happy to provide more experiments in briefly addressing the reviewer's comments by reducing the introduction.\"}", "{\"title\": \"Review for smooth optimization perspective paper\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"The paper follows an interesting angle on optimizing neural networks. I think the write up can be improved considerably. I'm not sure if this is not the effect of having to restrict itself to 3 pages. E.g. after reading the paper, I'm not sure I know how to implement the proposed approximate Newton method in the paper.Section 3 provides some theoretical analysis of the optimization algorithm, but is not clear that those theorems sums to an algorithm that I could code up.\\n\\nI think (and maybe it is a bit harsh) that even as a workshop submission the paper is not yet clear enough (at least the pdf) and more effort needs to be put in explaining what the different constraints in the theorems mean (and how to achieve them) and in particular how do you get the algorithm.\\n\\nThe other aspect that I feel uncomfortable with, regarding the approach taken by the authors, is that it seems to me that the constraint of rank(P) = T * n_L, where T is the number of examples, spells out that the network memorized the training set rather than learned. IMHO, while pursuing this quest of either proving that the error surface has no \\\"bad\\\" local minima, or removing them is very valuable, is only so if we can get insights of why this works in a case where you *learn* a good solution, i.e. one that generalizes. 
I would like if the result is somehow independent of the dataset size, and has more something to do with the underlying structure of the data and nature of the network.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Interesting approach, needs more support\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"The paper presents a smooth optimization perspective on feed forward neural networks and discusses a condition where the local optima does not exist in training. Next, by studying Hessian, it develops an approximate newton algorithm and provides an experiment showing the convergence attitude on the four regions classification benchmark.\\n\\nAlthough the approach is interesting, the paper lacks some important pieces: Theorem 1 relies on the matrix P to be full rank but it does not provide any cases or sufficient conditions when this holds. also Theorem 1 assumes the global minimum w* is reachable but does not provide any insights into when this holds. even a couple of examples would be good. I understand that this is a short version but the author could easily fit this in by reducing the introduction.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Reply to AnonReviewer2\", \"comment\": \"The author appreciates the comments from the reviewer and his/her understanding about the challenge in squeezing such a tedious but straightforward analysis in three pages. Apparently, the author didn't succeed it. Nevertheless, these constructive comments will significantly improve the quality of future submissions.\\n\\nA brief technical reply to the reviewer's comments is that \\n1) the condition $rank(P) = T * n_L$ has direct implications about the architecture of the network, and that\\n2) the interpretation of the number T is more subtle than the sample size.\"}", "{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}" ] }
ryh9ZySFg
Memory Matching Networks for Genomic Sequence Classification
[ "Jack Lanchantin", "Ritambhara Singh", "Yanjun Qi" ]
When analyzing the genome, researchers have discovered that proteins bind to DNA based on certain patterns on the DNA sequence known as "motifs". However, it is difficult to manually construct motifs for protein binding location prediction due to their complexity. Recently, external learned memory models have proven to be effective methods for reasoning over inputs and supporting sets. In this work, we present memory matching networks (MMN) for classifying DNA sequences as protein binding sites. Our model learns a memory bank of encoded motifs, which are dynamic memory modules, and then matches a new test sequence to each of the motifs to classify the sequence as a binding or non-binding site.
[ "Deep learning", "Supervised Learning", "Applications" ]
https://openreview.net/pdf?id=ryh9ZySFg
https://openreview.net/forum?id=ryh9ZySFg
ICLR.cc/2017/workshop
2017
{ "note_id": [ "rkT-S9Vog", "rkVIuKpox", "Bk2eBcEsg", "rJHRNqEix", "SkIbkxbol", "ryql9ogol", "rkzrS94ol", "Syr1r9Nig", "ryfnE5Ejl", "rktp-qEsl" ], "note_type": [ "comment", "comment", "comment", "comment", "official_review", "official_review", "comment", "comment", "comment", "comment" ], "note_created": [ 1489442052736, 1490028620232, 1489442036310, 1489441997242, 1489202941897, 1489185265857, 1489442106044, 1489442013036, 1489441962152, 1489441216840 ], "note_signatures": [ [ "~Jack_Lanchantin1" ], [ "ICLR.cc/2017/pcs" ], [ "~Jack_Lanchantin1" ], [ "~Jack_Lanchantin1" ], [ "ICLR.cc/2017/workshop/paper133/AnonReviewer2" ], [ "ICLR.cc/2017/workshop/paper133/AnonReviewer1" ], [ "~Jack_Lanchantin1" ], [ "~Jack_Lanchantin1" ], [ "~Jack_Lanchantin1" ], [ "~Jack_Lanchantin1" ] ], "structured_content_str": [ "{\"title\": \"Comment 1\", \"comment\": \"Q1: The most substantial problem is the authors' suggestion that the memory in their model is \\\"dynamically learned\\\" without any description of how that aspect is implemented. Though the authors describe how the memory is used, I'm clueless as to how the memory is specified or \\\"learned\\\". This is a significant absence, and a dealbreaker for my recommendation as a reviewer.\", \"a1\": [\"Thank you for pointing out this issue in our writing as it is certainly an important part of the paper. We have revised Section 2 of the manuscript by adding the following descriptions:\", \"Each memory module is learned via a lookup table with a constant input at each position (e.g. 1 as input to the first position and t as input to position t).\", \"To learn each of the L memory matrices, we use a separate lookup table.\", \"Essentially each vector of a lookup table learns to encode the representation of that position with an embedding vector of dimension p, where p is a hyperparameter we specify (more details below in A3).\", \"Each lookup table is progressively learned during training. However, it is indeed different from the writing module in traditional memory papers such as the Neural Turing Machine.\", \"We encode the test input using another lookup table with the actual sequence as input to produce the input matrix S. Thus, this lookup table is of dimension p, with input size 4 (for A,C,G,T, which are represented as 1,2,3,4).\"]}", "{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}", "{\"title\": \"Comment 2\", \"comment\": \"Q2: Furthermore, the authors also decline to describe the second layer of LSTMs in the form of the g function. The authors state that the f function maps DNA sequences to a vector, and they state that the g' function shares the weights from f. The authors suggest that g is an LSTM, but fail to explain why or how the LSTM operates on a single vector.\", \"a2\": [\"Thank you for helping us realize this issue. Our old writing was indeed confusing. We have revised Section 2 with a much better logic flow.\", \"f() is a bidirectional attention LSTM on S\", \"g\\u2019() is a bidirectional attention LSTM on each M_i to encode position dependencies of all t positions in M_i. g\\u2019() shares the same weights with f(). In other words, the same LSTM maps not only S into a vector, but each memory matrix M_i.\", \"g() is another bidirectional LSTM (without attention) taking the outputs of g\\u2032(M_1), g\\u2032(M_2), \\u2026, g\\u2032(M_L) as inputs to encode dependencies among the memory motifs. 
The g\\u2019() output at each index 1, 2, .., L, produces the final memory vectors m_1, m_2, \\u2026, m_L.\"]}", "{\"title\": \"Comment 4\", \"comment\": \"Q4: If the authors care about producing a useful tool for biologists, they will have to relax their assumption of a dataset balanced between positive and negative examples. TFBS in mammalian genomes is picking needles out of a haystack.\", \"a4\": [\"Thank you for bringing up a very important aspect in this line of work.\", \"The current datasets we use for comparing the proposed matching-network with baselines are from the Alipanahi et. al, Nature Biotech 2015 paper: Predicting the sequence specificities of DNA- and RNA-binding proteins by deep learning\", \"Alipanahi et. al used balanced datasets for TFBS tasks. To compare with this state-of-the-art CNN baseline, we use the same datasets as they did.\", \"We certainly agree with this view, and we have been working on creating a dataset which has a more realistic split of samples.\"]}", "{\"title\": \"Interesting direction, incomplete description\", \"rating\": \"3: Clear rejection\", \"review\": \"The authors suggest a novel approach to classifying transcription factor (TF) binding to DNA sequences based on a neural network model that utilizes memory data structures. In the authors' experiments on a previously studied dataset, their technique exceeds the accuracy of convolutional and recurrent neural networks.\", \"pros\": \"-The authors have taken on an important problem--TF binding is critical to gene regulation and our present models are insufficient to precisely predict bound sites in large mammalian genomes. Advances in this space have great value, and the authors may have an advance here.\", \"major_cons\": \"-The most substantial problem is the authors' suggestion that the memory in their model is \\\"dynamically learned\\\" without any description of how that aspect is implemented. Though the authors describe how the memory is used, I'm clueless as to how the memory is specified or \\\"learned\\\". This is a significant absence, and a dealbreaker for my recommendation as a reviewer.\\n-Furthermore, the authors also decline to describe the second layer of LSTMs in the form of the g function. The authors state that the f function maps DNA sequences to a vector, and they state that the g' function shares the weights from f. The authors suggest that g is an LSTM, but fail to explain why or how the LSTM operates on a single vector.\", \"minor_cons\": \"-The authors suggest that the memory matrix column size p can take values other than 4. Doesn't this dimension refer to the 4 nucleotides? When p is set to different values, how are the authors representing the DNA sequences?\\n-If the authors care about producing a useful tool for biologists, they will have to relax their assumption of a dataset balanced between positive and negative examples. TFBS in mammalian genomes is picking needles out of a haystack.\\n-The language in the paper is sloppy in multiple places. E.g. current TFBS motifs aren't \\\"manually\\\" defined as the authors state; they are computationally defined using different methods. 
The authors also suggest that doctors may be reluctant to use their model, which is irrelevant; doctors do not examine TFBS.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"simple model, interesting application\", \"rating\": \"7: Good paper, accept\", \"review\": \"This work uses a softmax over a set of trained templates to predict whether or not a given protein will bind to a DNA sequence. Given that the templates are fixed, it seems a bit of a stretch to refer to the model as a \\\"memory\\\" model; but the task and the results are nice.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Response to comments of Reviewer 2\", \"comment\": \"We would like to thank Reviewer 2 for the thorough and important comments which made our manuscript unclear. We have revised the paper accordingly, and we hope that we have properly explained the missing details which were crucial to understanding our methods. Below we explain our responses (in separate comments since openreview won't allow them all to be in one post).\"}", "{\"title\": \"Comment 3\", \"comment\": \"Q3: The authors suggest that the memory matrix column size p can take values other than 4. Doesn't this dimension refer to the 4 nucleotides? When p is set to different values, how are the authors representing the DNA sequences?\", \"a3\": [\"Thank you for asking this since it is important to understand how our model encodes the DNA sequence both in the input and memory spaces.\", \"Related to our answer to Q1, p is a hyperparameter we specify since it is simply the embedding dimension. We tried the case of setting p=2, 4, 8, 16. p=4 gave the best overall classification performance.\", \"Since each position of both the input DNA sequence and memory units are represented in an embedding space of dimension p, we can vary the hidden dimension p to any size.\", \"Interestingly, we also tried another memory-NN structure by (1) encoding the input sequence as one-hot vectors, (2) setting p=4 for the memory lookup tables, and (3) then adding a softmax operation on each column vector of M_i. The rest of the network are the same as the proposed. train the whole network. Training on the same datasets, this actually resulted in better accuracy (mean AUC of 0.94), but we implemented this after the submission so the results were not reported. We hypothesized that the softmax outputs from the memory units in this new structure can learn probability distributions of the 4 nucleotides (A,C,G,T) at each memory position.\"]}", "{\"title\": \"Comment 5\", \"comment\": \"Q5: The language in the paper is sloppy in multiple places. E.g. current TFBS motifs aren't \\\"manually\\\" defined as the authors state; they are computationally defined using different methods. The authors also suggest that doctors may be reluctant to use their model, which is irrelevant; doctors do not examine TFBS.\", \"a5\": [\"Thank you for pointing out the wording issues.\", \"\\u201cManually created\\u201d was a poor choice of words there. We have removed any reference of \\u201cmanual creation\\u201d in the revised manuscript.\", \"Additionally, we should have said \\u201cbiomedical researchers\\u201d instead of the term \\u201cdoctors\\u201d. 
We have updated this in our manuscript.\"]}", "{\"title\": \"Response to the comments of Reviewer 1\", \"comment\": [\"We would like to thank Reviewer 1 for the helpful comments on improving the clarity of our paper.\", \"We agree the memory modules we learned from training is a bit different than the memory module in the neural turing machine since there is no explicit \\u201cwriting\\u201d scheme.\", \"But since they are implicitly learned and written to via backprop using the training samples, we like to think of them as memory which we can read from.\", \"We agree that the memory \\u201cunits\\u201d are indeed \\u201ctemplates\\u201d, which is a better choice of wording. We have added this distinction into the manuscript and use the term \\u201cmemory templates\\u201d instead.\"]}" ] }
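To make the matching step discussed above concrete, here is a minimal NumPy sketch of reading from a bank of learned memory templates with a softmax over similarities. The sequence encoders (the bidirectional attention LSTMs f, g' and g described in the responses) and the final classification head are replaced by placeholders, and all shapes and the cosine/sigmoid choices are assumptions for illustration:

import numpy as np

rng = np.random.default_rng(0)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Placeholder sizes: L learned memory ("motif template") vectors of dimension h.
L, h = 8, 32
memory = rng.standard_normal((L, h))   # in the model these come from learned lookup
                                       # tables encoded by the biLSTMs g' and g
query = rng.standard_normal(h)         # stand-in for f(S), the biLSTM-with-attention
                                       # encoding of the test DNA sequence

weights = softmax(np.array([cosine(query, m) for m in memory]))
read = weights @ memory                # attention-weighted combination of templates

# Placeholder classification head: the real model predicts binding vs. non-binding
# from the match; a logistic unit on the read vector is an assumption here.
w_out = rng.standard_normal(h)
p_bind = 1.0 / (1.0 + np.exp(-(read @ w_out)))
print(p_bind)
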
B1fUVMzKg
Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization
[ "Xun Huang", "Serge Belongie" ]
Gatys et al. (2015) recently introduced a neural algorithm that renders a content image in the style of another image, achieving so-called \emph{style transfer}. However, their framework requires a slow iterative optimization process, which limits its practical application. Fast approximations with feed-forward neural networks have been proposed to speed up neural style transfer. Unfortunately, the speed improvement comes at a cost: the network is usually tied to a fixed set of styles and cannot adapt to arbitrary new styles. In this paper, we present a simple yet effective approach that for the first time enables arbitrary style transfer in real-time. At the heart of our method is a novel adaptive instance normalization~(AdaIN) layer that aligns the mean and variance of the content features with those of the style features. Our method achieves speed comparable to the fastest existing approach, without the restriction to a pre-defined set of styles.
[ "Computer vision", "Unsupervised Learning", "Applications" ]
https://openreview.net/pdf?id=B1fUVMzKg
https://openreview.net/forum?id=B1fUVMzKg
ICLR.cc/2017/workshop
2017
{ "note_id": [ "H1RsTFssl", "Hy9L2DDjl", "r1vzdFaie", "ByQ8u0Ace", "SJKTic4ie" ], "note_type": [ "comment", "comment", "comment", "official_review", "official_review" ], "note_created": [ 1489898918191, 1489628241624, 1490028559001, 1489066058647, 1489443777038 ], "note_signatures": [ [ "~Xun_Huang1" ], [ "~Xun_Huang1" ], [ "ICLR.cc/2017/pcs" ], [ "ICLR.cc/2017/workshop/paper29/AnonReviewer1" ], [ "ICLR.cc/2017/workshop/paper29/AnonReviewer2" ] ], "structured_content_str": [ "{\"title\": \"Updated results\", \"comment\": \"We have updated new results with improved quality and qualitative comparisons with Ulyanov et al. 2017, Chen and Schmidt 2016, and Gatys et al. 2016. The main difference is to use relu4_1 instead of relu3_1 of the VGG network.\"}", "{\"title\": \"Added more experiments to test our hypothesis\", \"comment\": \"Thanks for the helpful feedback!\\n\\nWe have added experimental results in appendix to support our hypothesis that instance normalization (IN) does perform a kind of style normalization. Ulyanov et al. attribute the success of IN to its invariance to the content image contrast. We find this explanation unsatisfactory, as IN remains effective even when all images are already contrast normalized (Fig. 3(b)). However, the improvement brought by IN is much smaller when images are already style normalized (Fig. 3(c)).\\n\\nOur method is not as good as the single-style transfer method (Ulyanov et al.) for some images. We believe this is not unexpected, and does not conflict with our assumption. We train 1 network for ~80000 styles and test it on new styles, while Ulyanov et al. fit 1 network to 1 style and test it on the same style. It is not surprising that the latter should fit the objective better while sacrificing flexibility.\"}", "{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}", "{\"title\": \"Interesting direction for fast style transfer\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"The submission proposes a method for fast style transfer with arbitrary styles. A content and style image is encoded in the \\u2018relu3_1\\u2019 feature space of the VGG-19 network. Then the content feature maps are shifted and scaled to match the mean and variance of the style feature maps. Finally a decoder network is trained to invert the representations back to image space.\\n\\nThis is a promising direction for fast style transfer with arbitrary style targets. The results so far are not bad but also not particularly compelling. \\nNevertheless, I would expect them to improve to a level comparable to other fast style transfer methods with a little more engineering, e.g. by improving the decoder.\\n\\nAll in all this submission could be appropriate for presentation at the ICLR Workshops.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Minor tweak for doing style transfer quickly\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"Summary: The paper proposes a novel approach for doing style transfer in a fast manner. The authors conjecture that the instance normalization in the feature space performs style normalization. Inspired by their hypothesis they propose a new module, which they call, Adaptive Instance Normalization (AdaIN) which adaptively normalizes the input to an arbitrary given style. The entire system is a feed-forward network and hence transferring the style is extremely fast. 
The output images shown in the paper partially validate the author's hypothesis.\\n\\nI think the paper proposes an interesting tweak to doing fast style transfer. The authors validate their hypothesis over a handful of images. While the results show that the style transfer does happens, the results are not as good at the previously proposed approaches. As a result I feel that there's more to it than the author's simple hypothesis. Nevertheless the paper proposes an interesting idea and can potentially be worth talking about as a workshop presentation.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
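The AdaIN operation described in the abstract (align the per-channel mean and variance of the content features with those of the style features) can be written in a few lines; the sketch below uses NumPy with assumed feature-map shapes, and omits the trained decoder and the content/style losses used in the full method:

import numpy as np

def adain(content, style, eps=1e-5):
    # Feature maps have shape (N, C, H, W); statistics are computed per sample
    # and per channel over the spatial dimensions, as in instance normalization.
    c_mean = content.mean(axis=(2, 3), keepdims=True)
    c_std = content.std(axis=(2, 3), keepdims=True) + eps
    s_mean = style.mean(axis=(2, 3), keepdims=True)
    s_std = style.std(axis=(2, 3), keepdims=True) + eps
    return s_std * (content - c_mean) / c_std + s_mean

# Toy arrays standing in for VGG feature maps of a content and a style image.
content_feat = np.random.randn(1, 512, 32, 32)
style_feat = np.random.randn(1, 512, 32, 32)
out = adain(content_feat, style_feat)
print(out.shape)   # (1, 512, 32, 32); in the full method a trained decoder maps
                   # this back to image space
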
BJ_X2yHFe
Semantic embeddings for program behaviour patterns
[ "Alexander Chistyakov", "Ekaterina Lobacheva", "Arseny Kuznetsov", "Alexey Romanenko" ]
In this paper, we propose a new feature extraction technique for program execution logs. First, we automatically extract complex patterns from a program's behaviour graph. Then, we embed these patterns into a continuous space by training an autoencoder. We evaluate the proposed features on a real-world malicious software detection task. We also find that the embedding space captures interpretable structures in the space of pattern parts.
[ "Deep learning", "Unsupervised Learning", "Applications" ]
https://openreview.net/pdf?id=BJ_X2yHFe
https://openreview.net/forum?id=BJ_X2yHFe
ICLR.cc/2017/workshop
2017
{ "note_id": [ "HJNYaPO5e", "ByAj-dvil", "r1lnLdFasg" ], "note_type": [ "official_review", "official_review", "comment" ], "note_created": [ 1488645499836, 1489629605864, 1490028628443 ], "note_signatures": [ [ "ICLR.cc/2017/workshop/paper144/AnonReviewer2" ], [ "ICLR.cc/2017/workshop/paper144/AnonReviewer1" ], [ "ICLR.cc/2017/pcs" ] ], "structured_content_str": [ "{\"title\": \"Nice idea\", \"rating\": \"9: Top 15% of accepted papers, strong accept\", \"review\": \"The paper proposes converting the behavior of a particular program into a graph of verbs and objects. This graph is then converted into a vector via an autoencoder. The vector is then used to classify the program into either malware or clean. Very nice experimental results (i.e., testing at a very low false-positive rate).\\n\\nA perfectly reasonable system, professionally executed. Definite accept to workshop (rating is with respect to workshop acceptance)\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"This paper proposes feature extraction based on program execution logs using a behavior graph. Pattern embeddings are obtained through a trained autoencoder. The idea is interesting, but there are not enough explanations on the algorithm and the experiments. For example, if the evaluation is to be measured by malware detection, is it possible to apply supervised learning (with DNN) to the problem? What is the difference of the performance? Will the extracted features by the propose algorithm bring good performance to tasks other than the malware detection? More detailed explanation would make the paper better.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}" ] }
r1X_kR4Yl
DeepCloak: Masking Deep Neural Network Models for Robustness Against Adversarial Samples
[ "Ji Gao", "Beilun Wang", "Zeming Lin", "Weilin Xu", "Yanjun Qi" ]
Recent studies have shown that deep neural networks (DNNs) are vulnerable to adversarial samples: maliciously perturbed samples crafted to yield incorrect model outputs. Such attacks can severely undermine DNN systems, particularly in security-sensitive settings. It has been observed that an adversary can easily generate adversarial samples by making a small perturbation on irrelevant feature dimensions that are unnecessary for the current classification task. To overcome this problem, we introduce a defensive mechanism called DeepCloak. By identifying and removing unnecessary features in a DNN model, DeepCloak limits the capacity an attacker can use to generate adversarial samples and therefore increases the robustness against such inputs. Compared with other defensive approaches, DeepCloak is easy to implement and computationally efficient. Experimental results show that DeepCloak can increase the performance of state-of-the-art DNN models against adversarial samples.
[ "Deep learning" ]
https://openreview.net/pdf?id=r1X_kR4Yl
https://openreview.net/forum?id=r1X_kR4Yl
ICLR.cc/2017/workshop
2017
{ "note_id": [ "rJ6c4T6qx", "rkluB_FTox", "BkbuhP1ix", "H1oPHENig", "rkC_BFJjx" ], "note_type": [ "official_review", "comment", "comment", "comment", "official_review" ], "note_created": [ 1488995477399, 1490028608450, 1489103977004, 1489417571368, 1489110390309 ], "note_signatures": [ [ "ICLR.cc/2017/workshop/paper114/AnonReviewer2" ], [ "ICLR.cc/2017/pcs" ], [ "~Ji_Gao1" ], [ "~Ji_Gao1" ], [ "ICLR.cc/2017/workshop/paper114/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Interesting, but unclear applicability\", \"rating\": \"7: Good paper, accept\", \"review\": \"The paper proposes an approach that masks out those features preceding the last fully connected layer that are most sensitive to changes in adversarial examples in order to improve its robustness.\\n\\nAlthough, it is a workshop paper, some details should be clarified more. For example it is unclear whether the masking happens across locations or also it depends on the spatial location of the neuron, as it is applied on a max-pooling layer.\\n\\nAlthough it is irrelevant for the point they try to make, they claim the starting model they utilize is state of the art for accuracy, which is not the case.\\n\\nThe most interesting observation (and the weak point at the same time) is that adversarial robustness is greatly improved after masking a single feature output, but addition masking has relatively large negative effect on the final accuracy, while improving the adversarial robustness insignificantly. This could have several possible explanations that paper fails to explore, but accepts only the first explanation as correct:\\n- This is a fundamental phenomenon that occurs generally \\n- This is specific to this data set.\\n- This is specific to this model\\nGiven the lack of additional experimental evidence, it is hard to assess which possibility is most likely.\\n\\n[Edit]: Given the latest updates to the paper that have significantly increased the quality, I have revised my score up.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}", "{\"title\": \"Thank you for your review!\", \"comment\": \"Dear Reviewer:\\n\\nThank you for your valuable comments! We have revised our paper according to your suggestions. More specifically,\", \"a0\": \"We just realize that there is a famous project from Facebook Research also called DeepMask. Therefore we change our method name to DeepCloak.\", \"q1\": \"\\u201cSome details should be clarified more. For example it is unclear whether the masking happens across locations or also it depends on the spatial location of the neuron, as it is applied on a max-pooling layer.\\u201d\", \"a1\": \"Thank you for pointing out this unclear part. The mask layer is applied on the output of the max-pooling layer in the example shown in Figure 1. We have added necessary details as follows,\\n1. We have revised the description of our method on Page 2.\\n2. We have revised the schematic Figure 1 by adding more illustrations and made the figure clearer. \\n3. We have modified the illustration of Algorithm 1 to make it clearer.\\nAll above revisions we have made are included in Section 3 (Page 2 and Page 3).\", \"q2\": \"\\u201cAlthough it is irrelevant for the point they try to make, they claim the starting model they utilize is state of the art for accuracy, which is not the case.\\u201d\", \"a2\": \"You\\u2019re definitely right. 
We have removed this wrong claim from our paper. The state-of-the-art algorithm on CIFAR-10 is Wide Residual Network[1]. In Table 1, we show the result of another popular model Residual Network.Our Residual Network is properly trained and achieves reasonable accuracy. \\nIn addition, we have added results of the Wide Residual Network into a new Table 3, in the appendix of the revised version of the paper.\", \"q3\": \"\\u201cWhy adversarial robustness is greatly improved after masking a single feature output, but addition masking has relatively large negative effect on the final accuracy, while improving the adversarial robustness insignificantly.\\u201d\", \"a3\": \"Thank you for bringing our attention to this interesting question. To give this question a proper answer, we have added results about three more DNN models. These experimental results have been put into the appendix.\\nIn summary, our experiment currently cover 4 different models, including a small CNN on the MNIST dataset and VGG, ResNet, Wide ResNet on the CIFAR-10 dataset (See Section 6.3)\\nOur experimental results indicate the effectiveness of DeepCloak is related to DNN model structure. \\nModel structures like Residual Network reduce the number of the features in training. In such networks, only masking a small number of features can lead to a significant increase in adversarial performance. However, when more features are masked out, DeepCloak leads to the performance decrease since necessary features might be masked as well. \\nNetworks like VGG model seem to include more unnecessary feature dimensions. So DeepCloak needs to mask more nodes to improve the adversarial performance. At the same time, masking more nodes doesn\\u2019t lead to the performance decrease as fast as the case in Residual Network.\\nWe will investigate the interesting question by implementing more experiments on different models and datasets in a longer version of our paper.\\n\\n[1]: Zagoruyko, Sergey, and Nikos Komodakis. \\\"Wide residual networks.\\\" arXiv preprint arXiv:1605.07146 (2016).\"}", "{\"title\": \"Thank you for your review!\", \"comment\": \"Dear Reviewer:\\n\\nThank you for your valuable comments! We have revised our paper according to your suggestions.\", \"q1\": \"\\u201cThe motivation of this method is to remove irrelevant feature. But it is questionable that this is the only reason of the existence of adversarial examples. It then leads to the question how important this factor is. \\u201d\", \"a1\": [\"Thank you for pointing out this issue of writing.\", \"We have revised the caption of Figure~2 into \\u201cOne possible type of adversarial vulnerability when learning a linear classifier from unnecessary features.\\u201d\", \"Yes. We totally agree that unnecessary features may not be the only reason for the existence of adversarial examples.\", \"Methods like Jacobian-based saliency map approach, JSMA[1] limit the perturbation on a few number of feature dimensions. Figure-2 is a simple sketch of example attacks in this category.\", \"To defend against similar attacks like JSMA, DeepCloak may be a quick and simple strategy by reducing the number of unnecessary features. We are in the process of adding JSMA based experimental results.\"], \"q2\": \"\\u201dDetermining irrelevant features by comparing the magnitude of the changes between original sample and the adversarial sample seems questionable. 
Further investigations are needed to verify this measurement.\\u201d\", \"a2\": [\"We want to clarify the question and answer accordingly.\", \"For each pair of seed sample and its adversarial sample (for a fix \\\\epsilon), we use an entry-wise L1 to get a difference vector; then we get an accumulated vector by summing up difference vectors from all such pairs of training samples; then the mask is learned by assigning top $k$ entries of the accumulated vector into 0 and the rest into 1.\", \"The accumulated vector provides a summary of how each feature dimension (from g(x) layer) varies across a population of samples.\", \"Presented in Figure-2, the basic motivation of DeepCloak is that the distance between an adversarial sample and its seed example will be small along necessary feature dimension (horizontal direction in Figure-2). The distance is relatively large along the unnecessary dimensions (vertical direction in Figure-2) for the current task. DNN combines feature extraction and classification in one model. Therefore, we propose to remove unnecessary features for a DNN model (by masking the feature vector from the output of g(x) layer). This mask layer directly modifies DNN network structure and requires no retraining.\", \"The feature difference vector between adversarial sample and seed sample is one simple and intuitive measurement that can find out which features are used more or less by adversarial samples. Currently, we are using the entry-wise L1 to capture such a relationship. We are going to try different distance measures as the next step.\"], \"q3\": \"\\u201dThe experimental results of the paper seems weak. Especially, although the results indicate improvement in robustness, it is not as good as the current results in the literature.\\u201d\", \"a3\": [\"We agree that our current result is not as good as from adversarial training.\", \"However, most recent defense approaches like adversarial training are retraining based methods;\", \"Adversarial training[2] injects adversarial samples with correct labels in the training set to retrain DNN model. It is computationally expensive and has been claimed to only works in defending against the attack that produced the added adversarial examples.\", \"Our method requires no further training and directly modifies DNN model structure. This means our approach is complementary to other popular defense methods.\", \"We will add our method on top of adversarial training to compare and check its effectiveness.\"], \"q4\": \"\\u201dSome comparative experiments should also be tested to better justify the new method. For example, replacing the masking layer by a layer of dropout or pooling, what would be the results? \\u201d\", \"a4\": [\"Thank you for your comment.\", \"We have added experimental result that compares DeepCloak with a random mask layer (which works the same as a dropout layer in the test phase) to justify our method.\", \"The new results are added in the Appendix, Section 6.3.2 (Page 6 - Page 7). Our results indicate that RandomMask has almost no effect on adversarial accuracy. DeepCloak achieves much better performance in comparison with RandomMask.\"], \"reference\": \"[1]: Papernot, Nicolas, et al. \\\"The limitations of deep learning in adversarial settings.\\\" Security and Privacy (EuroS&P), 2016 IEEE European Symposium on. IEEE, 2016.\\n[2]: Szegedy, Christian, et al. 
\\\"Intriguing properties of neural networks.\\\" arXiv preprint arXiv:1312.6199 (2013).\"}", "{\"title\": \"Official Review\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"This paper proposes another method to improve the robustness of DNN classifiers. The proposed method, deepcloak, attends to remove unneccesary features in the feature layer by inserting a 0-1 filter for the feature. The experimental results seem a little bit weak.\\n\\nThe writing of the paper is good. The idea of the deepcloak is new, but not quite convincing:\\n1. The motivation of this method is to remove irrelevant feature. But it is questionable that this is the only reason of the existence of adversarial examples. It then leads to the question how important this factor is. \\n2. Determining irrelevant features by comparing the magnitude of the changes between original sample and the adversarial sample seems questionable. Further investigations are needed to verify this measurement.\", \"pros\": \"1. The idea of the proposed method, deepcloak, is new.\", \"cons\": \"1. The motivation and the methodology of this paper seem questionable. It would need better justifications.\\n2. The experimental results of the paper seems weak. Especially, although the results indicate improvement in robustness, it is not as good as the current results in the literature.\\n3. Some comparative experiments should also be tested to better justify the new method. For example, replacing the masking layer by a layer of dropout or pooling, what would be the results?\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
SkCmfeSFg
Forced to Learn: Discovering Disentangled Representations Without Exhaustive Labels
[ "Alexey Romanov", "Anna Rumshisky" ]
Learning a better representation with neural networks is a challenging problem, which has been tackled extensively from different perspectives in the past few years. In this work, we focus on learning a representation that can be used for clustering and introduce a novel loss component that substantially improves the quality of the produced clusters, is simple to apply to an arbitrary cost function, and does not require a complicated training procedure.
[ "discovering", "representations", "exhaustive labels", "better representation", "neural networks", "challenging problem", "different prospectives", "past", "years", "work" ]
https://openreview.net/pdf?id=SkCmfeSFg
https://openreview.net/forum?id=SkCmfeSFg
ICLR.cc/2017/workshop
2017
{ "note_id": [ "H1s9EJLje", "H16wuYTog", "rkNOtxihl", "SyFCldOse", "BkI82h0ql" ], "note_type": [ "official_review", "comment", "comment", "comment", "official_review" ], "note_created": [ 1489527955520, 1490028644775, 1490909548141, 1489694929057, 1489058893975 ], "note_signatures": [ [ "ICLR.cc/2017/workshop/paper165/AnonReviewer2" ], [ "ICLR.cc/2017/pcs" ], [ "~Alexey_Romanov1" ], [ "~Alexey_Romanov1" ], [ "ICLR.cc/2017/workshop/paper165/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"A new auxiliary cost function to learn representations useful for clustering.\", \"rating\": \"7: Good paper, accept\", \"review\": \"This paper introduces an auxiliary cost function which forces the representation learnt in a particular layer to be useful for clustering. This can be added to any classification model or unsupervised model like autoencoder. Authors show that adding this loss helps in better clustering of examples in a binary classification task and auto-encoder task. The model does not have access to any cluster information and it learns to group examples based on their characteristics.\\n\\nThe proposed objective function roughly maximizes the KL-Divergence between the probability distributions induced from the row vectors in a weight matrix. I like the idea of directly considering weights instead of considering the unit activations as done in DeCov. Also from experiments, proposed approach does better than DeCov.\\n\\nWhile this is an interesting idea, I encourage the authors to verify the benefits of the proposed loss in a variety of datasets. Also it looks like one can combine both DeCov and the proposed loss. They look complementary.\\n\\nCan you also plot Figure 2 with DeCov loss?\", \"pros\": [\"New cost function for learning representations useful for clustering\", \"Proof of concept experiments that show the method works.\"], \"cons\": [\"Needs benchmarking with several tasks and datasets.\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}", "{\"title\": \"We added the figure for DeCov loss\", \"comment\": \"Dear reviewer,\\n\\nWe have updated the paper with the figure for the DeCov loss. The observed trend (a strong activation of one of the neurons in the penultimate layer) is even clearer in the case of DeCov, which corresponds to the lowest AMI score on the clustering task.\"}", "{\"title\": \"Response to reviewers\", \"comment\": \"We would like to thank the reviewers for their thoughtful comments. We will update the abstract according to your suggestions.\", \"reviewer1\": \"- \\u201cI don't see how weight matrices with \\\"different rows\\\" would yield better representations. There is a complex interaction with other layers, learning algorithm, nonlinearities and hidden states which could easily destroy the intended benefits of this regularizer.\\u201d\\n\\nConsider what the learned representation looks like without the loss. When trained for binary classification, the network basically learns the weights that correspond to two patterns of activations. That means, the result of multiplication of the weight matrix by the input vector by will produce a vector that will be very uniform except for two neurons activated for the corresponding outcomes.\\n\\nThe only way to get better, disentangled representations for a fixed input from the previous layer, is to have the rows of this matrix be sufficiently different from each other. 
Our loss component forces the rows to be different effectively leading to disentangled representations for different underlying classes in the input data.\\n\\nHowever, we agree that in more complex deep networks it might not be enough apply the loss component to the penultimate layer only. In fact, in the full version of the paper, we propose a modified version of the loss that works on the full network and achieves better results. \\n\\n- \\u201cIt's also not clear why looking at the rows in probabilistic sense (through the softmax and KL) is necessary at all. Why not simply taking the L2 for example?\\u201d\\nIndeed, the proposed approach is universal in that sense. We conducted our initial experiments using the KL divergence as a \\u201cmeasure\\u201d of similarity; we expect other metrics to work as well.\", \"reivewer2\": \"\\u201c- Needs benchmarking with several tasks and datasets.\\u201d\\nWe agree, and we have done more experiments since the abstract was submitted. In particular, we wanted to make sure that we validate the proposed loss component on two most common types of models: RNN and CNN. We therefore experimented with a CNN model on the CIFAR-10 dataset and found that the proposed loss leads to a better clustering in terms of AMI scores in case of CNN as well. \\n\\n\\u201c- Can you also plot Figure 2 with DeCov loss?\\u201d\\nYes, we will be happy to update the abstract with a figure for DeCov loss.\"}", "{\"title\": \"Insufficient motivation for the proposed solution\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"This paper proposes a regularizer to promote representations that\\ncan easily be clustered in the context of time series classification.\\n\\nThe regularizer proposed by the authors looks at the pairwise KL between \\nthe softmax-induced probability distribution from each row of the weight matrix. \\nThe rationale is that a weight matrix with rows that are \\\"different\\\" enough \\nfrom each other would yield more diversity in the representation. \\n\\nWhile favorable experiments are provided, I am not convinced in the reasoning underlying \\nthe proposed cost function. I don't see how weight matrices with \\\"different rows\\\"\\nwould yield better representations. There is a complex interaction with other layers, \\nlearning algorithm, nonlinearities and hidden states which could easily destroy \\nthe intended benefits of this regularizer. It's also not clear why looking at the \\nrows in probabilistic sense (through the softmax and KL) is necessary at all. \\nWhy not simply taking the L2 for example ?\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
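The regularizer discussed in the reviews above is described as a pairwise KL divergence between the softmax-induced distributions over the rows of a weight matrix, encouraged to be large so that the rows stay distinct. The NumPy sketch below illustrates only that quantity; it is not the authors' implementation, and the sign convention (returning a negative value so that minimizing it pushes rows apart), the layer it is applied to, and any weighting against the task loss are assumptions.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax along the last axis.
    z = x - x.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def row_kl_loss(W):
    """Negative mean pairwise KL between softmax-normalized rows of W.

    Minimizing this value drives the row distributions apart, which is the
    effect the reviews attribute to the proposed loss component.
    """
    P = softmax(W)                       # one probability distribution per row
    logP = np.log(P)
    kl = (P[:, None, :] * (logP[:, None, :] - logP[None, :, :])).sum(-1)  # KL(P_i || P_j)
    n = W.shape[0]
    return -(kl.sum() - np.trace(kl)) / (n * (n - 1))

W = np.random.randn(10, 64)              # e.g. a penultimate-layer weight matrix
print(row_kl_loss(W))
```

In a real model this term would be added, with some coefficient, to the classification or autoencoder objective.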
rJV7l2VFg
Playing SNES in the Retro Learning Environment
[ "Nadav Bhonker", "Shai Rozenberg", "Itay Hubara" ]
Mastering a video game requires skill, tactics and strategy. While these attributes may be acquired naturally by human players, teaching them to a computer program is a far more challenging task. In recent years, extensive research was carried out in the field of reinforcement learning and numerous algorithms were introduced, aiming to learn how to perform human tasks such as playing video games. As a result, the Arcade Learning Environment (ALE) has become a commonly used benchmark environment allowing algorithms to train on various Atari 2600 games. In many games the state-of-the-art algorithms outperform humans. In this paper we introduce a new learning environment, the Retro Learning Environment — RLE, that can run games on the Super Nintendo Entertainment System (SNES), Sega Genesis and several other gaming consoles. The environment is expandable, allowing for more video games and consoles to be easily added to the environment, while maintaining a simple unified interface. Moreover, RLE is compatible with Python and Torch. SNES games pose a significant challenge to current algorithms due to their higher level of complexity and versatility. A more extensive paper describing our work is available on arXiv.
[ "Deep learning", "Reinforcement Learning", "Games" ]
https://openreview.net/pdf?id=rJV7l2VFg
https://openreview.net/forum?id=rJV7l2VFg
ICLR.cc/2017/workshop
2017
{ "note_id": [ "B1A4_F6sx", "BJFj1Plje", "Sk4XFVUsx" ], "note_type": [ "comment", "official_review", "official_review" ], "note_created": [ 1490028598155, 1489166241227, 1489549596071 ], "note_signatures": [ [ "ICLR.cc/2017/pcs" ], [ "ICLR.cc/2017/workshop/paper99/AnonReviewer2" ], [ "ICLR.cc/2017/workshop/paper99/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}", "{\"title\": \"An interface to many games without established guidelines and protocols\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"This paper presents the Retro Learning Environment (RLE), which provides a unified interface to Atari, SNES, Sega Genesis and other consoles. Good interfaces to interesting reinforcement learning tasks are potentially very useful. One good example is the Arcade Learning Environment (ALE) of Bellemare et al. This project could go on to have a similar effect on the reinforcement learning community, but not in its current form. ALE succeeded partly because it provided a thorough set of benchmarks and suggested evaluation protocols. In order to encourage wide adoption of RLE the paper should provide guidelines for sets of interesting task, how to deal with practical issues like selection of difficulty levels, more benchmarks for interesting games, etc.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Good workshop candidate, though limited scientific contribution\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"This paper presents a new environment similar to the Arcade Learning Environment, but able to support more consoles and games. It also reports performance on four SNES games, with three standard deep reinforcement learning algorithms, showing that reaching human-level performance is more challenging than on Atari games.\", \"i_believe_this_is_a_good_fit_for_a_workshop\": \"although it has limited scientific contribution in its current form, having a new environment with the ability to play more complex games may eventually lead to significant advances in the field, so it is good to promote it. That being said, although the potential is there, it is hard to tell whether this environmnent will \\\"take off\\\" in the community, since the \\\"competitors\\\" (ALE, OpenAI Universe, DeepMind Lab...) currently enjoy higher popularity. In my opinion the only way to know is to wait and see... and giving this environment a chance to get known can't hurt!\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
HkXKUTVFl
Explaining the Learning Dynamics of Direct Feedback Alignment
[ "Justin Gilmer", "Colin Raffel", "Samuel S. Schoenholz", "Maithra Raghu", "and Jascha Sohl-Dickstein" ]
Two recently developed methods, Feedback Alignment (FA) and Direct Feedback Alignment (DFA), have been shown to obtain surprising performance on vision tasks by replacing the traditional backpropagation update with a random feedback update. However, it is still not clear what mechanisms allow learning to happen with these random updates. In this work we argue that DFA can be viewed as a noisy variant of a layer-wise training method we call Linear Aligned Feedback Systems (LAFS). We support this connection theoretically by comparing the update rules for the two methods. We additionally empirically verify that the random update matrices used in DFA work effectively as readout matrices, and that strong correlations exist between the error vectors used in the DFA and LAFS updates. With this new connection between DFA and LAFS we are able to explain why the "alignment" happens in DFA.
[ "Theory", "Supervised Learning", "Optimization" ]
https://openreview.net/pdf?id=HkXKUTVFl
https://openreview.net/forum?id=HkXKUTVFl
ICLR.cc/2017/workshop
2017
{ "note_id": [ "Hk8rOYasx", "rkRXbMF9x", "SyPh7Dlse", "H1oZWpVsg" ], "note_type": [ "comment", "official_review", "official_review", "comment" ], "note_created": [ 1490028606067, 1488687397588, 1489167279206, 1489453314686 ], "note_signatures": [ [ "ICLR.cc/2017/pcs" ], [ "ICLR.cc/2017/workshop/paper109/AnonReviewer2" ], [ "ICLR.cc/2017/workshop/paper109/AnonReviewer1" ], [ "~Justin_Gilmer1" ] ], "structured_content_str": [ "{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}", "{\"title\": \"Interesting work\", \"rating\": \"9: Top 15% of accepted papers, strong accept\", \"review\": \"This paper provides a constructive proof to indicate that Direct Feedback\\nAlignment (DFA) can be viewed as a noisy variant of a layer-wise training method called Linear Aligned Feedback Systems (LAFS). It also empirically verified that the random update matrices used in DFA are effectively readout matrices.\\n\\nThis work is interesting because it explained why DFA works and what is the limitation of DFA.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"This paper argues that Direct Feedback Alignment (DFA) has a very similar update rule to Linear Aligned Feedback Systems (LAFTS) which is a layer-wise training method. Part of the argument is through similarities in the update equation and they perform experiments to verify that the different part in the update rule behaves similarly in DFA and LAFST. Unfortunately, since I'm not familiar with previous work on methods similar to DFA, I don't understand why they are useful in general (I'm not convinced that being biologically plausible is an acceptable reason for ML research).\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Adding additional related work, and other uses of DFA\", \"comment\": \"Thank you for your review! Based upon your feedback, we have now updated our abstract to include further background on the usefulness of DFA and related techniques. In particular, DFA allows for parallelized updates to weight matrices which means that it could yield greater computational efficiency than traditional BP. We have also expanded our discussion on related works to include additional work on biologically plausible machine learning.\"}" ] }
rJzabxSFg
Intelligent synapses for multi-task and transfer learning
[ "Ben Poole*", "Friedemann Zenke*", "Surya Ganguli" ]
Deep learning has led to remarkable advances when applied to problems in which the data distribution does not change over the course of learning. In stark contrast, biological neural networks exhibit continual learning, solve a diversity of tasks simultaneously, and have no clear separation between training and evaluation phase. Furthermore, synapses in biological neurons are not simply real-valued scalars, but possess complex molecular machinery that enable non-trivial learning dynamics. In this study, we take a first step toward bringing this biological complexity into artificial neural networks. We introduce intelligent synapses that are capable of accumulating information over time, and exploiting this information to efficiently protect old memories from being overwritten as new problems are learned. We apply our framework to learning sequences of related classification problems, and show that it dramatically reduces catastrophic forgetting while maintaining computational efficiency.
[ "intelligent synapses", "information", "learning intelligent synapses", "deep learning", "advances", "problems", "data distribution", "course", "stark contrast", "biological neural networks" ]
https://openreview.net/pdf?id=rJzabxSFg
https://openreview.net/forum?id=rJzabxSFg
ICLR.cc/2017/workshop
2017
{ "note_id": [ "H1jP_Fajl", "ryCeA7Bse", "ByOCI7gsx" ], "note_type": [ "comment", "official_review", "official_review" ], "note_created": [ 1490028643133, 1489481205878, 1489151696284 ], "note_signatures": [ [ "ICLR.cc/2017/pcs" ], [ "ICLR.cc/2017/workshop/paper163/AnonReviewer1" ], [ "ICLR.cc/2017/workshop/paper163/AnonReviewer2" ] ], "structured_content_str": [ "{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}", "{\"title\": \"Nice idea, suitable for workshop track\", \"rating\": \"7: Good paper, accept\", \"review\": \"The authors introduce a penalty term that encourages neuron stability.\\nThe idea is clear and the experiments illustrate the point.\\n\\nThough not groundbreaking, I do like the paper and would recommend it for publication.\", \"given_the_following_will_be_done\": [\"Please discuss the exact computational cost of the method in absolute numbers.\", \"An analysis of the fraction of weights receiving almost no updates on the given tasks. It would be interesting to see, if any surprising observations can be made here.\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Nice work, what about larger datasets?\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"This submission presents a technique which allows to train neural networks subsequently on a series of task without loosing too much performance on the tasks which have already been trained. The methods estimates the importance of synapses in training by keeping track of the gradient of the loss with respect to the synapse: large gradients indicate that the synapse is important for the task and therefore this synapse will be prevented from changing too much when training for subsequent tasks.\\n\\nI think the method provides a very nice approach to multi-task training and is worth presenting at ICLR. I am however missing an analysis on more state-of-the-art datasets like ImageNet or at least TinyImageNet since a lot of interesting methods fail to work (or get too expensive) on large datasets.\\n\\nOn first reading the paper I was surprised that the gradient would serve as a good estimate for the importance of the synapse since it should be approximately zero at the end of the optimization (for comparison, fisher information is related to the hessian matrix). But since the authors are integrating the gradient over the course of optimization, a large value should mean that the gradient used to be large in the optimization and is small at the end, which one might take as a second order statement.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
H1mw7K7Ke
Annealed Generative Adversarial Networks
[ "Arash Mehrjou", "Saeed Saremi" ]
Generative Adversarial Networks (GANs) have recently emerged as powerful generative models. GANs are trained by an adversarial process between a generative network and a discriminative network. It is theoretically guaranteed that, in the nonparametric regime, by arriving at the unique saddle point of a minimax objective function, the generative network generates samples from the data distribution. However, in practice, getting close to this saddle point has proven to be difficult, resulting in the ubiquitous problem of “mode collapse”. The root of the problems in training GANs lies in the unbalanced nature of the game being played. Here, we propose to level the playing field and make the minimax game balanced by heating the data distribution. The empirical distribution is frozen at temperature zero; GANs are instead initialized at infinite temperature, where learning is stable. By annealing the heated data distribution, we initialize the network at each temperature with the learned parameters of the previous, higher temperature. We posit a conjecture that learning under continuous annealing in the nonparametric regime is stable, and propose an algorithm as a corollary. In our experiments, the annealed GAN algorithm, dubbed beta-GAN, trained with the unmodified objective function, was stable and did not suffer from mode collapse.
[ "Deep learning", "Unsupervised Learning" ]
https://openreview.net/pdf?id=H1mw7K7Ke
https://openreview.net/forum?id=H1mw7K7Ke
ICLR.cc/2017/workshop
2017
{ "note_id": [ "By3DMk8oe", "BJZTVFrjg", "SyioDzo5g", "BygVgyIse", "By363mwjg", "B1IVoa8ox", "rkgXdYpje", "HyO3zXAcl" ], "note_type": [ "comment", "official_review", "official_review", "comment", "comment", "official_comment", "comment", "comment" ], "note_created": [ 1489527395577, 1489503417272, 1488820130878, 1489526823591, 1489611971832, 1489586990327, 1490028567861, 1489019568443 ], "note_signatures": [ [ "~Saeed_Saremi1" ], [ "ICLR.cc/2017/workshop/paper45/AnonReviewer1" ], [ "ICLR.cc/2017/workshop/paper45/AnonReviewer2" ], [ "~Saeed_Saremi1" ], [ "~Arash_Mehrjou1" ], [ "ICLR.cc/2017/workshop/paper45/AnonReviewer2" ], [ "ICLR.cc/2017/pcs" ], [ "~Arash_Mehrjou1" ] ], "structured_content_str": [ "{\"title\": \"revisions\", \"comment\": \"We added a discussion section with new references, expanding the response below. We removed Figure 1 for space.\"}", "{\"title\": \"relation and comparisons to other work missing\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"This work proposes a new method of stabilizing GAN training and encouraging the generator to cover the data distribution better (i.e. less mode dropping). As I understand it they do so by gradually annealing the data distribution so that it initially has high entropy and gradually move towards the true distribution.\\n\\nWhile this approach is potentially promising, the paper in its current form lacks and discussion of its relation to other approaches to stabilizing GANs. I think this method needs to be placed in context to be better understood.\\n\\nFinally, I have one question: in algorithm 1, the generator network is written as a function of beta, but as I understand it only the data distribution is annealed, not the generator distribution? Please explain.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Fails to address the relevant prior literature\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"The authors present a heuristic for improving the stability of training GANs by annealing the data distribution. With the exception that they do not add noise to samples from the generator (though I could be mistaken) the method seems identical to the instance noise heuristic proposed by Sonderby et al (http://www.inference.vc/instance-noise-a-trick-for-stabilising-gan-training/) and analyzed formally in Arjovsky and Bottou (2017), section 3. The authors do not discuss this connection anywhere. The paper also seems to be missing a discussion section. If these omissions were fixed and the contribution beyond instance noise were explained then this paper could be considered for acceptance, but not in its current state.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Response to AnonReviewer1\", \"comment\": \"Thank you for your comments. We added the discussion section to the paper clarifying our formulation in relation to other related works that have addressed stabilization of GAN training.\", \"regarding_the_question\": \"Both $\\\\theta_D$ and $\\\\theta_G$ depend on beta implicitly, since the generator and the discriminator are trained while the data distribution is being annealed. 
The notation in Algorithm 1 emphasizes this implicit dependence.\"}", "{\"title\": \"Response to AnonReviewer2\", \"comment\": \"We appreciate your comments and acknowledgments of the differences of our method with \\u201cinstance noise\\u201d in that we only annealed the data distribution. We agree that in the current formulation, there is a clear correspondence between inverse beta (temperature) and the noise level. But we strongly believe that having \\\\emph{additive} noise is not crucial here, in that any quasi-static process can be used in this framework. This is in contrast to the theory developed by Arjovsky and Bottou where noise must be additive. We did not have enough space in the extended abstract to elaborate on the theory and provide more experiments. This would be treated in the extended version. We like to emphasise again that we developed this algorithm independently of \\u201cinstance noise\\u201d. Finally, we did not think of AIS reference as unnecessary -- the inspiration for annealing inverse temperature geometrically (instead of annealing noise linearly) was from that paper.\"}", "{\"title\": \"Still seems to be some confusion over noise vs annealing\", \"comment\": \"Thanks for adding the requested revisions to the paper. There still seems to be some confusion over terminology however. You say that annealing is different from adding noise - yet the equation where you define the heated data distribution is just that of a Gaussian mixture model with centers at the data, which you would sample by picking elements of the dataset and...adding noise. How are you even able to sample in the infinite temperature limit? You say that temperature is more general than noise variance. In other contexts that may be true, but in your application here the temperature is literally *identical* to the variance of a Gaussian. They are one and the same. And the reference to annealed importance sampling seems wholly superfluous - you aren't computing an estimate of a partition function or running an MCMC chain of any sort.\\n\\nI sympathize with your desire to differentiate your method from instance noise. \\\"Adding noise\\\" sounds like a hack. \\\"Annealing\\\" sounds like you are forging a sword of Damascene steel. But in this context, if I understand your paper correctly, they are one and the same. It seems to me that the only distinction here is that you add the noise only to the data, and reduce the variance of the noise as training progresses. That's not nothing. But it is not fair to try to present it as somehow dramatically different from other approaches.\"}", "{\"decision\": \"Reject\", \"title\": \"ICLR committee final decision\"}", "{\"title\": \"Response to AnonReviewer2\", \"comment\": \"Thanks for your comments and the pointers to the literature. We approached this problem from a principled perspective, rooted in statistical mechanics and were not aware of the heuristics mentioned in the blog post and in the paper \\u201cAmortised MAP Inference for Image Super-resolution\\u201d. Regarding the first comment, we would like to emphasize that we did not add any noise to samples from the generator. Unfortunately, there is no space for comprehensive literature review in a three-page abstract, but we will briefly outline some important distinctions below, and expand on them in the long version of the work.\\n\\n\\u2013 We should emphasize that temperature is a more general concept than noise variance. 
We start the training at beta=0 (infinite temperature), where all data distributions become uniform distribution and GAN stability is achieved in a unified framework for all f-divergences.\\n\\u2013 In Arjovsky et. al. (2017) noise parameters were treated as hyper-parameters related to the distance between data distribution and generative distribution. This is not needed in our framework, where we start from infinite temperature. \\n\\u2013 In Sonderby et. al. (2016) the noise was annealed in a linear way. We think it is more physical to anneal the inverse temperature in a geometric fashion. This intuition is rooted in physics and in the classic work \\u201cAnnealed Importance Sampling\\u201d by Radford Neal (2001). This last minor issue could become important for large datasets.\"}" ] }
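As the thread above makes explicit, sampling from the heated data distribution at inverse temperature beta amounts to picking a data point and adding Gaussian noise with variance 1/beta, and training proceeds while beta is annealed geometrically toward the frozen, zero-temperature empirical distribution. The sketch below covers only that sampling and schedule; the starting beta, growth factor, and batch size are assumptions, and the loop starts at a small finite beta because the infinite-temperature limit cannot be sampled literally.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=3.0, scale=0.5, size=(1000, 2))   # stand-in "real" dataset

def sample_heated(data, beta, n):
    """Draw n samples from the heated data distribution: a Gaussian mixture
    centred on the data points with per-component variance 1/beta."""
    idx = rng.integers(0, len(data), size=n)
    noise = rng.normal(scale=np.sqrt(1.0 / beta), size=(n, data.shape[1]))
    return data[idx] + noise

beta, beta_max, growth = 0.01, 1e4, 1.5     # hot start, geometric annealing (assumed constants)
while beta < beta_max:
    batch = sample_heated(data, beta, n=64)
    # ... run ordinary GAN discriminator/generator updates on `batch` here ...
    print(f"beta={beta:10.2f}  heated-batch std={batch.std():5.2f}")
    beta *= growth
```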
rydQ6CEKl
A Note on Deep Variational Models for Unsupervised Clustering
[ "Rui Shu", "James Brofos", "Curtis Langlotz" ]
Recently, the Gaussian Mixture Variational Autoencoder (GMVAE) has been introduced to handle unsupervised clustering (Dilokthanakul et al., 2016). However, the existing formulation requires the introduction of the free bits term into the objective function in order to overcome the effects of the uniform prior imposed on the latent categorical variable. By considering our choice of generative and inference models, we propose a simple variation on the GMVAE that performs well empirically without modifying the variational objective function.
[ "Deep learning", "Unsupervised Learning" ]
https://openreview.net/pdf?id=rydQ6CEKl
https://openreview.net/forum?id=rydQ6CEKl
ICLR.cc/2017/workshop
2017
{ "note_id": [ "HJcCFwloe", "Sk1LdtTsx", "BJv1QWbol" ], "note_type": [ "official_review", "comment", "official_review" ], "note_created": [ 1489168849806, 1490028614769, 1489208030678 ], "note_signatures": [ [ "ICLR.cc/2017/workshop/paper125/AnonReviewer1" ], [ "ICLR.cc/2017/pcs" ], [ "ICLR.cc/2017/workshop/paper125/AnonReviewer2" ] ], "structured_content_str": [ "{\"title\": \"The authors expose an intriguing aspect of the behavior of (GM)VAEs\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"This paper provides an analysis of different VAE architectures with both Gaussian and categorical latent variables from the perspective of unsupervised clustering.\\nThe authors propose a simple modification of the GMVAE model which removes the need of fiddling with the ELBO in order to make the model use the discrete latent variables.\\nThe authors demonstrate empirically (on MNIST) that small differences in the model's architecture can have a substantial impact on how the discrete latent variables are used as measured by classification accuracy and conditional entropy.\\n\\nNovelty\\n\\nThe contribution of this paper is in noticing that a relatively small change to the architecture of the GMVAE model allows it to more efficiently use the discrete latent variables while retaining a principled loss function.\\n\\nClarity\\n\\nThe different architectures explored, the experiment performed on MNIST and the experimental results are clearly explained. \\nThe analysis/discussion of model properties lacks some clarity. In particular, the analysis in section 3 offers little insight into the problem.\\n\\nSignificance\\n\\nWhile the authors expose an intriguing aspect of the behavior of (GM)VAEs with discrete latent variables, the effect of these properties in semi-supervised learning is only conjectural at this point as no experiments were performed to analyse that.\\nMore experiments are needed to assess the relevance of the observations made in this work.\\nAlso note that, while having an interesting effect on how the model interacts with the discrete latent variables, the resulting ELBO of the proposed GMVAE is worse than that of M2.\\n\\nQuality\\n\\nThe paper is ok overall. The text could be improved by focusing on the experimental results and on less speculative analysis of the model properties.\\nThe strength of this paper would substantially improve if the authors had performed semi-supervised classification experiments.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"decision\": \"Reject\", \"title\": \"ICLR committee final decision\"}", "{\"title\": \"Review: A Note on Deep Variational Models for Unsupervised Clustering\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"The authors seek to analyze (deep) discrete latent variable models\\nwith variational inference---with specific focus on \\\"unsupervised\\nclustering\\\". However, the work lacks attribution to related work in\\nthis problem, offer a nonstandard baseline, and do not discuss\\nscalability concerns of their proposal.\\n\\nThere is no discussion of any relevant work about (deep) discrete\\nlatent variable models for clustering. This makes it difficult to\\nunderstand what insights to glean from the paper. For example,\\nRanganath et al. (2015) have applied deep exponential families with\\nvariational inference for learning mixed memberships in text\\ncorpora. Johnson et al. 
(2016) have studied Gaussian mixture models\\nwith neural networks. There are other works in the area of topic\\nmodels and deep learning (e.g., Chien and Lee, 2017)---which get at\\neven more complicated (mixed membership) structures than the single\\nassignment in Gaussian mixtures. It would be useful to mention, if not\\ncompare to, at least one model/inference that has been applied to this\\ndomain.\\n\\nFollowing the above, the authors analyze a flaw in a model no one has\\nactually used for unsupervised clustering leveraging neural nets.\\nComparing to, e.g., Ranganath et al. (2015), would be more sensible if\\nthe aim is to better understand the differences in models. Comparing\\nto, e.g., Johnson et al. (2016), would be more sensible if the aim is\\nto better understand the differences in inference.\\n\\nRegarding the algorithm itself, the authors don't mention the scalability\\nissues present in the original GMVAE paper\\u2014as also described by the\\nreviewers (https://openreview.net/forum?id=SJx7Jrtgl&noteId=SJx7Jrtgl).\\n\\nMinor remarks\\n\\n+ I would recommend against the use of \\\"variational models\\\" in the\\n title. This work analyzes specific extensions of the variational\\n auto-encoder. It is not an interchangeable term with variational\\n models, which are about rich posterior approximations (e.g., Tran et\\n al., 2016).\\n\\nReferences\\n\\nChien, Jen-Tzung, and Chao-Hsi Lee. \\\"Deep Unfolding for Topic Models.\\\" IEEE Transactions on Pattern Analysis and Machine Intelligence (2017).\\n\\nJohnson, M. J., Duvenaud, D., Wiltschko, A. B., Datta, S. R., & Adams, R. P. (2016). Composing graphical models with neural networks for structured representations and fast inference. In Neural Information Processing Systems.\\n\\nRanganath, R., Tang, L., Charlin, L., & Blei, D. M. (2015). Deep Exponential Families. In Artificial Intelligence and Statistics.\\n\\nTran, D., Ranganath, R., & Blei, D. M. (2016). The Variational Gaussian Process. In International Conference on Learning Representations.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
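The first review above evaluates how well the discrete latent variable is used, "as measured by classification accuracy and conditional entropy". The sketch below computes those two metrics from predicted cluster assignments and ground-truth labels; assigning each cluster to its majority label is one common convention and not necessarily the one used in the paper.

```python
import numpy as np
from collections import Counter

def cluster_metrics(clusters, labels):
    """Unsupervised-clustering accuracy and conditional entropy H(label | cluster)."""
    clusters, labels = np.asarray(clusters), np.asarray(labels)
    n = len(labels)
    correct, cond_ent = 0.0, 0.0
    for c in np.unique(clusters):
        members = labels[clusters == c]
        counts = np.array(list(Counter(members).values()), dtype=float)
        p = counts / counts.sum()
        correct += counts.max()                       # credit the cluster's majority label
        cond_ent += (counts.sum() / n) * -(p * np.log(p)).sum()
    return correct / n, cond_ent

# Toy example: 3 clusters over 2 true classes.
clusters = [0, 0, 0, 1, 1, 2, 2, 2]
labels   = [0, 0, 1, 1, 1, 0, 0, 0]
print(cluster_metrics(clusters, labels))
```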
B1VWyySKx
Char2Wav: End-to-End Speech Synthesis
[ "Jose Sotelo", "Soroush Mehri", "Kundan Kumar", "Joao Felipe Santos", "Kyle Kastner", "Aaron Courville", "Yoshua Bengio" ]
We present Char2Wav, an end-to-end model for speech synthesis. Char2Wav has two components: a reader and a neural vocoder. The reader is an encoder-decoder model with attention. The encoder is a bidirectional recurrent neural network that accepts text or phonemes as inputs, while the decoder is a recurrent neural network (RNN) with attention that produces vocoder acoustic features. Neural vocoder refers to a conditional extension of SampleRNN which generates raw waveform samples from intermediate representations. Unlike traditional models for speech synthesis, Char2Wav learns to produce audio directly from text.
[ "Speech", "Deep learning", "Applications" ]
https://openreview.net/pdf?id=B1VWyySKx
https://openreview.net/forum?id=B1VWyySKx
ICLR.cc/2017/workshop
2017
{ "note_id": [ "rJJjwqgie", "H1x82RNog", "BkhI7Clix", "ryeL_t6ox", "B1JnhKlsx" ], "note_type": [ "official_review", "comment", "comment", "comment", "official_review" ], "note_created": [ 1489180566569, 1489460296015, 1489195860185, 1490028616418, 1489177766948 ], "note_signatures": [ [ "ICLR.cc/2017/workshop/paper128/AnonReviewer2" ], [ "~Jose_Sotelo1" ], [ "~Jose_Sotelo1" ], [ "ICLR.cc/2017/pcs" ], [ "ICLR.cc/2017/workshop/paper128/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"TTS!\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"review\": \"This paper combines several ideas (seq2seq+attention, SampleRNN) to do TTS!\\n\\nThis reviewer is a little bit lost following the equations, especially where did \\\\phi come from (Eqn 6)? I know there is a space constraint, but more details are needed for the full paper version.\\n\\nThe model requires pre-training the reader and neural vocoder separately -- would be more impressive if this was trained end-to-end from scratch.\\n\\nOverall -- this paper is still good progress towards end-to-end TTS. I feel the title is still a bit misleading since the model isn't truely trained end-to-end... but the term end2end seems so overloaded in our community now...\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Thank you for your comments!\", \"comment\": \"Thank you for your comments and for your review!\\n\\nSince we did not want to mislead the public about generated samples, we did not add samples of the neural vocoder using true vocoder features. However, we added them after reading your review as this might help judge/attribute where the different artifacts are coming from. The section is called:\\n'Neural Vocoder using ground-truth vocoder parameters'.\\n\\nAs you mention in your comment, samplernn adds some kind of background noise when generating the audio. We are now working on ways to solve this issue.\\n\\n(A personal note: I thought that wavenets produced this kind of artifact as well. I checked their samples some time ago and it was clear to me that they were also having this issue. I listened to them just right now and I couldn't hear this white-looking noise anymore. It's still a bit there, but not as much as I remember. Maybe they solved this in some way (reducing the temperature when sampling or some other thing) or maybe my memory is biased.\\nHere's an example of their samples:\", \"https\": \"//storage.googleapis.com/deepmind-media/pixie/us-english/wavenet-1.wav )\\n\\nThis is a proof of concept work and we did not do a comprehensive hyperparameter optimization. We are currently working on this and hopefully results will be ready for the conference. Another thing to note is that we train on relatively small datasets (<10 h audio). So probably these results can be improved by using a larger dataset.\\n\\nThanks again for your review and we hope that this comments provided more information!\"}", "{\"title\": \"Thank you for your comments!\", \"comment\": \"First of all, thank you for reading our paper and for your comments.\\n\\nUnfortunately 3 pages it's a bit constraining, so we could not add as much details as we would have liked. We are working on a longer arxiv version that should clarify all the details.\\n\\nWe have received several comments regarding the end-to-end issue. The main reason why we went for vocoder processing is because of computational efficiency. 
Basically, it reduces the time dimension of the problem by 80. However, we agree that it would be even better if we can train without it and we are currently exploring alternatives to do that.\"}", "{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}", "{\"title\": \"Proof of concept neural TTS\", \"rating\": \"7: Good paper, accept\", \"review\": \"The paper proposes a neural TTS system in which the synthesis happens in two stages. First, vocoder features are predicted by an attention-based seq2seq network. Second, the Vocoder features are synthesized into speech using a hierarchical RNN. The approach combines recent developments in waveform generation (Wavenet, SampleRNN) and simplifies the extraction of vocoder features: a single attention-based RNN replaces a pipeline that typically used a duration prediction RNN that does the alignment and a acoustic features prediction RNN that upsamples the phoneme/character sequence according to the duration prediction network.\\n\\nThe provided samples demonstrate the feasibility of the approach. However, many artefacts are present. Unfortunately, error attribution (frontend/backend network) is difficult because most of the samples use vocoder output, and (please correct me it I am wrong), no samples are generated using the SampleRNN over ground-truth vocoder features.\\n1. Only Spanish samples use the SampleRNN output, for all other languages the network predicts vocoder features that are synthesized using the vocoder. Comparing the samples \\\"Reader over characters with vocoder output\\\" and \\\"Char2Wav\\\" shows that the SampleRNN introduces many artefacts and is in general inferior to the Vocoder.\\n2. The frontend network produces slightly better features when applied to phonemes that to characters. The durations produced are unnatural, however this should be fixable since other LSTM-based approaches such as [Zen et al STATISTICAL PARAMETRIC SPEECH SYNTHESIS USING DEEP NEURAL NETWORKS] are used in production systems.\\n\\nDespite the deficiencies, I recommend acceptance if only to show the feasibility of assembling a TTS system using powerful recurrent neural networks and relatively limited resources.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
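Char2Wav, as described above, is a two-stage pipeline: the reader maps characters or phonemes to vocoder feature frames, and the neural vocoder expands each frame into raw waveform samples (the authors' comment above notes that working with vocoder features reduces the time dimension of the problem by 80). The skeleton below fixes only the interfaces and shapes of such a pipeline; both function bodies are placeholders, and the feature dimensionality and duration heuristic are arbitrary.

```python
import numpy as np

SAMPLES_PER_FRAME = 80   # reading the "reduces the time dimension by 80" comment as a factor of 80

def reader(text, n_vocoder_feats=80):
    """Placeholder for the attention-based seq2seq reader: text -> vocoder feature frames."""
    n_frames = 20 * len(text)                       # arbitrary stand-in for the learned alignment
    return np.zeros((n_frames, n_vocoder_feats))

def neural_vocoder(frames):
    """Placeholder for the conditional SampleRNN: feature frames -> waveform samples."""
    return np.zeros(frames.shape[0] * SAMPLES_PER_FRAME)

frames = reader("hello world")
waveform = neural_vocoder(frames)
print(frames.shape, waveform.shape)
```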
SkbJrSVFe
Neu0
[ "Karthik R", "Aman Achpal", "Vinayshekhar BK", "Anantharaman Palacode Narayana Iyer", "Channa Bankapur" ]
MU0 is a deterministic computer that can store data in memory and manipulate it using programs, enabling decision making. Neu0 is a neural computational core modeled around the same principles. We create an ensemble of Neural Networks capable of executing ARM code, and discuss generalizations of our framework. We showcase the advantage of our technique by correctly executing malformed instructions, and discuss efficient memory management techniques.
[ "Deep learning", "Supervised Learning", "Applications" ]
https://openreview.net/pdf?id=SkbJrSVFe
https://openreview.net/forum?id=SkbJrSVFe
ICLR.cc/2017/workshop
2017
{ "note_id": [ "HkcBHLfjx", "BJbNuKTox", "H1eMsUkie", "SyfvkAgsx", "rJ1aimxig" ], "note_type": [ "comment", "comment", "official_review", "official_review", "comment" ], "note_created": [ 1489294657662, 1490028584666, 1489099528365, 1489194842430, 1489152951043 ], "note_signatures": [ [ "~Karthik_Radhakrishnan1" ], [ "ICLR.cc/2017/pcs" ], [ "ICLR.cc/2017/workshop/paper76/AnonReviewer2" ], [ "ICLR.cc/2017/workshop/paper76/AnonReviewer1" ], [ "~Karthik_Radhakrishnan1" ] ], "structured_content_str": [ "{\"title\": \"Thank you!\", \"comment\": \"Thank you for your time and your kind review.\\n\\nWe are indeed working on further extensions and will be submitting a full paper shortly.\"}", "{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}", "{\"title\": \"nice work\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": [\"Some minor points / comments / questions:\", \"\\\"As Neural Networks cannot directly model multiplicative interactions of their inputs\\\": Why not? Isn't that a standard gating mechanism?\", \"\\\"model parameters were instead obtained using the Normal Equation\\\": That is unclear to me. What is the \\\"Normal Equation\\\"? In the next line, there is \\\"X\\\", but you never define what \\\"X\\\" is. The same for \\\"R\\\" and \\\"C\\\".\", \"The section about the arithmetic unit is very much too short. I don't really understand how it works. E.g. how do you perform the loop with repeated additions? How do you decide when to stop? That could work for integer values, but how does that work for float values? And how do you define it in a way that it is differentiable? Or is the arithmetic unit not differentiable w.r.t. its inputs?\", \"The question about differentiability should also be answered for all the other variables / intermediate values.\", \"The PC, I guess that is the address of the ARM code? I guess your branch instructions are conditional jumps, which will conditionally set the PC? How do you do that? Esp., how can that be differentiable? Or is it not differentiable? If it is not differentiable, how can you do training of the controller?\", \"You also don't really explain how you train the controller.\"], \"cons\": [\"Some parts are unclear, very much too short.\", \"Experiment section very much too short.\"], \"pros\": [\"Very nice idea and work.\", \"Open source implementation.\", \"I really like the idea and the direction of this work but I think it needs some more improvements.\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Short paper on a nice idea\", \"rating\": \"7: Good paper, accept\", \"review\": [\"The paper is (necessarily) short and includes few experiments. On the other hand:\", \"it is very well written and clear,\", \"the idea to simulate a true ARM architecture is very nice,\", \"having it correct bytecode and test this is interesting,\", \"great accompanying website to showcase the results.\", \"This is more than enough for a workshop acceptance, I hope these ideas will be developed further with more experiments and made into a full paper.\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Thank you for appreciating our idea!\", \"comment\": \"Thank you for the review!\\n\\nA large portion of the feedback stems from the fact that the sections detailing each architectural element are too short. 
Unfortunately so, given the stringent 3 page limit, it was impossible to fit in any more detail, and hopefully our responses are adequately able to address your questions. \\n \\n1. Since Feed-Forward Neural Networks compute a linear combination of their inputs (W1*x1 + W2*x2) followed by a nonlinearity, they cannot directly model multiplicative interactions. As an alternative, we chose to model multiplication as repeated addition, but we do agree that we could alternatively create an arbitrary computational graph with multiplication as a gate (with errors flowing back as a gradient switch) for this task.\\n\\n2. Normal equation is an analytical procedure used to obtain optimal parameters for a linear problem. If we define Y = W.X, we can find the optimal weight W by (X^-1)*Y. We used the pseudoinverse and found the parameter W. Here X contains our input samples (Each sample having the 2 operands) and Y is the expected result. Following Neural Programmer [1], we perform all 3 operations (Addition, subtraction and multiplication) and take the weighted sum based on the confidence predicted by the neural network where R is the vector with the result of all three operations and C is confidence vector. Taking a distribution over the three results allows the AU to learn which operation needs to be performed, and predict the confidence scores accordingly.\\n\\n3. The arithmetic unit also predicts a stopping value at each timestep. We stop when this value is 0. In this use case, we were able to model multiplication as repeated addition, as ARM does not support floating point numbers. However this would still work as long as the multiplier (Number of time steps) is a whole number. The AU itself was a feed-forward Neural Network, trained using the normal equations as described in 2. However, we could alternatively keep \\\"add\\\", \\\"sub\\\" and \\\"mul\\\" as primitive operations, as is done in Neural Programmer [1]. \\n\\n4. Similar to ARM, we have one register which acts as the PC. The controller reads the value of this register at each timestep and the corresponding instruction. It uses that as the encoding and updates the PC after every instruction. The reading and writing to PC happens exactly like the other registers as described in Neural Turing Machine [2].\\n\\n5. Our Controller is a One-Many LSTM. It uses the encoding to generate the list of steps to execute. The controller outputs 2 distributions at every time-step. One over all the smaller operations to perform and another over the operands. We trained the controller using Back-Propagation with the stack-trace generated for every instruction. The training is similar to the procedure followed by the Neural Program Interpreter [3]. We also introduced random noise in the gradient to make the training procedure more robust [4].\\n\\nWe are glad that you appreciated our idea and our work, and are mindful of the improvements that need to be made. Being an active area of research, we are taking steps in this direction. Some improvements that we have made over the months include using our system to execute recursive algorithms such as finding the factorial of the number, using our memory as a stack, with one of the registers acting as the stack pointer, and deploying the same architecture to generalize beyond ARM to MIPS as well (another RISC architecture.) 
Since Workshop papers favor late breaking developments and works in progress, we sought to develop on this nascent field of research by getting invaluable feedback through reviews, such as the one you have given, alongside fruitful discussions at the workshop. Unfortunately, due to lack of space we were unable to have an extensive results section within the paper, but we have made the full stack trace as well as the values of the registers and relevant memory locations at each timestep available on our GitHub website https://neu0.github.io\\n\\n\\n[1] Le, Q.V., Neelakantan, A., & Sutskever, I. (2015). Neural Programmer: Inducing Latent Programs with Gradient Descent. CoRR, abs/1511.04834.\\n\\n[2] Danihelka, I., Graves, A., & Wayne, G. (2014). Neural Turing Machines. CoRR, abs/1410.5401.\\n\\n[3] Freitas, N.D., & Reed, S.E. (2015). Neural Programmer-Interpreters. CoRR, abs/1511.06279.\\n\\n[4] Kaiser, L., Kurach, K., Le, Q.V., Martens, J., Neelakantan, A., Sutskever, I., & Vilnis, L. (2015). Adding Gradient Noise Improves Learning for Very Deep Networks. CoRR, abs/1511.06807.\"}" ] }
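The authors' response above explains two pieces of the arithmetic unit: the linear map is fitted analytically with the normal equation / pseudoinverse, and the results of addition, subtraction, and multiplication are combined as a weighted sum under confidences predicted by the network. The NumPy sketch below illustrates both pieces; the confidence vector is hard-coded here instead of being produced by a controller.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1) Normal-equation / pseudoinverse fit of a linear map Y = W @ X.
X = rng.integers(0, 100, size=(2, 500)).astype(float)   # columns hold (operand1, operand2) pairs
Y = (X[0] + X[1]).reshape(1, -1)                         # supervise on addition
W = Y @ np.linalg.pinv(X)                                # least-squares optimal weights
print(W)                                                 # approximately [[1., 1.]]

# 2) Confidence-weighted combination of the primitive operations.
def arithmetic_unit(a, b, confidence):
    results = np.array([a + b, a - b, a * b])            # add, sub, mul
    return confidence @ results                          # weighted sum over the three results

confidence = np.array([0.0, 0.0, 1.0])                   # e.g. the controller is confident it is "mul"
print(arithmetic_unit(6.0, 7.0, confidence))             # 42.0
```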
HkcpR04Yx
On the Limits of Learning Representations with Label-Based Supervision
[ "Jiaming Song", "Russell Stewart", "Shengjia Zhao", "Stefano Ermon" ]
Advances in neural network based classifiers have accelerated the progress of automatic representation learning. Since the emergence of AlexNet, every winning submission of the ImageNet challenge has employed end-to-end representation learning, and due to the utility of good representations for transfer learning, representation learning has become an important task, distinct from supervised learning. At present, this distinction is inconsequential, as supervised methods are state-of-the-art in learning transferable representations, which are widely transferred to tasks such as evaluating the quality of generated samples. In this work, however, we demonstrate that supervised learning is limited in its capacity for representation learning. Based on an experimentally validated assumption, we show that the existence of a set of features will hinder the learning of additional features. We also show that the total incentive to learn features in supervised learning is bounded by the entropy of the labels. We hope that our analysis will provide a rigorous motivation for further exploration of other methods for learning robust and transferable representations.
[ "Theory", "Deep learning", "Transfer Learning" ]
https://openreview.net/pdf?id=HkcpR04Yx
https://openreview.net/forum?id=HkcpR04Yx
ICLR.cc/2017/workshop
2017
{ "note_id": [ "H1fpSOA9e", "BkgU_Kpjl", "SymPJvSjg", "B1oBJJrog", "SkU5k5Jsx", "r1YutsNil", "S1twkq1ig" ], "note_type": [ "official_review", "comment", "official_review", "official_comment", "comment", "comment", "comment" ], "note_created": [ 1489040826540, 1490028615593, 1489493851020, 1489461058845, 1489112973946, 1489447280583, 1489112929220 ], "note_signatures": [ [ "ICLR.cc/2017/workshop/paper127/AnonReviewer2" ], [ "ICLR.cc/2017/pcs" ], [ "ICLR.cc/2017/workshop/paper127/AnonReviewer1" ], [ "ICLR.cc/2017/workshop/paper127/AnonReviewer2" ], [ "~Jiaming_Song1" ], [ "~Jiaming_Song1" ], [ "~Jiaming_Song1" ] ], "structured_content_str": [ "{\"title\": \"Unclear goal and conclusion\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"[Paper summary]\\nThe paper investigates a phenomenon that the presence of a set of features for\\na task may hinder the learning of a new set of features for another task. The\\npaper uses the well-known conditional mutual information between the task label\\n(Y) and the features to quantify the difficulty of learning a new set of\\nfeatures. \\n\\n[Clarity, Novelty]\\nThe paper is not clearly written. I spent a lot of time guessing the main\\nmessage of the paper. If I understand correctly, the main message is \\\"The\\npresence of some features may make the learning of a new set of features more\\ndifficult.\\\" However, this is not at all emphasized. In fact, the introduction\", \"mentions_a_seemingly_different_claim\": \"\\\"generative models have greater potential\\nfor representation learning (compared to supervised learning).\\\" \\n\\nThe notation of the conditional mutual information is well known, as well as\\nits decomposition into the difference of two conditional entropy terms. The\\nlimits the novelty of the paper. The contribution from the paper is rather the\\nuse of this quantity to quantify the difficulty of learning new features, as\\nstated. However, experimental results do not seem to support this.\\n\\n[Major comments/questions]\\n\\n* The paper seems to be saying that the entropy of the task label H(Y) (in\\n Eq.2) is the limit of learning capacity (see the conclusion). I do not quite\\nunderstand why this should be the right interpretation. I see H(Y) as the total\\ninformation of the task label. The fact that the mutual information I(Y,\\nf_1,...,f_k) is bounded above by H(Y), to me, means that one cannot come up\\nwith features f_1,..,f_k to extract more information that the full amount\\ncontained in Y. But, it does not mean that there is a limit in learning\\ncapacity of an algorithm. It does mean that there is a limit to what a learner\\nhas to know to learn to predict Y.\\n\\n* Section 2.1: Experiment to show feature competition:\\nThe writing in this section is very difficult to follow. It might be better to\\nfirst state what the goal of the experiment is. The paper does mention \\\"feature\\ncompetition\\\" as the goal. But then, this term was not clearly defined.\\n\\n* Section 2.1: In Eq. 2, Y is Eq. 2 is a fixed task label random vector. In\\n Section 2.1, there are two phases, each phase using only one component of\\nY=(Y1, Y2). How do you justify this in the context of Eq. 2 which uses a fixed\\nY=(Y1, Y2)?\\n\\n* Section 2.1: It is unclear why you \\\"completely corrupt\\\" the left digit.\\n Please state the goal first.\\n\\n[Detailed suggestions]\\n\\n* Section 2: I (mutual information) is not defined. In fact, that it is the\\n conditional mutual information is not even mentioned. 
Eq 3 (difference of two\\nentropies) should be moved to Section 2 to match the text description there.\\n\\n* Section 2: \\\"But then at least some of those features must be very hard to\\n learn..\\\" I could not follow the reasoning behind this at all.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"decision\": \"Reject\", \"title\": \"ICLR committee final decision\"}", "{\"title\": \"Point being made is neither terribly surprising nor well argued.\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"If I'm understanding correctly, the argument is essentially that labels convey at most log K bits given K classes -- a common refrain from proponents of unsupervised pretraining about 10 years ago, and therefore not a very novel contribution. The paper does a poor job of explaining how the experiments performed support this informal hypothesis.\\n\\nDue to both the lack of novelty and the ineffectiveness at communicating the points being made, I do not believe this is appropriate for the workshop track. I'd encourage the authors to try and flush out their arguments in slightly longer form and post it on arXiv, with a focus on clearly leading the reader from premise to experimental basis to conclusion.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"read the response & additional comments/questions\", \"comment\": \"Thanks for your response. Here are some additional stimulating questions whose answers might be useful for your future versions.\\n\\n1. What happens when none of the features contains any information for the prediction task?\\n2. What is the relationship of your \\\"feature competition\\\" framework to the concept of sufficient statistics? Is it correct that when you say it is \\\"difficult\\\" to learn new features, you actually mean the current set of features is already almost sufficient? In your context, the sufficient statistics will be for the conditional distribution p(Y|X) where Y=label, X=input.\"}", "{\"title\": \"Responses to individual questions\", \"comment\": \"=== Responses to Major Comments ===\", \"q\": \"\\\"But then at least some of those features must be very hard to learn..\\\" I could not follow the reasoning behind this at all.\", \"a\": \"According to the Pigeonhole principle, if the sum of \\\"signal\\\" is bounded by H(Y), and there is k features, then at least some of the feature will have \\\"signal\\\" less than H(Y)/k. If k is large, then the \\\"signal\\\" is small and under our assumption these features will be hard to learn. We will clarify this, as well as the implications of Equation 2.\"}", "{\"title\": \"Updated paper\", \"comment\": \"We updated the paper according to comments of AnonReviewer2. Namely, we moved the argument for GANs to the appendix, and spent more space on the clarity of the supervised learning argument and the experiments.\", \"the_structure_of_the_updated_paper_is_as_follows\": \"1. We propose an assumption that relates \\\"the learnability\\\" of a feature to its \\\"signal\\\", which is the conditional mutual information.\\n2. If the assumption holds, then \\\"the feature competition\\\" phenomenon and \\\"an upper bound for the total incentive to learn features\\\" would exist in supervised feature learning.\\n3. 
We validate our assumption through an experiment, which suggests a high correlation between \\\"the learnability\\\" of a feature and its \\\"signal\\\".\"}", "{\"title\": \"clarifications on problem, contributions, and experiments\", \"comment\": \"Thanks for your comments.\\n\\nYour summary is correct - we study when \\\"the presence of a set of features for a task may hinder the learning of additional features\\\", which we call\\\"feature competition\\\".\\n\\nThe claim for generative models to have greater potential for representation learning (compared to supervised learning) is an additional conclusion that we felt is worth noting. We will revise our writing to emphasize the main message on supervised feature learning in the main article, and move the discussion about generative models entirely to the appendix.\\n\\n\\n=== Novelty ===\\n\\nIntuitively, let us consider a dataset with images of \\\"cats and dogs\\\". Our questions are: if we train using supervised methods to distinguish cats vs dogs, will we learn all the features that are related to cats and dogs? Will it learn additional features, such as a \\u201ctable\\u201d feature, even if it appears in the input data but is only slightly correlated with cats?\\n\\nWe informally use the term \\\"learnability\\\" to indicate the degree of incentive to learn features. The intuition is that features that add more predictive power to the current model (over a supervised task) have higher incentives to be learned.\\n\\n1. We propose an assumption that relates \\\"the learnability\\\" of a feature to its \\\"signal\\\", which is the conditional mutual information.\\n\\n2. If the assumption holds, then \\\"the feature competition\\\" phenomenon and \\\"an upper bound for the total incentive to learn features\\\" would exist in supervised feature learning.\\n\\nImagine learning features one by one. The existence of a set of features will decrease the conditional mutual information between a new feature and the label, and reduce the \\\"signal\\\", hence the \\\"learnability\\\". If the \\\"signal\\\" for a particular feature is small (or even worse, zero), then the model would receive little benefit in predicting the label correctly, hence it is unlikely for the model to learn this feature over others.\\n\\nEquation 2 states that the sum of \\\"signal\\\" is bounded by H(Y), which sets an upper bound to the total incentive to learn features in supervised learning, under our assumption. Once that upper bound is reached, there will be no incentive for learn additional features.\\n\\nIntuitively, if we have an \\\"eye\\\" feature that already allows us to discriminate cats vs dogs perfectly, there would be no incentive to learn an additional \\\"mouth\\\" feature, even though it is also highly related to the current task. Learning additional features that not directly related to the task, such as \\\"tables\\\", would be even more difficult.\\n\\n3. 
We validate our assumption through an experiment, which suggests a high correlation between \\\"the learnability\\\" of a feature and its \\\"signal\\\".\", \"the_experiment_process_is_as_follows\": \"1) We have two phases, \\\"feature extraction\\\" and \\\"feature evaluation\\\" using a \\u201cleft digit\\u201d and a \\u201cright digit\\u201d as inputs in both phases.\\n2) In \\\"feature extraction\\\", we train (left digit, right digit) --> (left label) to obtain features.\\nWe manually control the conditional mutual information (\\\"signal\\\") between (right input, left label) with two mechanisms. \\nArtificially Induce correlation between (left label, right label), and thus also between (right digit, left label), which will increase the \\u201csignal.\\u201d\\nCorrupting the left input by some probability, which will increase the signal. (left input becomes less informative about left label)\\n\\n3) In \\\"feature evaluation\\\", we measure the quality of features corresponding to the right input learned in \\u201cfeature extraction\\u201d, by training a one-layer network (left feature, right feature) --> (right label). We use \\\"test accuracy\\\" as a means to measure the overall quality of the features learned. (\\\"learnability\\\")\", \"an_graphical_explanation_to_the_experiment_can_be_seen_in_http\": \"//tsong.me/public/img/blog/competition.png\\n\\nIn Figure 1, we see a strong correlation between \\\"signal\\\" and \\\"test accuracy\\\", which suggests that features with higher \\\"signal\\\" will have higher incentive to be learned.\", \"some_corner_cases_in_figure_1\": \"\", \"bottom_row\": \"The right label (hence also right input) has no correlation with the left label, hence there is no signal to learn features from the right part, and would perform no better than random initialization of the weights.\", \"top_left_corner\": \"The right input has high correlation with the \\u201cleft label\\u201d while the \\u201cleft input\\u201d doesn\\u2019t (because of added noise), this has the highest signal, since this is essentially assigning the left label to the right input.\", \"top_right_corner\": \"Both inputs have high correlation to the left label. Due to feature competition, relatively few features corresponding to the right input are learned.\\n\\n4. The properties of conditional mutual information are indeed well known, and we do not claim any contributions there. The contribution is, identifying the relationship between this quantity and the difficulty of learning new features under the assumptions made.\"}" ] }
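For readers following the "signal" discussion in the record above, the textbook identities it appeals to are worth stating. These are standard facts about conditional mutual information, not a reproduction of the paper's own Eq. 2 and Eq. 3, which do not appear in this record:

$$I(X;Y \mid Z) = H(Y \mid Z) - H(Y \mid X, Z), \qquad \sum_{i=1}^{k} I(f_i; Y \mid f_{1:i-1}) = I(f_{1:k}; Y) \le H(Y).$$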
ryBDyehOl
REBAR: Low-variance, unbiased gradient estimates for discrete latent variable models
[ "George Tucker", "Andriy Mnih", "Chris J. Maddison", "Jascha Sohl-Dickstein" ]
Learning in models with discrete latent variables is challenging due to high variance gradient estimators. Generally, approaches have relied on control variates to reduce the variance of the REINFORCE estimator. Recent work (Jang et al. 2016, Maddison et al. 2016) has taken a different approach, introducing a continuous relaxation of discrete variables to produce low-variance, but biased, gradient estimates. In this work, we combine the two approaches through a novel control variate that produces low-variance, unbiased gradient estimates. We present encouraging preliminary results on a toy problem and on learning sigmoid belief networks.
[ "Unsupervised Learning", "Reinforcement Learning", "Optimization" ]
https://openreview.net/pdf?id=ryBDyehOl
https://openreview.net/forum?id=ryBDyehOl
ICLR.cc/2017/workshop
2017
{ "note_id": [ "S1-Gutpsg", "rJWCnSLig", "B1fpfvQse" ], "note_type": [ "comment", "official_review", "official_review" ], "note_created": [ 1490028553403, 1489554633301, 1489363642193 ], "note_signatures": [ [ "ICLR.cc/2017/pcs" ], [ "ICLR.cc/2017/workshop/paper14/AnonReviewer1" ], [ "ICLR.cc/2017/workshop/paper14/AnonReviewer2" ] ], "structured_content_str": [ "{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}", "{\"title\": \"Introduces new control variate for REINFORCE gradient\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"review\": \"This paper introduces a new gradient estimator for discrete variables. The estimator is based on a conditionally marginalized control variate scheme. The control variate is based on a combination of the REINFORCE estimator and the recently proposed Concrete distribution, a continuous relaxation of the discrete disribution.\\n\\nAn important improvement upon the estimator of the Concrete distribution, is the unbiasedness. The authors show in experiments that theirs indeed result in better solutions. Unfortunately, it is not reported what the performance is at convergence.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Solid idea for variance reduction in latent variable models\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"review\": \"This paper introduces a way of training models with discrete latent variables using a low-variance unbiased stochastic estimate of the true gradient. The trick is to take the recently proposed Gumbel/softmax (or concrete) distribution, but use it to construct a control variate for the stochastic gradient.\\n\\nOverall, I find the idea both clever and convincing. Since the concrete distribution was already shown in previous work to be directly convertible into a discrete model, it ought to give a good approximation to the gradient for the discrete model. Therefore, the stochasticity in the REINFORCE gradient for the continuous model ought to be a strong control variate for the REINFORCE gradient in the discrete one. \\n\\nThe experiments show that REBAR is able to reduce the variance of the gradient estimates by almost an order of magnitude on binary MNIST (still a challenging benchmark for discrete latent variable models). The method still appears to underperform the concrete distribution itself; I expect with a bit more effort, one should be able to find cases where the discrete model pays off. But the idea seems sound, and overall I think this is a strong workshop paper.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
Bkv9FyHYx
Joint Training of Ratings and Reviews with Recurrent Recommender Networks
[ "Chao-Yuan Wu", "Amr Ahmed", "Alex Beutel", "Alexander J. Smola" ]
Accurate modeling of ratings and text reviews is at the core of successful recommender systems. While neural networks have been remarkably successful in modeling images and natural language, they have been largely unexplored in recommender system research. In this paper, we provide a neural network model that combines ratings, reviews, and temporal patterns to learn highly accurate recommendations. We co-train for prediction on both numerical ratings and natural language reviews, as well as using a recurrent architecture to capture the dynamic components of users' and items' states. We demonstrate that incorporating text reviews and temporal dynamic gives state-of-the-art results over the IMDb dataset.
[ "ratings", "reviews", "text reviews", "joint training", "recurrent recommender networks", "modeling", "core", "successful recommender systems", "neural networks" ]
https://openreview.net/pdf?id=Bkv9FyHYx
https://openreview.net/forum?id=Bkv9FyHYx
ICLR.cc/2017/workshop
2017
{ "note_id": [ "Bkw0-Yljx", "By5UdtTox", "SyII9Uyog" ], "note_type": [ "official_review", "comment", "official_review" ], "note_created": [ 1489174991243, 1490028625934, 1489099342025 ], "note_signatures": [ [ "ICLR.cc/2017/workshop/paper141/AnonReviewer1" ], [ "ICLR.cc/2017/pcs" ], [ "ICLR.cc/2017/workshop/paper141/AnonReviewer2" ] ], "structured_content_str": [ "{\"rating\": \"7: Good paper, accept\", \"review\": \"The idea of jointly training a model to predict reviews and ratings is a nice one, and while it has been studied elsewhere a number of times, the main extension here is to use an LSTM RNN based model rather than previous work that has made use of topic models etc.\\n\\nPositively, the idea is interesting and the comparisons to traditional work on matrix factorization are promising. The authors also consider issues of temporal evolution in the ratings, beating competitive baselines like TimeSVD++.\\n\\nNegatively, it's not clear whether LSTMs are really beneficial here over \\\"traditional\\\" rating+text methods. No quantitative comparison to another rating+text method is provided, though many are discussed. Text prediction is discussed in terms of perplexity, but again this is not quantitatively or qualitatively compared to alternatives.\\n\\nOverall though, I think the paper is worth accepting due to the overall promise of the idea, even if the experiments are not quite convincing yet.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}", "{\"rating\": \"10: Top 5% of accepted papers, seminal paper\", \"review\": \"This paper proposed a joint model for rate prediction and text generation. The author compared the methods on a more realistic time based split setting, which requires \\u201cpredict into the future.\\u201d The model is interesting and sound, and I think it should be accepted to ICLR workshop\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
H1U4mhVFe
Embracing Data Abundance
[ "Ondrej Bajgar", "Rudolf Kadlec and Jan Kleindienst" ]
There is a practically unlimited amount of natural language data available. Still, recent work in text comprehension has focused on datasets which are small relative to current computing possibilities. This article is making a case for the community to move to larger data and is offering the BookTest dataset as a step in that direction.
[ "Transfer Learning", "Semi-Supervised Learning", "Natural language processing", "Deep learning" ]
https://openreview.net/pdf?id=H1U4mhVFe
https://openreview.net/forum?id=H1U4mhVFe
ICLR.cc/2017/workshop
2017
{ "note_id": [ "ByBhY_Big", "HkkBuFaix", "SJny8qSjx", "SJf0b5Hsl" ], "note_type": [ "official_review", "comment", "comment", "official_review" ], "note_created": [ 1489500588866, 1490028598902, 1489507812117, 1489506761701 ], "note_signatures": [ [ "ICLR.cc/2017/workshop/paper100/AnonReviewer2" ], [ "ICLR.cc/2017/pcs" ], [ "~Rudolf_Kadlec1" ], [ "ICLR.cc/2017/workshop/paper100/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Introduces a large dataset for training machine comprehension models\", \"rating\": \"7: Good paper, accept\", \"review\": \"Contributions:\\n\\nThis paper introduces BookTest, a new and large training data for machine comprehension. While CBT has ~200K examples, this dataset has ~14M examples. Authors also take the AS Reader model and show the performance gain on CBT test set when trained with BookTest training data. Additional human study shows that humans can answer more than 50% of the questions which the model fails to answer which means there is room for improvement.\", \"novelty\": \"Adding more training data helps. There is nothing novel in this. However, authors provide a bigger dataset to train machine comprehension model which I think is valuable.\", \"clarity\": \"Paper is extremely well written (given the page constraints).\", \"significance\": \"This is a significant contribution since I believe that this data can be used to pre-train any machine comprehension model.\", \"pros\": [\"new dataset which is very large and useful\", \"well written paper\"], \"cons\": [\"authors could have benchmarked using more models (than just AS Reader)\"], \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}", "{\"title\": \"minor clarification\", \"comment\": \"Thank you for your review.\\nWe would like to correct your observation that \\\"human performance is actually below that of some of the models\\\".\\nWe did human evaluation only on a subset of examples that the machine learning model (ASReader trained on BookTest) could not answer correctly. Therefore you can't directly compare numbers from Table 2 and Table 3. \\nSince we tested humans on examples that were too difficult for the machine learning model human performance on the whole dataset would be almost certainly even better.\"}", "{\"title\": \"review\", \"rating\": \"7: Good paper, accept\", \"review\": \"This paper introduces the BookTest dataset, which is a huge dataset of reading comprehension questions compared to existing datasets. While the paper does not contain any new methodological contributions, the dataset has potential to be extremely valuable to the research community, and as such I hope it is accepted to the workshop. The authors perform a human evaluation to show that, unlike the CNN dataset, there is room for models to improve on this larger dataset.\\n\\nWith that said, it is interesting that the human performance is actually below that of some of the models; I wonder what types of questions humans are answering incorrectly, especially since human performance on the questions incorrectly answered by ASReader is relatively high. It would be great if the authors could add some discussion on this in future versions of the paper. 
If the types of questions machines struggle with are different than those humans struggle with, this could potentially impact the sorts of architectural decisions made by future researchers to improve accuracy on this task.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
HyhbYrGYe
Regularizing Neural Networks by Penalizing Confident Output Distributions
[ "Gabriel Pereyra", "George Tucker", "Jan Chorowski", "Lukasz Kaiser", "Geoffrey Hinton" ]
We propose regularizing neural networks by penalizing low entropy output distributions. We show that penalizing low entropy output distributions, which has been shown to improve exploration in reinforcement learning, acts as a strong regularizer in supervised learning. We connect our confidence penalty to label smoothing through the direction of the KL divergence between networks output distribution and the uniform distribution. We exhaustively evaluate our proposed confidence penalty and label smoothing (uniform and unigram) on 6 common benchmarks: image classification (MNIST and Cifar-10), language modeling (Penn Treebank), machine translation (WMT'14 English-to-German), and speech recognition (TIMIT and WSJ). We find that both label smoothing and our confidence penalty improve state-of-the-art models across benchmarks without modifying existing hyper-parameters.
[ "neural networks", "confidence penalty", "confident output distributions", "label smoothing", "exploration", "reinforcement learning", "strong regularizer", "supervised learning", "direction" ]
https://openreview.net/pdf?id=HyhbYrGYe
https://openreview.net/forum?id=HyhbYrGYe
ICLR.cc/2017/workshop
2017
{ "note_id": [ "rkTFDWA5e", "Hk_xWjtoe", "B1lYzOFpse", "Hykbw3gjx" ], "note_type": [ "official_review", "comment", "comment", "official_review" ], "note_created": [ 1489012613489, 1489772784140, 1490028561443, 1489188599553 ], "note_signatures": [ [ "ICLR.cc/2017/workshop/paper32/AnonReviewer1" ], [ "~George_Tucker1" ], [ "ICLR.cc/2017/pcs" ], [ "ICLR.cc/2017/workshop/paper32/AnonReviewer2" ] ], "structured_content_str": [ "{\"title\": \"Another softmax smoothing technique\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"The paper proposes to add a regularizing term to the objective function which penalizes the estimation of distributions with small entropy. It is one of these small tricks that were tried out by various groups even though only few people mention it in publications because the changes in performance are small and the additional hyperparameter makes it unattractive. This is also reflected in the paper here, as the improvements are very small compared to the baselines and usually vanish if more care is taking w.r.t. traditional regularization approaches.\", \"further_remarks\": [\"The evaluation is done on a broad spectrum of tasks, but the selections of the baseline systems is questionable. Especially on WSJ, there is no good reason to take an attention based seq2seq model but not also a network trained in a hybrid fashion or with CTC. Especially the CTC experiment would have been of great interest since the criterion tends to favor sharp probabilities.\", \"A theoretical perspective on the convergence is not well established and a proper justification why this is should be able to improve neural network training is missing (except for the norms of the gradients on MNIST). If the argument is that gradients saturate too quickly if probabilities go too high then I would like to see an experiment with the squared error criterion as comparison, where this effect is not that large.\", \"In total I appreciate the work and broad evaluation but would suggest to include this method in a larger comparison paper that describes several of these tricks. The paper is well written and certainly correct, and the required scope is clearly limited within a workshop. Yet I would like to see some of the points here addressed before recommending acceptance.\"], \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"RE: Review\", \"comment\": \"Thank you for the careful review and helpful feedback.\\n\\n1) Yes, we agree that in some cases, more competitive baselines exist. There was a tradeoff in implementing state-of-the-art baselines with all of the bells and whistles and trying the technique across multiple domains and different model architectures. For this workshop submission, we decided it would of more interest to focus on broadly evaluating the technique, but we see the merit of the other approach too. \\n\\nPreliminary results of label smoothing with the CTC objective yielded a small improvement when no language model was used. We smoothed all non-blank tokens at all timesteps using an auxiliary cost function. However, note that unlike the seq2seq networks that directly output next-token predictions, CTC comes with its own loss function and it is not obvious how to best apply the smoothing - Do you force a smooth distribution of the non-blank tokens? Do you smooth the blank? 
Do you extract alignments and only smooth the emission locations, or you indiscriminately smooth all locations? Furthermore, CTC needs a language model for optimal decoding. This will need to be tuned together with smoothing. Exploring CTC and label smoothing is thus an interesting topic, but may be of more interest to a speech focused venue, and treating it thoroughly exceeds the 3-page limitation of the workshop.\\n\\n2) We agree that the theoretical justification is mostly speculative (our primary contribution is the extensive empirical evaluation), however, as you suggest, we hypothesize that it is due to gradients saturating. We notice that once an example is correctly predicted, the only way to increase the log likelihood is to make the prediction sharper. Hence the outputs become uncalibrated and unless capacity is controlled in some way, the network will put very sharp distributions on its predictions. These regularizers prevent this behavior. Moreover, the Inception paper notes that the gradient vanishes on confident correctly predicted samples. This does not happen with label smoothing and the confidence penalty, which means that we do get some training signal for the lower layers even on correct predictions. This may improve data efficiency when doing multiple epochs (normally during the later passes only the few samples that the net doesn\\u2019t get confidently have any influence). This is similar to the trick from Yann LeCun\\u2019s \\u201cEfficient Backprop\\u201d in which the hyperbolic tangent was used on the output of the net, but it was scaled to have a range of [-1.1, 1.1], while the targets were +-1. Thus the net was never driven into saturation. \\n\\nCan you clarify the experiment you're suggesting to test this? Do you mean a regression task or classification with sigmoid outputs and L2 loss?\\n\\nA second point is that for misclassified examples, the network can get large gradients which may slow training. The confidence penalty would encourage the model to place mass on all classes, which would reduce the norm of these gradients, which is confirmed in the plot of the gradient norm. \\n\\nLastly, we can interpret our approach as a regularizer encouraging the predictive distribution to be close to uniform. So, when the model has little evidence it is regularized to the uniform, however, when sufficient evidence is accumulated, the predictive distribution matches the data.\"}", "{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}", "{\"title\": \"Thorough evaluation of smoothing techniques, interesting introduction of confidence penalization\", \"rating\": \"7: Good paper, accept\", \"review\": \"The paper proposes using confidence as a term for regularization in neural networks, helping to prevent overfitting by penalizing overly confident predictions. The experiments range across a number of fields and architectures, helping to show both the generality of the technique and where it appears to be most helpful.\\n\\nThe work and experiments are rather detailed and exhaustive, especially when delving in to the Appendix for specific details of the various experiments. The confidence penalty regularization is compared to and combined with dropout and label smoothing. I do agree with another reviewer that some of the baseline systems are weaker than others. Seeing an LSTM used without recurrent dropout (variational dropout, zoneout, ...) as a baseline for language modeling is unfortunate for example. 
Even with that acknowledged, the results and analysis over a variety of datasets is enough to convince me of the capability of confidence penalization as a regularization technique.\\n\\nOverall, I think the paper makes a good contribution to the knowledge and application of various smoothing techniques and introduces the benefits of confidence penalization as a competing and/or complementary regularization technique. The paper is clearly written and detailed in the number and variety of experiments performed.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
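Both regularizers discussed in this record are simple to write down. Below is a minimal sketch on a single toy softmax output; beta = 0.1 and the smoothing weight eps = 0.1 are illustrative values, not the tuned hyper-parameters from the paper.

```python
# Hedged sketch of the confidence penalty and uniform label smoothing
# on a single toy softmax output.
import numpy as np

def log_softmax(z):
    z = z - z.max()
    return z - np.log(np.exp(z).sum())

logits = np.array([4.0, 1.0, -2.0, 0.5])   # toy network output
target = 0                                  # correct class
logp = log_softmax(logits)
p = np.exp(logp)

nll = -logp[target]

# Confidence penalty: penalize low-entropy (over-confident) output distributions.
beta = 0.1
entropy = -(p * logp).sum()
loss_conf_penalty = nll - beta * entropy

# Uniform label smoothing: mix the one-hot target with the uniform distribution.
eps, K = 0.1, len(logits)
smoothed = np.full(K, eps / K)
smoothed[target] += 1.0 - eps
loss_label_smoothing = -(smoothed * logp).sum()

print(loss_conf_penalty, loss_label_smoothing)
```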
Hk8-lkHKe
Changing Model Behavior at Test-time Using Reinforcement Learning
[ "Augustus Odena", "Dieterich Lawson", "Christopher Olah" ]
Machine learning models are often used at test-time subject to constraints and trade-offs not present at training-time. For example, a computer vision model operating on an embedded device may need to perform real-time inference, or a translation model operating on a cell phone may wish to bound its average compute time in order to be power-efficient. In this work we describe a mixture-of-experts model and show how to change its test-time resource-usage on a per-input basis using reinforcement learning. We test our method on a small MNIST-based example.
[ "Reinforcement Learning", "Deep learning" ]
https://openreview.net/pdf?id=Hk8-lkHKe
https://openreview.net/forum?id=Hk8-lkHKe
ICLR.cc/2017/workshop
2017
{ "note_id": [ "HkK5Hfwie", "HyPXqgJjx", "Hkf8utpix", "SJ_uVm1jx" ], "note_type": [ "comment", "official_review", "comment", "official_review" ], "note_created": [ 1489606032869, 1489074718886, 1490028617944, 1489085552290 ], "note_signatures": [ [ "~Dieterich_Lawson1" ], [ "ICLR.cc/2017/workshop/paper130/AnonReviewer2" ], [ "ICLR.cc/2017/pcs" ], [ "ICLR.cc/2017/workshop/paper130/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Figure 2 helps answer this question.\", \"comment\": \"Thank you for your review!\\n\\nI think your question is answered by figure 2. You can interpret the left and right edges of the green curve as a situation where you are training for the worst case or best case, respectively. At the left end, the green curve represents always using the most resource-constrained model (worst case) and at the right end it represents always using the most resource-hungry model (best case). It seems that if you know ahead of time that you need to operate exclusively in the worst case setting then it is better to train a model specifically for that task, as indicated by the green line crossing over the blue. The result is similar for the best case scenario although the performance gain is not as clear in that case. The main benefit of our model comes from the in-between cases when the model can choose per-instance how much of its 'budget' to expend.\"}", "{\"title\": \"An interesting paper\", \"rating\": \"7: Good paper, accept\", \"review\": \"The idea of the paper is to use RL to determine the resources used at testing phase. This paper is well-motivated and interesting, and the experimental results is promising. I think it is a good paper to be in the workshop track of ICLR.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}", "{\"title\": \"Moving computation tradeoffs to test-time\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"review\": \"A cool improvement on previous partial computation works that allows to change amount of computation at test time rather than during training.\\n\\nOne downside is that empirical comparisons with previous approaches are lacking. Is it better to train for the worst case (previous methods) or to train for the general case with a test-time precision knob? (knob that controls computational resource allocation).\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
rk7YG_4Yg
THE PREIMAGE OF RECTIFIER NETWORK ACTIVITIES
[ "Stefan Carlsson", "Hossein Azizpour", "Ali Razavian", "Josephine Sullivan", "Kevin Smith" ]
We give a procedure for explicitely computing the complete preimage of activities of a layer in a rectifier network with fully connected layers, from knowledge of the weights in the network. The most general characterization of preimages is as piecewise linear manifolds in the input space with possibly multiple branches. This work therefore complements previous demonstrations of preimages obtained by heuristic optimization and regularization algorithms Mahendran & Vedaldi (2015; 2016) We are presently empirically evaluating the procedure and it’s ability to extract complete preimages as well as the general structure of preimage manifolds. ICLR 2017 CONFRENCE TRACK SUBMISSION: https://openreview.net/forum?id=HJcLcw9xg&noteId=HJcLcw9xg
[ "preimage", "rectifier network activities", "procedure", "preimages", "complete preimage", "activities", "layer", "rectifier network", "layers", "knowledge" ]
https://openreview.net/pdf?id=rk7YG_4Yg
https://openreview.net/forum?id=rk7YG_4Yg
ICLR.cc/2017/workshop
2017
{ "note_id": [ "Sy8YhvHix", "ByrNOYpjl", "rybaDvrog", "Skqhfclig", "Sks_Z7xoe" ], "note_type": [ "comment", "comment", "comment", "official_review", "official_review" ], "note_created": [ 1489497213664, 1490028589468, 1489495993348, 1489179313620, 1489150323302 ], "note_signatures": [ [ "~stefan_carlsson1" ], [ "ICLR.cc/2017/pcs" ], [ "~stefan_carlsson1" ], [ "ICLR.cc/2017/workshop/paper84/AnonReviewer1" ], [ "ICLR.cc/2017/workshop/paper84/AnonReviewer2" ] ], "structured_content_str": [ "{\"title\": \"The main purpose was to get a good geometric understanding of preimage manifolds\", \"comment\": \"We are aware of the dual basis and we also have a purely algebraic proof for the preimage construction which if time permits we will work into the paper. The main point with the derivation that we present is to get the geometric picture of the preimage, i.e. the relation of the preimage to the defining hyperplanes of the network.\\n\\nWe are thankful for the comments on typos etc. by the reviewer and they have been corrected in an updated version\"}", "{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}", "{\"title\": \"List of answers to objections\", \"comment\": \"* The discussion is quite narrow, assuming the same number of input and output units and that the weight matrix is regular.\\n\\nThe purpose of the paper is to understand the contribution of the non linear ReLU function to the pre image . The case of rank deficiency adds the nullspace of the mapping to the pre image in a very standard way\\n\\n*The biases are not adequately discussed\\n\\nWe don't know what is lacking about the biases. They do not appear in the basis computation since they only depend on the orientation of the hyperplanes\\n\\n*The situation for a network with multiple layers is not further discussed. \\n\\nThe situation for a network with multiple layers is demonstrated by the picture in figure 2.\\nThe preimage will in the depicted case consist of piecewise linear domains in the input space\\n\\n* The terminology is not used adequately, confusing notions such as kernel, nullspace, linear vs affine, etc. The discussion is confusing, with the reader having to guess often what could be meant. For instance, quantifiers are missing or wrong. The assumptions are not properly and timely stated, for instance regarding the number of outputs, the biases, the invertibility of the weight matrix, and so on. The typesetting is very poor and contains many typos. \\n\\nThe only specific well defined objection here is about number of inputs and outputs \\nWe clearly state that we are considering a fully connected network with a full rank transformation between layers.\"}", "{\"title\": \"review of ``THE PREIMAGE OF RECTIFIER NETWORK ACTIVITIES''\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": [\"The paper discusses a parametrization of the set of inputs that a layer of ReLUs maps to a given output value.\", \"PROS.\", \"The question about the preimages of ReLU networks is quite interesting.\", \"Parametrizing the preimages of a layer of ReLUs is a neat idea.\", \"The contribution is not very substantial in its current form, but it seems that with more time it could evolve further.\", \"CONS.\", \"The focus of the workshop is on late-breaking developments, very novel ideas and position papers. 
While the paper at hand includes a neat observation, I hardly regard it as a late-breaking development, very novel idea, or a position.\", \"The discussion is quite narrow, assuming the same number of input and output units and that the weight matrix is regular. The biases are not adequately discussed. The situation for a network with multiple layers is not further discussed.\", \"The practical implementation and computational aspects are not further discussed.\", \"The terminology is not used adequately, confusing notions such as kernel, nullspace, linear vs affine, etc. The discussion is confusing, with the reader having to guess often what could be meant. For instance, quantifiers are missing or wrong. The assumptions are not properly and timely stated, for instance regarding the number of outputs, the biases, the invertibility of the weight matrix, and so on. The typesetting is very poor and contains many typos.\"], \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Interesting paper, can be streamlined\", \"rating\": \"7: Good paper, accept\", \"review\": \"This paper studies the question: what is the set of points in the input space of a linear + relu layer that map to a given point in output space? This question has been studied before in an empirical manner (for multi-layer networks), but this paper is the first to give a simple linear algebraic answer.\\n\\nThe central construction of the paper is a basis e_i that has the property e_i^T w_j = delta(i,j): that is, e_i is orthogonal to all weight vectors w_j except w_i. Although it is not often taught in linear algebra courses, this is a well known concept called the dual basis of w. Since the paper uses quite a lot of space to explain the idea, it may be more efficient to simply state that every basis has a unique dual basis (with said property) and then continue to work with it, providing a reference to the reader who is interested in learning more about the concept and how one might compute a dual basis efficiently.\\n\\nThe analysis seems to be mostly correct for the case where W is a square, invertible matrix. This and other assumptions should be clearly stated. If W is not square or not invertible, many complicated phenomena may occur (https://en.wikipedia.org/wiki/Arrangement_of_hyperplanes).\\n\\nSince it is immediately clear that the pre-image of a relu net must be piecewise linear, the value of this paper is in a precise characterization of this linear manifold. To do this thoroughly, we cannot ignore the fact that e.g. not all layers in a neural network have the same dimensionality (although the full rank assumption may be reasonable for dimension-increasing layers).\\n\\nOne issue I'm slightly uncertain about is the role of the bias, which is sometimes ignored in the paper. Looking at the equation w_j^T x = ... on page 2, I think the constraints on alpha_i should involve b_i.\\n\\nThe question of preimages of multi-layer networks is treated very briefly at the end. I think the paper could be much more significant if you used some of the space used to explain the dual basis to go into greater depth on this question. E.g. 
how can we parameterize the piecewise linear manifold of a multi-layer net, given some values for the weights of the network?\", \"i_think_the_equation_for_pi_i_on_page_1_should_read\": \"Pi_i = {x : w_i^T x + b_i = 0} (i=1..n)\", \"the_paper_contains_many_typos_and_missing_periods\": \"explicitely\\nthe matrix W . and bias\\n\\\"sets of hyperplanes\\\" -> hyperplanes\\neasily using by just\\nintersection For\\n= 1 The\\na_j <= 0 By\\ne_i Any\\na_i >= 0 The\\nregion they will \\ncan therefore me considered\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
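The dual basis discussed in the review above has a very short computational form for a square, invertible W: its vectors are the columns of the inverse of W. The toy check below verifies that property and the resulting preimage directions numerically; the specific W, b, and pre-activation values are invented, and this is a reading of the review's construction rather than the paper's full procedure.

```python
# Hedged numerical check of the dual-basis idea from the review: for square,
# invertible W the columns e_i of inv(W) satisfy w_j . e_i = delta_ij, and moving
# along e_i for an inactive ReLU unit (pre-activation kept <= 0) does not change
# relu(W x + b). All concrete numbers are toy values.
import numpy as np

rng = np.random.default_rng(2)
W = rng.normal(size=(3, 3))
b = np.array([0.5, -2.0, 0.1])
E = np.linalg.inv(W)                       # column i of E is the dual basis vector e_i
assert np.allclose(W @ E, np.eye(3))

t = np.array([1.0, -1.0, 0.5])             # chosen pre-activations; unit 1 is inactive
x0 = E @ (t - b)                           # so that W @ x0 + b == t
relu = lambda z: np.maximum(z, 0.0)
y0 = relu(W @ x0 + b)

x1 = x0 - 0.7 * E[:, 1]                    # step along e_1 keeps unit 1's pre-activation <= 0
assert (W @ x1 + b)[1] <= 0
assert np.allclose(relu(W @ x1 + b), y0)   # the layer output is unchanged
print("preimage direction along e_1 verified")
```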
ry8u21rtl
Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results
[ "Antti Tarvainen", "Harri Valpola" ]
The recently proposed Temporal Ensembling has achieved state-of-the-art results in several semi-supervised learning benchmarks. It maintains an exponential moving average of label predictions on each training example, and penalizes predictions that are inconsistent with this target. However, because the targets change only once per epoch, Temporal Ensembling becomes unwieldy when learning large datasets. To overcome this problem, we propose Mean Teacher, a method that averages model weights instead of label predictions. As an additional benefit, Mean Teacher improves test accuracy and enables training with fewer labels than Temporal Ensembling. Mean Teacher achieves error rate 4.35% on SVHN with 250 labels, better than Temporal Ensembling does with 1000 labels.
[ "Computer vision", "Deep learning", "Semi-Supervised Learning" ]
https://openreview.net/pdf?id=ry8u21rtl
https://openreview.net/forum?id=ry8u21rtl
ICLR.cc/2017/workshop
2017
{ "note_id": [ "SJbFLvgix", "HkPCjmBil", "HyzbBfTG-", "H1JKvtgil", "SypIuYpjl" ], "note_type": [ "official_review", "comment", "comment", "official_review", "comment" ], "note_created": [ 1489167992618, 1489480654553, 1497339130340, 1489176439405, 1490028629180 ], "note_signatures": [ [ "ICLR.cc/2017/workshop/paper145/AnonReviewer2" ], [ "~Antti_Tarvainen1" ], [ "~Antti_Tarvainen1" ], [ "ICLR.cc/2017/workshop/paper145/AnonReviewer1" ], [ "ICLR.cc/2017/pcs" ] ], "structured_content_str": [ "{\"title\": \"Simple, interesting semi-supervised training technique\", \"rating\": \"7: Good paper, accept\", \"review\": \"This paper proposes a simple variation on the recently proposed \\\"temporal ensembling\\\" method for semi-supervised learning. Temporal ensembling smoothes label predictions (using the same model on unlabeled data) to produce additional labeled data. Instead, the authors propose to smooth the model weights during training (exponential moving average) to yield a \\\"teacher network\\\" which can be used to generate new labeled data in an online way.\\n\\nThe approach is well motivated. The main advantage over temporal ensembling is that this method admits online training vs temporal ensembling which only yields new labels once per epoch. The experimental results seem believable, and indicate an improvement over temporal ensembling.\\n\\nOverall, it appears to be a simple and interesting twist on temporal ensembling which yields better results in the semi-supervised setting.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Thank you for the reviews\", \"comment\": \"Answers to questions:\\n\\n1. We chose to use batch norm instead of weight norm because we had limited time and our codebase didn't easily support weight norm at that point. We are going to run the experiments with weight norm too, when we have time from our other engagements.\\n\\n2. Validation set was used for confirming our selection of the EMA alpha hyperparameter (0.999 was the first value we tried and turned out to be close to optimal). For the comparability of results, we chose to reuse other hyperparameters from Temporal Ensembling paper. We did perform a hyperparameter search for these other hyperparameters as well, which confirmed that the choice of hyperparameters was reasonable, although not perfectly optimal. The validation set was also used to inspect the inner workings of the model, which affected our intuition and choice of experiments.\\n\\nWe will update the paper when we have the weight norm results. We may also change the title of the paper and the name of the method to something that is easier to digest.\\n\\nThank you for your comments.\"}", "{\"title\": \"Paper updated\", \"comment\": \"Also, source code is now available at https://github.com/CuriousAI/mean-teacher\", \"the_poster_shown_at_iclr_2017_is_here\": \"https://github.com/CuriousAI/mean-teacher/blob/master/ICLR_2017_poster.pdf\"}", "{\"title\": \"Interesting extension of the temporal ensembling\", \"rating\": \"7: Good paper, accept\", \"review\": \"The paper proposes a new way to do semi-supervised learning. It takes inspiration from the temporal ensembling work. But instead of having an exponential moving average of predictions computed every epoch, and using those as labels/targets, this method use a moving average of the weights of the network and form a teach net that gives labels. 
This essentially extends the temporal ensembling method to work in an online fashion.\\n\\nThis is a simple and potentially powerful way to do semi-supervised learning. I am curious why the authors used batch norm instead of weight normalization. It's unclear to me what the validation set was used for (presumably hyper-parameter selection). It'd be nice to see some exploration for \\\\alpha and its effects. Otherwise, the SVHN results look pretty compelling.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}" ] }
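The two moving parts described above — an exponential moving average of the student's weights acting as the teacher, and a consistency cost between the two networks' predictions — can be sketched in a few lines. The shapes, the MSE form of the consistency cost, and everything except the alpha = 0.999 mentioned in the discussion are illustrative assumptions.

```python
# Hedged sketch of the Mean Teacher ingredients on toy arrays.
import numpy as np

rng = np.random.default_rng(3)
alpha = 0.999                                         # EMA decay discussed in the thread

student = {"W": rng.normal(size=(4, 3)), "b": np.zeros(3)}
teacher = {k: v.copy() for k, v in student.items()}   # teacher starts as a copy

def ema_update(teacher, student, alpha):
    """teacher <- alpha * teacher + (1 - alpha) * student, parameter by parameter."""
    for k in teacher:
        teacher[k] = alpha * teacher[k] + (1.0 - alpha) * student[k]
    return teacher

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

x = rng.normal(size=(5, 4))                           # a batch of unlabeled inputs (toy)
p_student = softmax(x @ student["W"] + student["b"])
p_teacher = softmax(x @ teacher["W"] + teacher["b"])
consistency = np.mean((p_student - p_teacher) ** 2)   # one possible consistency cost (MSE)

# after each optimizer step on the student (omitted here), refresh the teacher:
teacher = ema_update(teacher, student, alpha)
print("consistency cost:", consistency)
```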
HkGjCUEte
HiNet : Hierarchical Classification with Neural Network
[ "Zhenzhou Wu", "Sean Saito" ]
Traditionally, classifying large hierarchical labels with more than 10000 distinct traces can only be achieved with flatten labels. Although flatten labels is feasible, it misses the hierarchical information in the labels. Hierarchical models like HSVM by \cite{vural2004hierarchical} becomes impossible to train because of the sheer number of SVMs in the whole architecture. We developed a hierarchical architecture based on neural networks that is simple to train. Also, we derived an inference algorithm that can efficiently infer the MAP (maximum a posteriori) trace guaranteed by our theorems. Furthermore, the complexity of the model is only $O(n^2)$ compared to $O(n^h)$ in a flatten model, where $h$ is the height of the hierarchy.
[ "hierarchical classification", "flatten labels", "hinet", "neural network hinet", "neural network", "large hierarchical labels", "distinct traces", "feasible", "hierarchical information", "labels" ]
https://openreview.net/pdf?id=HkGjCUEte
https://openreview.net/forum?id=HkGjCUEte
ICLR.cc/2017/workshop
2017
{ "note_id": [ "B1MxZImoe", "HJRNyZ7ie", "Sy4bjKQox", "B1dhCcgjl", "ryf4_F6sx" ], "note_type": [ "official_review", "comment", "comment", "official_review", "comment" ], "note_created": [ 1489359081776, 1489338165696, 1489373948123, 1489182384136, 1490028586203 ], "note_signatures": [ [ "ICLR.cc/2017/workshop/paper80/AnonReviewer2" ], [ "~Zhenzhou_Wu1" ], [ "~Zhenzhou_Wu1" ], [ "ICLR.cc/2017/workshop/paper80/AnonReviewer1" ], [ "ICLR.cc/2017/pcs" ] ], "structured_content_str": [ "{\"title\": \"hierarchical classification\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": [\"This paper investigates hierarchical classification using deep nets during learning and a greedy MAP procedure during inference.\", \"While this paper is trying to address an interesting subject, the paper is very confusing to read and needs a lot of improvement in explaining its content before it can get published.\", \"\\\"A combined cost allows travelling of information across levels which is equivalent to transfer learning between levels\\\". What happen to the results when the network architecture does allow shared cost function, but while keeping the downpour algorithm intact?\", \"The experimental section should briefly mention the task description and put the results into context w.r.t alternative methods (other than just the flatten network)\"], \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Reply\", \"comment\": \"Hi Reviewer,\\n\\nI will like to make the explanation clearer, but due to the page limitation, we only presented the core algorithm. So to answer all your questions. I will like to illustrate the objective of the solution with an example. Typically in hierarchical labels, given an input X, we can have y1-y2-y3 of conditional labels whereby subsequent levels is conditioned on the previous level, for example, a department under a school, a school under a faculty, the length of the hierarchy can be different for different X. For large hierarchical labels with for example more than 10k possible traces. Traditionally, we can flatten the labels which then treats similar trace like a-b-c and a-b-d as totally different, or we can put a classifier at each decision node, but it will require a sheer number of classifiers and takes a very long time to train. Here, we propose a much simpler solution with neural network that models the hierarchy of the label and at the same time is super quick to train. Below are the answers to your questions. Let me know if I can help you to understand the paper better, because I don't want it to be missed because my reader don't understand the paper.\\n\\n[Question] I did not completely understand how the neural network captures the hierarchy of the structure and what each depth is presenting?\\n\\n[Answer] It's neural network. Each level of the network predicts one level of the hierarchy. Given two similar inputs X1 with a-b-c and X2 with a-b-d, since the networks predict each level independently, so a-b-c and a-b-d will be treated to be closer to each other in a HiNet than if you treat them as two different classes in a flattened label. In the network, we append an additional stop label to a trace. So the actual label for X1 is a-b-c-stop and X2 is a-b-d-stop. This is necessary because the traces can be of different length. Each depth of the network represents each level of the hierarchy label\\n \\n[Question] Does each nn depth correspond to a level in the hierarchy? 
How do the nodes indicate the predictions and which nodes are responsible for prediction?\\n\\n[Answer] Yes. Each node represent a label in the hierarchy. During inference, the label for the MAP trace return by the network will contain all the nodes that the MAP trace has passed through. For example, let's say we have a trace passes through the 2nd neuron in first level, 3rd neuron in second level, and 6th neuron in third level and finally it ends in stop neuron in the 4th level, then the returned label will be a2-b3-c6-stop.\\n\\n[Question] I did not understand how the neural network architecture is designed? Is it a feedforward as shown in the figures? How does it relate to the hierarchy of classes?\\n\\n[Answer] It is indeed a feedforward architecture. However one point that I'm afraid the reader may confuse is that they may treat the connection between layers as weights, no, the connections are not weights, the connections are just an indication that there is a relationship between a child neuron and a parent neuron.\\n\\nThere are two different paradigm for training and inference, during training as show in figure 2a, we want the model to predict the label at each level given the input X trained with a combined cost function. A combined cost function allows conditional information between levels to be transferred. \\nDuring inference, as shown by figure2b, after the model gives a probabilistic score of labels at different levels, the objective of the inference algorithm is to find the MAP trace which is p(a, b, c .. stop) > p(every other trace) which is derived using the downpour algorithm and the MAP trace derived by the downpour algorithm is theoretically guaranteed by the three theorems.\\n\\n[Question] The authors mention that training hierarchical SVM's are difficult. Can they mention in what sense are these networks difficult to train?\\n\\n[Answer] The training takes a super long time, because there are just sheer number of svms to train (>10000), we run a hsvm for more than a month and it's still running. Also the amount of memory it consumes is also amazing when we want to train them in parallel.\"}", "{\"title\": \"Reply\", \"comment\": \"[Question] While this paper is trying to address an interesting subject, the paper is very confusing to read and needs a lot of improvement in explaining its content before it can get published.\\n\\n[Answer] Thanks for the feedback, I will definitely try to improve the presentation so that it's easier to understand. However I hope to re-emphasize the contributions in the paper in case it get missed out because of the presentation.\\n\\n1. By simply appending a stop neuron at each level, we have a very simple neural network architecture that can model any kind of hierarchy and traces of any length. \\n2. It reduces space complexity from O(n^h) of a flatten classifier to O(h n^2) in HiNet where h is the maximum height of the hierarchy, and n is the average number of classes in one level. This may not be significant for small depth, but difference is huge if h is large. for h=10 and n=100, traditional flatten model will need 10^20 parameters while HiNet only requires 10^5. And compare HiNet to HSVM, the memory saving is even more significant, given that each SVM takes up O(number of support vectors ^2) which can be huge.\\n3. It reduces the inference time (downpour algorithm) also from O(n^h) to O(hn^2). 
The traditional way of finding the trace with the maximum probability is to look at all traces which is of order O(n^h), however downpour algorithm significantly reduce the time complexity to O(hn^2) by greedily calculating the MAP at each level, thinking a time reduction of (10^20 to 10^5 for h=10 and n=100).\\n4. In a few years time as we deal with large number of classes (typically hierarchical one), this neural network based model will have significant advantage over traditional node-based models.\\n\\n\\n[Question] \\\"A combined cost allows travelling of information across levels which is equivalent to transfer learning between levels\\\". What happen to the results when the network architecture does allow shared cost function, but while keeping the downpour algorithm intact? \\n\\n[Answer] Not too clear about your question. The combined cost is used during training, while the dourpour algorithm is used for inference after the model is trained. Our results have shown that using a combined cost improves the prediction accuracy for each individual level compared to a separate cost for each level. We think this may be due to the transfer of knowledge of the dependences between levels by the cost function \\n\\n[Question] The experimental section should briefly mention the task description and put the results into context w.r.t alternative methods (other than just the flatten network)\\n[Answer] Totally agree, we tried HSVM on a subset of features of DMOZ in order to speed up the training, but the results was pretty far off from both flatten and hiNet and that's why we didn't put it in. On the full feature which has about 80 dimensions, the training is impossible (thinking of the number of parameters in each svm which is proportional to the input dimension), takes a month and it's still running. We are also thinking of comparing them in a simpler hierarchy which has less than 100 unique traces. But since the workshop paper is more of presenting a very interesting idea rather than getting the full results, that's why we submitted here. I am humble to hear more feedbacks from you.\"}", "{\"title\": \"This paper proposes a neural network based approach for hierarchical classification\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"In this paper, the authors propose a hierarchical classification method using neural networks. Traditional techniques that consider the hierarchy while training are inefficient to train. Other techniques aim to solve this inefficiency by flattening the labels, which results in ignoring the hierarchical relationships between the labels.\\n\\nThe main problem with the paper is with its presentation. I did not completely understand how the neural network captures the hierarchy of the structure and what each depth is presenting? Does each nn depth correspond to a level in the hierarchy? How do the nodes indicate the predictions and which nodes are responsible for prediction?\\n\\nI did not understand how the neural network architecture is designed? Is it a feedforward as shown in the figures? How does it relate to the hierarchy of classes?\\n\\nThe authors mention that training hierarchical SVM's are difficult. Can they mention in what sense are these networks difficult to train?\\n\\nThere are some typos in the paper. 
For example:\\n - Page 1, Models such as hierarchical SVM become ...\\n - Section 2.2, Line 3 - The sentence is incomplete \\n - Page 3, We compare HiNet with a Flatten Network which has ...\\nand so on\\n\\nI suggest re-writing the paper to make it more clear and understandable.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"decision\": \"Reject\", \"title\": \"ICLR committee final decision\"}" ] }
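The author replies above describe per-level predictions with an explicit stop label and a greedy, level-by-level choice of the most probable trace. The sketch below illustrates only that decoding loop on invented per-level probability tables; it makes no claim to reproduce the paper's downpour algorithm or its MAP guarantees.

```python
# Hedged sketch of greedy, level-by-level decoding of a hierarchical label with
# a "stop" symbol, on toy per-level distributions (not a trained HiNet).
import math

STOP = "stop"
levels = [
    {"a1": 0.2, "a2": 0.7, STOP: 0.1},
    {"b3": 0.6, "b4": 0.3, STOP: 0.1},
    {"c6": 0.5, "c7": 0.1, STOP: 0.4},
    {STOP: 1.0},
]

trace, logp = [], 0.0
for dist in levels:
    label = max(dist, key=dist.get)      # greedy choice at this level
    logp += math.log(dist[label])
    if label == STOP:
        break
    trace.append(label)

print("-".join(trace + [STOP]), "log-prob:", logp)
```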
BkL7bONFe
Joint Multimodal Learning with Deep Generative Models
[ "Masahiro Suzuki", "Kotaro Nakayama", "Yutaka Matsuo" ]
We investigate deep generative models that can exchange multiple modalities bi-directionally, e.g., generating images from corresponding texts and vice versa. Recently, some studies handle multiple modalities on deep generative models. However, these models typically assume that modalities are forced to have a conditioned relation, i.e., we can only generate modalities in one direction. To achieve our objective, we should extract a joint representation that captures high-level concepts among all modalities and through which we can exchange them bi-directionally. As described herein, we propose a joint multimodal variational autoencoder (JMVAE), in which all modalities are independently conditioned on joint representation. In other words, it models a joint distribution of modalities. Furthermore, to be able to generate missing modalities from the remaining modalities properly, we develop an additional method, JMVAE-kl, that is trained by reducing the divergence between JMVAE's encoder and prepared networks of respective modalities. Our experiments show that JMVAE can generate multiple modalities bi-directionally.
[ "modalities", "deep generative models", "multiple modalities", "jmvae", "models", "joint representation", "joint multimodal", "multimodal", "images", "texts" ]
https://openreview.net/pdf?id=BkL7bONFe
https://openreview.net/forum?id=BkL7bONFe
ICLR.cc/2017/workshop
2017
{ "note_id": [ "HkwtuUXol", "SJEVOYpse", "rJiAuulsx" ], "note_type": [ "official_review", "comment", "official_review" ], "note_created": [ 1489361023172, 1490028587869, 1489172690621 ], "note_signatures": [ [ "ICLR.cc/2017/workshop/paper82/AnonReviewer2" ], [ "ICLR.cc/2017/pcs" ], [ "ICLR.cc/2017/workshop/paper82/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"A sound model with good preliminary results\", \"rating\": \"7: Good paper, accept\", \"review\": \"Paper summary\\nThis paper proposes a VAE model for modeling multimodal data.\\nAdditionally, a KL-divergence term is added in order to encourage the model to\\nlearn a latent representation such that the same representation can be inferred\\nfrom any one modality alone. This makes it possible to use the inferred\\nrepresentation bidirectionally - i.e. to go from any one modality to another.\", \"pros\": [\"The models gets better or comparable log probs when compared to relevant baselines.\", \"The analysis of the learned representation is well presented.\"], \"cons\": \"- A comparison of JMVAE and JMVAE-kl is not made.\\n- A description of the network architecture is not given. Presumably this looks\\n like two pathways (one for each modality) fused together at the top (similar\\nto the ones in Ngiam et al. 2011) to make it easy to split the parameters into\\n\\\\theta_x and \\\\theta_w. Some description of this network should be added.\\n\\nMinor comments\\n- Please check the legend in Fig 1(b). It seems that the green dots should be \\\"Base\\\". Also some of the images seem to be repeated.\\n- Please reconsider the use of `w' to represent a modality. w is often\\n associated with the weights of a network so this makes it a little jarring to\\nread.\\n- \\\"then the inferred latent variable becomes incomplete and generated samples might collapse\\\" : not clear what this means. Please explain.\\n\\nOverall, the paper describes a sound model with good preliminary results.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}", "{\"title\": \"VAE for learning the joint density of images and image-attributes\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"Summary\\n\\nThis paper investigates two versions of the VAE model in the context of learning the joint density of images and image-attributes. The authors introduce auxiliary conditional densities to help with the task of sampling images conditioned on text and vice-versa.\\n\\n\\nNovelty\\n\\nWhile images and text may seem like different things, from a mathematical perspective of modeling their joint densities they are not. The assumption that data comes as a pair (x, w) or as a single vector x makes no difference from the model's perspective.\\nFor instance, the authors propose a factorized structure p(x,w|z) = p(x|z)p(w|z). This assumption by itself is not novel as already in the VAE model pixels are assumed to be independent of each other given z. \\nThe main contribution of this work is in proposing the JMVAE-kl model, which introduces the uni-modal conditional densities p(z|x) and p(z|w) and a modified loss to make these densities close to the variational posterior q(z|x,w).\\n\\n\\nClarity\\n\\nAlthough the specific model architectures are not explained, the text is overall clear.\\n\\n\\nSignificance\\n\\nThe results shown in this paper are very specific. 
But this work introduces an alternative way of learning conditional VAEs worth of further investigations.\", \"minor_points\": \"It would be good to have confidence intervals for the numbers on Table 1.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
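The extra terms that distinguish JMVAE-kl — divergences pulling the single-modality encoders q(z|x) and q(z|w) toward the joint encoder q(z|x,w) — have a closed form when all encoders are diagonal Gaussians. The sketch below shows only those terms with invented moments; the KL direction, the weight alpha = 0.1, and the Gaussian assumption are readings/assumptions on our part, and the reconstruction part of the ELBO is omitted.

```python
# Hedged sketch of the extra JMVAE-kl terms with toy Gaussian encoder moments.
import numpy as np

def kl_diag_gauss(mu_q, logvar_q, mu_p, logvar_p):
    """KL( N(mu_q, var_q) || N(mu_p, var_p) ) for diagonal Gaussians."""
    var_q, var_p = np.exp(logvar_q), np.exp(logvar_p)
    return 0.5 * np.sum(logvar_p - logvar_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

rng = np.random.default_rng(4)
d = 6
mu_joint, logvar_joint = rng.normal(size=d), np.zeros(d)            # q(z|x,w)
mu_x, logvar_x = mu_joint + 0.1 * rng.normal(size=d), np.zeros(d)    # q(z|x)
mu_w, logvar_w = mu_joint + 0.3 * rng.normal(size=d), np.zeros(d)    # q(z|w)

alpha = 0.1   # illustrative weight on the extra terms
extra = alpha * (kl_diag_gauss(mu_joint, logvar_joint, mu_x, logvar_x)
                 + kl_diag_gauss(mu_joint, logvar_joint, mu_w, logvar_w))
print("JMVAE-kl regularizer (toy):", extra)
```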
SJ5JPtg_x
Towards "AlphaChem": Chemical Synthesis Planning with Tree Search and Deep Neural Network Policies
[ "Marwin Segler", "Mike Preuss", "Mark P. Waller" ]
Retrosynthesis is a technique to plan the chemical synthesis of organic molecules, for example drugs, agro- and fine chemicals. In retrosynthesis, a search tree is built by analysing molecules recursively and dissecting them into simpler molecular building blocks until one obtains a set of known building blocks. The search space is intractably large, and it is difficult to determine the value of retrosynthetic positions. Here, we propose to model retrosynthesis as a Markov Decision Process. In combination with a Deep Neural Network policy trained on 5.5 million reactions, Monte Carlo Tree Search (MCTS) can be used to evaluate positions. In exploratory studies, we demonstrate that MCTS with neural network policies outperforms the traditionally used best-first search with hand-coded heuristics.
[ "Deep learning", "Applications", "Games" ]
https://openreview.net/pdf?id=SJ5JPtg_x
https://openreview.net/forum?id=SJ5JPtg_x
ICLR.cc/2017/workshop
2017
{ "note_id": [ "HkaWOtajx", "H11KgGOil", "rkMFyG_ol", "rJeIZWrox", "S1t4rdlol" ], "note_type": [ "comment", "comment", "comment", "official_review", "official_review" ], "note_created": [ 1490028549408, 1489670262772, 1489670010205, 1489469768523, 1489171761089 ], "note_signatures": [ [ "ICLR.cc/2017/pcs" ], [ "~Marwin_Segler1" ], [ "~Marwin_Segler1" ], [ "ICLR.cc/2017/workshop/paper3/AnonReviewer1" ], [ "ICLR.cc/2017/workshop/paper3/AnonReviewer2" ] ], "structured_content_str": [ "{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}", "{\"title\": \"Reply\", \"comment\": \"Thanks for your review and your comments!\\nAs your concerns are in line with the other reviewer, we hope that you do not mind that we will keep this reply short. Please have a look at our other reply as well.\\n\\nWe have provided more details about the policy network in the text, and highlighted our previous publication, where the pure policy net is described in all details [Segler, Waller, Chem. Eur. J, (2017) DOI: 10.1002/chem.201605499 ]. In short, the molecules get encoded as ECFP4 fingerprints, which are then fed into a 5 layer Highway network, which in turn predicts the probability of the graph transformations.\\n\\nWe have also adapted the description of the training data in the manuscript. Our training data are 5.5 million published organic (and many inorganic and organometallic) reactions carried out in synthetic chemistry labs, taken from the Reaxys database. It does not contain metabolic pathways. \\n\\nIn the upcoming full paper we are investigating in detail where MCTS and BFS differ in performance. Initial empirical evidence suggests that for more complex molecules (with a much larger tree), MCTS finds solutions more often. Our random test set consists of molecules that have not been described in the literature before \\u2013 they were generated by sampling from a charRNN [1] trained on drug-like molecules. Probably the best way for evaluation would be a time-split approach: Train only on data published before year X, then evaluate on data published after year X.\\n\\nWe are also looking forward to discussions at ICLR, as there are several papers in both the main and the workshop track that address some of our remaining issues!\\n\\n[1] A. Graves, https://arxiv.org/abs/1308.0850\"}", "{\"title\": \"Reply\", \"comment\": \"Thank you for your considerate review and questions!\\n\\nWe are happy to address your questions and concerns. \\nIn short, we represent molecules as Extended Connectivity Fingerprints (ECFP4), which are fed into a Highway Network with 5 layers and ELU nonlinearities. Due to the restricted space in the extended abstract, referred to our previous publication, which already describes the full details about the training data, molecular descriptors, the policy networks, and their architectures [Segler, Waller, Chem. Eur. J, (2017) DOI: 10.1002/chem.201605499] We have clarified this in the manuscript.\\n\\nYou are right about chemical reaction data, they are indeed hard to come by. There is unfortunately no \\u201creaction MNIST\\u201d. The reaction dataset we use stems from the Reaxys database, which is one of the most widely used databases by practicing organic chemists. The statement that it contains \\u201cthe complete published knowledge of chemistry\\u201d may seem bold if one considers the huge challenge of constructing common sense knowledge bases, e.g. for natural language understanding. 
However, organic chemical reactions are a narrow domain and straightforward to represent computationally as graph transformations. There are only a few million reactions published in the chemical literature. Therefore, it is actually feasible to construct a knowledge base that covers almost every reaction ever published. Nevertheless, we have removed the statement from the text to avoid confusion.\\n\\nThe best moves for a particular molecule are the reported reactions that have been used to make it. This is completely analogous to AlphaGo, where, given a board position, the actually played moves are predicted to be the best. It is possible to have different winning moves/reactions for the same position/molecule both for Go and for us. This is possibly one reason why neither AlphaGo nor our system reaches a very high accuracy.\\n\\n\\nThe improvement of MCTS is more a philosophical one, as it allows estimating the \\u201cvalue\\u201d of a molecule, which is its synthesizability (not all reasonable-looking molecules can be made, which is a huge problem for computer-aided drug design). BFS by definition finds only one solution, and then stops. MCTS on the other hand usually finds several solutions, and will backpropagate rewards from different branches to lower game tree nodes. Even though MCTS eventually converges to one optimal branch, it is desirable to find not just one, but several alternative and robust routes, and MCTS more readily allows for that.\\n\\nIs MCTS thus less useful for retrosynthesis than in Go? To some degree yes! Empirically, we found that BFS is usually on par with MCTS on smaller, \\u201ceasier\\u201d molecules, while MCTS usually finds better solutions for larger molecules (and thus larger search trees), where BFS sometimes struggles. We are currently investigating this in detail for the full paper. We would say what\\u2019s most useful about our findings is that the DNN policy (or DNN cost function for BFS) outperforms the hand-annotated heuristics regardless of the search method used.\\n\\nBFS is faster because it does less calculation, due to the lack of the rollout phase. This is also what (probably) makes MCTS stronger. The two main bottlenecks in our system are the neural network and the application of the transformations; the tree search itself is much faster than these two steps. Restricting wall-clock time and simply sampling from the policy network are indeed interesting experiments, which we will carry out for the full paper.\"}", "{\"title\": \"Pruning search complexity with a policy network in retrosynthesis search\", \"rating\": \"7: Good paper, accept\", \"review\": \"This paper gives an alternative to best-first search (\\\"BFS\\\") plus heuristics for planning molecule synthesis. It does so by training a policy network on ECFP4 (string encoding) representations of molecules, to predict which (sub)molecule to apply, with rewards +1 when the molecule to synthesize is complete, -1 if this branch of applications is complete and the synthesis failed, 0 otherwise. As far as I understood, it is trained in _retro_synthesis (decompose the molecule). This policy network is coupled to a search method (BFS or MCTS) both to reduce the branching factor, and to aggressively prune the validation of chemical rules (graph isomorphism).\\n\\nThe details about the policy model are almost nonexistent. The run-time of MCTS seems more than twice as long as that of BFS, so, barring an explanation, it feels like those should be compared in wall-clock time. 
The \\\"random\\\" test set selection may be problematic to ensure that the whole model is working properly: you may want to test in generalization, on end-result (==start in retrosynthesis) molecules that are unknown to the policy network.\\n\\n\\\"essentially the complete published knowledge of chemistry\\\":\\n1) Paywall :(\\n2) maybe you are talking about a specific subset of organic chemistry? Even then, I doubt this includes all metabolic pathways and/or protein foldings.\\n\\nThe rest of the paper seems reasonable.\\n \\nThis paper seems good enough for a workshop at ICLR, and it could spawn interesting discussions.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Interesting application of reinforcement learning\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"Chemical synthesis planning is a long-standing and important problem. While not the first to propose machine learning to model an 'expert chemist' for chemical synthesis, the particular reinforcement learning approach with MCTS in this work is novel as far as I know. However, I have a couple questions and concerns.\\n\\nPrimarily, not enough information is given about the training data or the policy network trained from it. In general, good chemistry data is hard to obtain; information tends to be scattered, in different formats, and of varying quality. So I'm skeptical about the claim that the data set contains \\\"essentially the complete published knowledge of chemistry.\\\" What are these reactions, how are they represented, and how do you determine the _best_ move for a particular molecule when training? For the policy network, what is the network architecture? These may have been published elsewhere, but more details should be provided here, especially for a machine learning audience.\\n\\nI don't completely understand the performance improvement due to the MCTS. It seems less useful in this application than in Go, because here whenever you reach a reward of Q(v)=1, the problem is solved and you stop. In the experiments, MCTS shows an improvement over BFS+NN, which is definitely interesting, but this doesn't account for the fact that BFS is more than twice as fast (due to searching the tree in order?). If the constraint was wall-clock time, would MCTS still outperform BFS? I also wondered whether the MCTS policy described in Equation 1 performs better than simply sampling from the policy network output.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
rJj2ZxHtl
Learning Algorithms for Active Learning
[ "Philip Bachman", "Alessandro Sordoni", "Adam Trischler" ]
We present a model that learns active learning algorithms via metalearning. For each metatask, our model jointly learns: a data representation, an item selection heuristic, and a one-shot classifier. Our model uses the item selection heuristic to construct a labeled support set for the one-shot classifier. Using metatasks based on the Omniglot and MovieLens datasets, we show that our model performs well in synthetic and practical settings.
[ "Deep learning", "Supervised Learning" ]
https://openreview.net/pdf?id=rJj2ZxHtl
https://openreview.net/forum?id=rJj2ZxHtl
ICLR.cc/2017/workshop
2017
{ "note_id": [ "BJDET5Esx", "SJgHHxZoe", "SJeCU269l", "rJeNq3_jl", "Sy5wdtaig", "r1yZr4Lix", "H1_kgO1se", "SkIyJsesx" ], "note_type": [ "official_review", "comment", "official_review", "comment", "comment", "comment", "comment", "official_comment" ], "note_created": [ 1489444143422, 1489204536258, 1488991944022, 1489713703914, 1490028641577, 1489548535288, 1489104864449, 1489182429732 ], "note_signatures": [ [ "ICLR.cc/2017/workshop/paper161/AnonReviewer2" ], [ "~Philip_Bachman1" ], [ "ICLR.cc/2017/workshop/paper161/AnonReviewer1" ], [ "~Philip_Bachman1" ], [ "ICLR.cc/2017/pcs" ], [ "~Philip_Bachman1" ], [ "~Philip_Bachman1" ], [ "ICLR.cc/2017/workshop/paper161/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"review\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"The paper presents an architecture to learn an active learning procedure that can be applied to different tasks of the same domain. An example practical usage is preference elicitation for recommendation, in which one wants to learn the series of questions to ask to users to efficiently collect ratings for recommendation. The algorithm is based on the idea that a \\\"question\\\" is a rating/label of an item in a pre-specified support set. The support set depends on the task at hand. The overall algorithm is trained using policy gradient, and some experiments on Omniglot and MovieLens show the algorithm performs better than a few baselines.\\n\\nThe idea makes sense and the problem is interesting since there is no clear solution to the \\\"cold-start\\\" problem in the literature. The work is still fairly preliminary (for instance, a simple baseline would be to learn the representations offline using standard supervised learning and apply an existing active learning algorithm at test time), but it may be sufficient for acceptance to the workshop.\", \"remarks\": [\"It seems that the MovieLens experiments do not correspond to a realistic scenario and I am not sure of the conclusion of the experiments. The support set in these experiments contains only movies that were rated by the user (without the ratings). It may seem to make sense in lab experiments because we cannot ask the user to rate additional movies. In practice however, knowing what kind of movies the user rates is already informative of the user interests, even without the ratings (see e.g., Marlin et al. \\\"Collaborative Prediction and Ranking with Non-Random Missing Data\\\": people tend to rate items they like). Thus, the model in the experiments has access to more information that what would be available in practice. A more realistic evaluation would be to take the entire set of movies as support set, but to ignore the query if it is not a movie that the user rated.\", \"Related question: what is the bi-LSTM useful for in the encoder of the support set? I suppose this is used to encode individual items in the context of the whole support set. But once again, in the MovieLens/cold-start recommendation setting, I am not sure that it can really be used in practice.\", \"In Fig. 2 (c), it seems that much of the improvement compared to the baselines comes at the first query. Do the authors have an idea of the rule found by the model to make the first selection?\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"a couple more updates\", \"comment\": \"Thanks for taking the time to read through our revisions and provide more feedback. 
Your understanding of our task/test setup for the MovieLens experiments is correct. In response to your comments regarding \\\"cold start\\\", we've updated our terminology to refer to the \\\"bootstrapping\\\" problem for recommendation systems (see, e.g. [1]). During internal discussions, we've used \\\"cold start\\\" to refer generally to both zero-shot and (very) few-shot settings (some prior work uses this sense [2]). The \\\"bootstrapping\\\" terminology is a more precise fit for our current work, and should help avoid confusion.\\n\\nIn regards to state-of-the-art results for Omniglot, we're unaware of existing published results for training on all available characters in a class. It's unclear whether you're most interested in results for discrimination among all character classes, or specifically for the 20-way k-shot setting (with k set to the max permitted by the data). The \\\"all-way\\\" discrimination problem should be quite challenging, but that's speculation on our part. More concretely, we're running tests in the 20-way and 40-way settings, using 5-shot and 10-shot learning. Here, the 20-way problem starts hitting diminishing returns. At similar points in training, performance differs only slightly between the 5 and 10-shot setting, despite doubling the labeled support set.\", \"results_for_a_class_balanced_mn_are_as_follows\": \"- 20-way, 5-shot MN : 98.4\\n- 20-way, 10-shot MN : 98.6\\n- 40-way, 5-shot MN : 96.7\\n- 40-way, 10-shot MN : 97.2\\nFor comparison, our model reaches 96.2 in the 2.5-shot setting (i.e. 50 label queries). We'll train our model for >50 label queries, and include these results for comparison (in a future update, as tests will take a while to run).\\n\\nNote that (i) our modified MN slightly outperforms the original results from Vinyals et al. (e.g. 94.3 vs 93.8 in the 20-way, 1-shot setting), and (ii) the recent MetaNetwork model in [3] beats the above results in the 20-way, 1-shot setting (at 97.0). Perhaps adapting our approach to work with the MetaNetwork architecture could improve our results.\\n\\nIn future work, we plan to scale our method to larger support sets, more classes, and more label queries. We also plan to further investigate the potential of imitation learning from classic active learning algorithms (for a sort of \\\"algorithm distillation\\\"), and to investigate a broader set of real-world application scenarios. 
The algorithm distillation perspective is particularly interesting to us, as it represents an extension of the notion of \\\"learning algorithms by example\\\" to settings in which existing hand-coded algorithms are often suboptimal, and non-trivial to design.\\n\\n[1] http://dl.acm.org/citation.cfm?id=1871734\\n[2] http://dl.acm.org/citation.cfm?id=2433451\\n[3] http://arxiv.org/abs/1703.00837\"}", "{\"title\": \"not enough details?\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"The topic of this paper is to present a novel active learning mechanism\\nbased on reinforcement learning and one-shot learning, where active learning\\nis seen as a sequential decision process of selecting one example at a time\\nto be labeled and update the model accordingly to better select the next\\nexample.\\n\\nI did not quite understand the motivation example of the cold start problem\", \"in_recommendation_tasks\": \"it's only \\\"cold\\\" for the first movie, not afterward,\\nso how does it differ from normal recommendation approaches?\\n\\nMore importantly, I did not quite understand the precise proposed model:\\nthe figure helped but was not enough, and the text itself said \\\"a detailed\\ndescription... is beyond the scope of this extended abstract\\\". So the only\\nthing I got was the loss function and a very rough idea of the overall process.\\nHow does it differ from other active learning approaches for instance?\\n\\nFinally, experiments look good but are not compared to any other active\\nlearning approaches (apart from random selection which is rather naive).\\n\\nI understand this is a workshop submission and it's only 3 pages but if\\nI don't understand the main idea and the results, it's hard for me to give\\na good score.\\n\\n*improving score after revision.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Result Update\", \"comment\": \"We've updated our submission with better results on Omniglot. For the 20-way setting with 50 total label queries, we first train the model to query 20 labels, and then fine-tune the model for 50 queries. Both phases of training optimize a reward which measures improvement in prediction accuracy. While fine-tuning, we add an auxiliary reward which encourages a class-balanced selection policy.\"}", "{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}", "{\"title\": \"review response\", \"comment\": \"Thanks for the helpful suggestions. We've added some extra active learning baselines in the MovieLens setting (see updated Fig. 2(c)). We add a Gaussian Process baseline, which selects the next movie to label in proportion to the variance of the predictive posterior distribution over its rating, and an Entropy Sampling baseline, which trains a classifier on the ratings and selects movies in proportion to the prediction entropy. The base classifier for Entropy Sampling was our end-to-end model, with the selection module swapped for the ES heuristic. These baselines implement the standard \\\"uncertainty sampling\\\" heuristic for two different prediction models.\\n\\nNote that the movie embeddings used by our model and all baselines were pretrained using a standard matrix factorization approach, so our baselines do in fact \\\"learn the representations offline using standard supervised learning and apply an existing active learning algorithm at test time\\\". 
We forgot to mention this point in earlier revisions, but it's included now.\\n\\nIt's true that our MovieLens task is not totally realistic. However, we found that this form of proxy task is common in the active learning and metalearning literature. Any approach to dealing with missing data will imply its own particular assumptions, and we've opted for the simplest path. Note that biases from our restriction to movies with known ratings will affect both our model and the baselines.\\n\\nYour intuition about the biLSTM is correct. Reducing constraints imposed by our modeling assumptions, to extend our model to more realistic settings, would make an ideal topic for future work.\"}", "{\"title\": \"response to review\", \"comment\": \"Thanks for the feedback. We've uploaded a re-written draft that hopefully clarifies our motivations and the structure of our model. We agree that these were unclear in our initial submission. We'd appreciate if you read through our changes and provide additional feedback.\\n\\nMost notably, we supplement Figure 1 with an algorithmic description of our model and we more clearly distinguish our model from existing active learning methods. Briefly, in contrast to existing methods, our model learns its data representation, strategy for selecting items to label, and prediction function jointly, end-to-end. Prior methods largely rely on hand-crafted selection strategies, and scaling them to high-dimensional data remains a significant problem [1] that has previously been addressed with kernel or graph-based approaches rather than end-to-end learning [2, 3].\\n\\nWith respect to baselines, in the MovieLens setting we compare to a Popular-Entropy policy that has been shown to work well in the cold-start recommendation literature, as well as a Min-Max-Cosine policy of our own design. While the Min-Max-Cosine policy performs well on Omniglot (we provide additional Omniglot results in a full write-up of this work), it performs poorly on MovieLens. This suggests there may be some value in learning task-adapted active learning models end-to-end.\\n\\n[1] Yarin Gal, Riashat Islam, and Zoubin Ghahramani. \\\"Deep Bayesian Active Learning with Image Data,\\\" arXiv, 2017.\\n[2] Alex Holub, Pietro Perona, and Michael C Burl. \\\"Entropy based active learning for object recognition,\\\" In Computer Vision and Pattern Recognition Workshops, 2008.\\n[3] Ajay J Joshi, Fatih Porikli, and Nikolaos Papanikolopoulos. \\\"Multi-class active learning for image classification,\\\" In Computer Vision and Pattern Recognition, 2009.\"}", "{\"title\": \"improving\", \"comment\": \"Thanks for quickly improving the document and answering my concerns.\\nThe document is now more clearly explaining the algorithm. It is much\\nimproved indeed.\\nRegarding the \\\"cold start\\\" problem, can you confirm what I think I now\", \"understood_of_the_setting\": \"for each user, you have access to potentially\\n50 ratings, but your algorithm selects one of them to ask for a label,\\nand with it makes 10 predictions; subsequently it selects a second rating,\\nand with both revisit its 10 predictions, etc, and that's what we observe\\nin fig 2c. Results are good, but I'm not sure this is the real definition\\nof a \\\"cold start\\\" since you do require the user to rate (even 1 movie) before\\nyou make any recommendation... I see it as a good result of \\\"low shot\\\"\\nbut not really \\\"cold start\\\", no? Also, what are the actual state-of-the-art\\non Omniglot if you had access to the full data? 
I suppose much better, but\\nit might be good as a reference point (to be precise: how many \\\"requests\\\"\\nare needed to reach the full performance?)\\nI'm improving my score accordingly (but it can still be improved!)\"}" ] }
B1_E8xrKe
The Effectiveness of Transfer Learning in Electronic Health Records Data
[ "Sebastien Dubois", "Nathanael Romano", "Kenneth Jung", "Nigam Shah", "and David C. Kale" ]
The application of machine learning to clinical data from Electronic Health Records is limited by the scarcity of meaningful labels. Here we present initial results on the application of transfer learning to this problem. We explore the transfer of knowledge from source tasks, in which training labels are plentiful but of limited clinical value, to more meaningful target tasks that have few labels.
[ "electronic health records", "effectiveness", "transfer learning", "application", "transfer", "machine learning", "clinical data", "scarcity", "meaningful labels", "initial results" ]
https://openreview.net/pdf?id=B1_E8xrKe
https://openreview.net/forum?id=B1_E8xrKe
ICLR.cc/2017/workshop
2017
{ "note_id": [ "Hy8kkHVox", "B1TwOtasl", "BkDcqvQsx" ], "note_type": [ "official_review", "comment", "official_review" ], "note_created": [ 1489419998139, 1490028645516, 1489365647507 ], "note_signatures": [ [ "ICLR.cc/2017/workshop/paper166/AnonReviewer1" ], [ "ICLR.cc/2017/pcs" ], [ "ICLR.cc/2017/workshop/paper166/AnonReviewer2" ] ], "structured_content_str": [ "{\"title\": \"Important Problem but Incomplete Evaluation\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"This paper addresses the important problem of learning good representations for clinical medical data. The authors propose learning representations from less important tasks with abundant labels and using these to train better models for more important tasks. The exposition needs to be improved. The reader shouldn't have to read multiple pages to find out what the source task and target task are.\\n\\nThe authors compare the transfer learning neural network method to an L1-regularized logistic regression baseline. The presented approach uses neural networks and transfer learning. The empirical work is incomplete. As it stands, I don't see how the reader can tell how much of the improvement owes to transfer learning and how much owes to the use of neural networks.\", \"pros\": \"Important problems (learning representations of clinical time series, modeling disease progression/temporal phenotyping)\\nDecent motivation\", \"cons\": \"Incomplete evaluation - reader cannot make a useful conclusion \\nNo methodological contribution\\n\\nI would have liked to see this paper either provide a solid empirical result (e.g. some actionable insight, as might have been achieved with more thorough experiments). For a workshop, it's not necessary for a contribution to be enormous, but the conclusions (whatever they are) should clearly follow from the presented work. This work in progress doesn't yet meet the burden of proof.\", \"side_q\": \"How did you end up choosing 746 & 936 hidden nodes - did you grid search over every possible setting of nodes? \\nDid you verify that rx2med and med2rx add anything to each other?\\n\\nTypos / nits:\\n* \\\"they train use an autoencoder\\\" \\n* What is a \\\"semi-automated deterministic labeling function\\\"?\\n* \\\"to predict a source task with ubiquitous labels that\\\" - you predict the values of targets, you don't predict a task\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}", "{\"title\": \"Review\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"This paper investigates employing transfer of learnt features on electronic health records, where labels for certain diagnosis might be sparse or scarcely available. Transfer learning is proposed as a way for leverage the data available from different tasks/label classes to infer a target class: in this case, predicting phenotypes.\", \"pro\": \"Real & important problem in practice. Exposes a new application domain.\", \"cons\": \"1) Unclear to me: \\\"our phenotype labels are derived from diagnosis codes, which are included in our input\\\". Are the diagnosis codes given as input to the network? 
If so, \\\" we approximate ground truth phenotype labels using diagnostic code categories based on the HCUP Clinical Classification Software\\\" -- this suggests that the phenotypes are a direct mapping from the diagnostic code, so any other information is ignored?\\n2) Since it's not a widely used dataset in this community, the paper could benefit from the description how the input data looks like. (Details like: dimensionality of input/output) \\n3) It's not clear to me that the slight boost in performance is due to the split-brain architecture or just due to the fact that the representation (2 layers + logistic regression) is richer than the baseline (logistic regression). \\n4) Given the nature of the data, a more pertinent comparison probably would have been multi-task methods. \\n5) Task description and experimental setup need to be more clear.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
SkkC41HYl
De novo drug design with deep generative models : an empirical study
[ "Mehdi cherti", "Balazs Kegl", "Akin kazakci" ]
We present an empirical study of the use of RNN generative models for stochastic optimization in the context of de novo drug design. We study different kinds of architectures and find models that can generate molecules with higher values than those seen in the training set. Our results suggest that traditional stochastic optimizers, which rely on random perturbations or random sampling, can be improved by using generative models trained on unlabeled data to perform knowledge-driven optimization.
[ "Unsupervised Learning", "Optimization" ]
https://openreview.net/pdf?id=SkkC41HYl
https://openreview.net/forum?id=SkkC41HYl
ICLR.cc/2017/workshop
2017
{ "note_id": [ "ByL8OYasg", "Hkex73xox", "ByuO1Ygjx", "Bk4JKSgsg", "rJ7kIm1og" ], "note_type": [ "comment", "official_review", "official_comment", "comment", "official_review" ], "note_created": [ 1490028621835, 1489187560303, 1489174383863, 1489160412168, 1489085914850 ], "note_signatures": [ [ "ICLR.cc/2017/pcs" ], [ "ICLR.cc/2017/workshop/paper135/AnonReviewer2" ], [ "ICLR.cc/2017/workshop/paper135/AnonReviewer1" ], [ "~mehdi_cherti1" ], [ "ICLR.cc/2017/workshop/paper135/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}", "{\"title\": \"interesting experimental exploration, but limited contribution\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"The paper proposes using auto-encoder models to learn molecular representations and generate novel molecules. The inputs to the autoencoders are the smiles strings representing the molecular graphs. The paper explores various autoencoder architectures with convolutional/RNN encoders and RNN decoders, and two variants of RNNs: sequence to sequence and character-level. The networks are trained using the denoising criterion, as opposed to using the variational free energies as in Gomez-Bombarelli et al (2016b).\\n\\nThe experiment involves sweeping over many hyperparameters and scoring the generated molecules based on the metric used in Gomez-Bombarelli et al (2016a). I found the conclusion from the experiment is misleading, or perhaps less promising than what the authors state:\\n\\ni. as with other approaches in the literature, the generated molecules are *very often* invalid, and this work provides no attempt to address or quantify the scale of this problem. The trained model is probably not very useful if the selection operators have to reject most of the generated molecules, whether some of them might have higher scores. \\n\\nii. it's not clear from the comparison what is the major factor contributing to more high-score molecules: whether it's the training objective, the RNN/CNN or just luck in hyperparameter tuning? Do you reckon the variational approach of Gomez-Bombarelli et al can achieve similar results if similar encoders/decoders are used?\\n\\nOverall, this work provides an exploration of using RNN/CNN autoencoder architectures in modelling and generating molecules. The result is somewhat interesting but lacks justification or theoretical developments to be considered ground-/late-breaking.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"I have updated my review.\", \"comment\": \"Thank you for the quick clarification.\"}", "{\"title\": \"Answer\", \"comment\": \"We thank the reviewer for helpful comments. We updated the paper\\naccordingly by giving more details about the objective function we used.\\nRegarding your question about how we calculated LogP for our\", \"generative_models\": \"LogP is _not_ log likelihood,\\nit is rather the \\\"partition coefficient\\\", which is a common\\nmeasure of drug-likeness used in molecular informatics \\nliterature. See https://goo.gl/SmXbz0 for more details.\\nHence, it is perfectly fine to compare different models on different\\nsamples.\"}", "{\"title\": \"Good work, but somewhat incremental\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"This paper presents an approach to generating novel small molecule structures (SMILES representations) with drug-like properties. 
Rather than generating random molecules using a set of hand-crafted rules (the space of possible small molecules is astronomical), the authors aim to learn common characteristics of drug-like molecules from a training set of known molecules. This is a reasonable approach to an important problem.\\n\\nPrevious work has proposed variational autoencoders for this application. Here, the authors explore other generative models. I think the proposed generative methods are a little messy, but not unreasonable.\\n\\nGenerated samples (molecules) are evaluated based on their drug-likeness. This is analogous to how realistic a generated image appears to a human observer, in that this objective is not explicitly optimized by the model. In experiments, the drug-likeness scores of the top candidates from the proposed approach are higher than those of the top candidates from the variational autoencoder. However, this improvement seems somewhat incremental; there is little justification for why this approach should be better, and surely some of the improvement is due to parameter tuning. \\n\\nAnother important evaluation metric is the diversity of the samples that are produced --- after all, the goal is to propose new molecule structures. However, this is difficult to measure. The authors show that the drug-likeness distribution of their samples has higher variance than the baseline, but this again is likely a result of tuning parameters.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
ByH2gxrKl
Accelerating Eulerian Fluid Simulation With Convolutional Networks
[ "Jonathan Tompson", "Kristofer Schlachter", "Pablo Sprechmann", "Ken Perlin" ]
Efficient simulation of the Navier-Stokes equations for fluid flow is a long standing problem in applied mathematics, for which state-of-the-art methods require large compute resources. In this work, we propose a data-driven approach that leverages the approximation power of deep-learning with the precision of standard solvers to obtain fast and highly realistic simulations. Our method solves the incompressible Euler equations using the standard operator splitting method, in which a large linear system with many free-parameters must be solved. We use a Convolutional Network with a highly tailored architecture, trained using a novel unsupervised learning framework to solve the linear system. We present real-time 2D and 3D simulations that outperform recently proposed data-driven methods; the obtained results are realistic and show good generalization properties.
[ "Deep learning", "Applications" ]
https://openreview.net/pdf?id=ByH2gxrKl
https://openreview.net/forum?id=ByH2gxrKl
ICLR.cc/2017/workshop
2017
{ "note_id": [ "BJIv_KTix", "B17Hb4Isg", "H1EqNZDoe", "Skw9HWDog", "H1H0X5lse" ], "note_type": [ "comment", "official_review", "comment", "comment", "official_review" ], "note_created": [ 1490028637563, 1489547579472, 1489601676141, 1489601935488, 1489179596870 ], "note_signatures": [ [ "ICLR.cc/2017/pcs" ], [ "ICLR.cc/2017/workshop/paper156/AnonReviewer2" ], [ "~Jonathan_Tompson1" ], [ "~Jonathan_Tompson1" ], [ "ICLR.cc/2017/workshop/paper156/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}", "{\"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"This paper proposes a specially tailored convolutional neural network for the efficient simulation of incompressible fluid flow, achieving significant speedup compared to classic simulator and more realistic outputs compared to other data-driven approach. I think the use of objective function that \\u201cemphasizes the divergence of voxels on geometry boundaries\\u201d deserves more explanation and investigation. In addition, while the qualitative comparison of different methods in the appendix is intuitive and convincing, I think it will be very helpful to develop some quantitative evaluation metrics based on fluid dynamics theories.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Clarifications\", \"comment\": \"Thank you so much for your detailed review. It's certainly encouraging that other members of the community find our evaluation results compelling. To answer your specific questions:\", \"question\": \"\\\"I think the use of objective function that \\u201cemphasizes the divergence of voxels on geometry boundaries\\u201d deserves more explanation and investigation.\\\"\\n\\nUnfortunately, we could not cover this in detail due to space limitations of the workshop format. However, we appreciate that the lack of detail here might lead to confusion and so we will add more detail to the next paper revision.\\n\\nThe use of this weighting function is in essence similar to class waiting to counter-act skewed label distributions when training classifiers. That is, the fluid-geometry boundary voxels represent a small fraction of the simulation domain, and so the divergence error of these voxels will be de-emphasized by (or a small fraction of) our total MSE loss. We therefore increase the linear weighting of these voxels to account for their low frequency in our training data. Additionally, large errors on slip-condition boundaries result in noticeable visual artifacts, while non-zero divergence residual for free-space voxels tend to result in either additional turbulence or numerical dissipation; both of which look like natural phenomena.\"}", "{\"title\": \"Clarifications\", \"comment\": \"Thank you for your review. We are grateful that you find our application of deep-learning to this problem domain to be interesting, and we hope the larger ICLR community will also find it novel. To answer your specific questions:\", \"question\": \"\\\"I did not understand what is hybrid about the proposed method? This is mentioned in Section 1, Page 2, last paragraph.\\\"\\n\\nWhat we mean is that we do not address the problem as an end-to-end system. We include a learned module into the exact solver, hence hybrid. However, we believe that it is very interesting to explore the use of end-to-end models in this problem. 
The following text is extracted from the full version of this paper, available on arxiv:\\n\\n\\\"Why not to use a ConvNet to learn an end-to-end mapping that predicts the velocity field at each time-step? The chaotic change of velocity between frames is highly unstable and easily affected by external forces and other factors. We argue that our proposed hybrid approach restricts the learning task to a stable projection step relieving the need of modeling the well understood advection and external body forces. The proposed method takes advantage of the understanding and modeling power of classic approaches, supporting enhancing tools such as vorticity confinement. Having said this, end-to-end models are conceptually simpler and, combined with adversarial training [Goodfellow et al. 2014], have shown promising results for difficult tasks such as video prediction [Mathieu et al. 2015]. In that setting, fluid simulation is a very challenging test case and our proposed method represents an important baseline in terms of both accuracy and speed.\\\"\"}", "{\"title\": \"This paper provides an efficient method for approximating the solution of the Euler Equations, which has applications in 2D and 3D simulation of fluids.\", \"rating\": \"7: Good paper, accept\", \"review\": \"I am not an expert on this domain, but the problem tackled sounds interesting and the presentation of the proposed method is clear. The presented results show good speedups compared to the baseline.\\nI am curious to know what the approximation effect on the final simulation is? In particular, how much does this approximation affect the final simulation compared to when an exact solver or other approximation techniques are used? Is there a metric that measures this approximation effect?\\nI did not understand what is hybrid about the proposed method? This is mentioned in Section 1, Page 2, last paragraph.\", \"confidence\": \"1: The reviewer's evaluation is an educated guess\"}" ] }
HkpbnufYe
Style Transfer Generative Adversarial Networks: Learning to Play Chess Differently
[ "Muthuraman Chidambaram", "Yanjun Qi" ]
The idea of style transfer has largely only been explored in image-based tasks, which we attribute in part to the specific nature of loss functions used for style transfer. We propose a general formulation of style transfer as an extension of generative adversarial networks, by using a discriminator to regularize a generator with an otherwise separate loss function. We apply our approach to the task of learning to play chess in the style of a specific player, and present empirical evidence for the viability of our approach.
[ "Deep learning" ]
https://openreview.net/pdf?id=HkpbnufYe
https://openreview.net/forum?id=HkpbnufYe
ICLR.cc/2017/workshop
2017
{ "note_id": [ "HJ4l2f-ox", "B1HvH1rje", "HJ0OukHig", "SyozutTix", "BJAEVmNsl" ], "note_type": [ "official_review", "comment", "comment", "comment", "official_review" ], "note_created": [ 1489214444164, 1489462621433, 1489463413717, 1490028563025, 1489413174461 ], "note_signatures": [ [ "ICLR.cc/2017/workshop/paper34/AnonReviewer1" ], [ "~Muthuraman_Chidambaram1" ], [ "~Muthuraman_Chidambaram1" ], [ "ICLR.cc/2017/pcs" ], [ "ICLR.cc/2017/workshop/paper34/AnonReviewer2" ] ], "structured_content_str": [ "{\"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"This paper is trying to use GAN to model chess. The authors claim to use the discriminator to regularize the generator, which is doing \\u201cstyle transfer\\u201d.\\n\\nHowever, as chess is a problem which is solved by traditional search. In this paper I can\\u2019t find evidence that this generative model is better than search. I can guess the speed is better than search, but seems no evidence overall it is better.\\n\\nFor \\u201cstyle\\u201d, there is an explanation of style transfer is minimizing Maximum Mean Discrepancy [1]. In this paper, I can\\u2019t find any relation of the method is related to MMD style.\\n\\n[1] https://arxiv.org/abs/1701.01036\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Evaluation Response\", \"comment\": \"Q1: \\u201cwe do not see an evaluation on different players and overall performance, instead just one configuration.\\u201d\", \"a1\": [\"Thank you for your feedback. Due to the space limit, we left out experiments concerning other players from the results section. These experiments have been added to an appendix which is present in the revised paper.\", \"We have tested the STGAN method on 2 master players with notably unique styles: Mikhail Tal and Mikhail Chigorin. Sequences of moves generated by models trained using the latter player's data are shown in the appendix.\", \"We are interpreting your comment concerning overall performance to mean the proficiency of our trained models in comparison to other established AIs for chess. We did not have our trained models compete with other AIs, as the goal of our research was to produce an AI that could be used for pedagogical as opposed to competitive purposes. This is made more clear in the appendix of the revised paper.\"], \"q2\": \"\\u201cEvaluation of style is extremely hard even in this domain. ... the style in game playing is not about a single move, it is about a sequence of moves. A creative solution to this evaluation problem would definitely increase the quality of this work.\\u201d\", \"a2\": [\"We completely agree that evaluation of style is extremely difficult.\", \"Our evaluation on target/predicted move was done only after the sequence of moves (played up to that configuration) had been predicted. We did not make this clear in the first version of the paper, so we greatly appreciate you bringing this to our attention. This has been made more clear in the revised paper.\", \"With regards to a creative solution to the style evaluation problem, we chose to go with comparing final positions after a sequence of generated moves, specifically within chess openings. Our focus on opening sequences is due to there being well established preferences/styles for chess openings, which then lend themselves to more clear comparisons. 
We realize that for midgame sequences it is much more difficult to compare styles, and we note that this could be an area of future research.\"]}", "{\"title\": \"Review Response\", \"comment\": \"Q1: \\u201cHowever, as chess is a problem which is solved by traditional search. In this paper I can\\u2019t find evidence that this generative model is better than search. I can guess the speed is better than search, but seems no evidence overall it is better.\\u201d\", \"a1\": [\"Thank you for the valuable comment.\", \"Our generative model is designed to replace the traditional, heuristic-based evaluation functions used by search-based approaches for chess, as opposed to being an end-to-end chess-playing model. Move generation is done by performing a negamax search using our model.\", \"We agree that there is no evidence the model produced via style transfer outplays traditional AIs for chess. However, the goal of our paper was not to produce a more proficient AI, but rather to produce an AI that could be used for pedagogical purposes such as assisting players in training against different styles of play. This has been made more clear in the appendix of our revised paper, and we thank you for bringing this to our attention.\"], \"q2\": \"\\u201cFor \\u201cstyle\\u201d, there is an explanation of style transfer is minimizing Maximum Mean Discrepancy [1]. In this paper, I can\\u2019t find any relation of the method is related to MMD style.\\u201d\", \"a2\": \"Thank you for bringing the ambiguity around the term \\\"style\\\" to our attention. We consider style transfer as a form of minimizing the EM distance discussed in the WGAN paper (https://arxiv.org/pdf/1701.07875.pdf). This is different from the style transfer discussed in the work of Gatys et al. (https://arxiv.org/pdf/1508.06576.pdf) which focuses on minimizing the mean squared distance between Gram matrices. Our notion of style is discussed more in the appendix of the revised paper.\\n \\n+ With regards to minimizing MMD, the WGAN paper has a more in-depth discussion.\"}", "{\"decision\": \"Reject\", \"title\": \"ICLR committee final decision\"}", "{\"title\": \"evaluation\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"This work aims to address playing chess with a known style by merging two popular ideas, GAN and style transfer.\", \"i_see_two_issues\": \"1) we do not see an evaluation on different players and overall performance, instead just one configuration. Although the workshop format is almost like a proof-of-concept, results from a single configuration should not be enough to convince the reader that this is a viable solution/direction.\\n\\n2) Evaluation of style is extremely hard even in this domain. Evaluating on target style move vs. predicted move is a trivial way to go. However, no chess player can predict what a certain person would play in isolation. In other words, the style in game playing is not about a single move, it is about a sequence of moves. A creative solution to this evaluation problem would definitely increase the quality of this work.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
S1L-hCNtl
Generative Adversarial Learning of Markov Chains
[ "Jiaming Song", "Shengjia Zhao", "Stefano Ermon" ]
We investigate generative adversarial training methods to learn a transition operator for a Markov chain, where the goal is to match its stationary distribution to a target data distribution. We propose a novel training procedure that avoids sampling directly from the stationary distribution, while still being capable of reaching the target distribution asymptotically. The model can start from random noise, is likelihood-free, and is able to generate multiple distinct samples during a single run. Preliminary experimental results show the chain can generate high-quality samples when it approaches its stationary distribution, even with smaller architectures traditionally considered for Generative Adversarial Nets.
[ "Deep learning", "Unsupervised Learning" ]
https://openreview.net/pdf?id=S1L-hCNtl
https://openreview.net/forum?id=S1L-hCNtl
ICLR.cc/2017/workshop
2017
{ "note_id": [ "HkWDTivsg", "SJShbwXie", "B1aHOK6ig", "rkfqGpgie", "rkEypjDix" ], "note_type": [ "comment", "official_review", "comment", "official_review", "comment" ], "note_created": [ 1489644889494, 1489363372728, 1490028613161, 1489191561903, 1489644764000 ], "note_signatures": [ [ "~Jiaming_Song1" ], [ "ICLR.cc/2017/workshop/paper123/AnonReviewer2" ], [ "ICLR.cc/2017/pcs" ], [ "ICLR.cc/2017/workshop/paper123/AnonReviewer1" ], [ "~Jiaming_Song1" ] ], "structured_content_str": [ "{\"title\": \"Responses over some concerns\", \"comment\": \"Thanks for your comments! To answer some of your concerns:\", \"q\": \"What is the advantage over (ancestral sampling) GANs?\\nOur advantage over \\\"ancestral sampling\\\" GANs is that we could use a simpler architecture (such as using MLP) to iteratively approach a sample (in the MNIST case, 2 steps seems to suffice), where realistic samples cannot be obtained through ancestral sampling. \\n\\n\\n\\n=== References ===\\nWu et al. \\\"On the Quantitative Analysis of Decoder-Based Generative Models.\\\" ICLR 2017.\\nSalimans et al. \\\"Improved techniques for training gans.\\\" NIPS 2016.\", \"a\": \"Additionally, we have some preliminary results (on CelebA) where we can alter the architecture for slower mixing, producing GSN-like results. Here:\", \"https\": \"//drive.google.com/file/d/0B0LzoDno7qkJenBWZzlaWkNxdUU/view?usp=sharing\\n\\nThis suggests that it is possible to slow down mixing if we consider other architectures for the transitions.\"}", "{\"title\": \"Review\", \"rating\": \"7: Good paper, accept\", \"review\": \"Summary: This paper proposed a novel method for training Markov chains using the indistinguishability framework popularized by generative adversarial networks. Experiments on MNIST are promising. The experiments show a clear improvement of quality as the chain size increases.\", \"novelty\": \"The idea has been featured in prior work but only amongst other ideas mixed in, but this is the first paper which describes and validates this idea alone.\", \"clarity\": \"The paper is very clearly written, and the experiments are appropriate for a workshop paper.\", \"quality\": \"Neat execution of idea. However, it's not clear whether the Markov chain is actually refining the same digit or whether the improvement is just due to more computation that a large chain can perform. Is the improvement due to the new objective, or is this just a recurrent neural network? The authors need to show a chain of the sample size from 0 to 50 to prove the refinement.\", \"pros\": \"Independent atomic description and evaluation of a significant idea.\", \"cons\": \"It's not 100% clear whether it's the MCMC that works, or just recurrent computation.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}", "{\"title\": \"Simple, appealing idea but no clear improvement over std. GANs\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"The authors propose to use an adversarial objective to train a transition operator for a Markov chain such that the stationary distribution is indistinguishable from the training data. Samples are either generated by starting from some fixed distribution \\\\pi^0 and applying the operator multiple times, or by starting from a training set sample. 
The idea is simple, intuitively appealing and seems to be mathematically correct.\", \"in_the_experimental_section_the_authors_apply_their_approach_to_mnist_and_show_results_for_three_differently_parameterized_transition_operators\": \"a DCGAN-based architecture, a convolutional, and a fully connected neural network. The corresponding samples in Figs. 1-3 look decent, although not obviously better than those of other GAN-based models.\\nA surprising (and potentially disappointing?) property apparent from Figs. 1-4 is that applying the transition operator even once typically changes the digit class and style completely. It therefore seems that the model uses the current state of the chain merely as a source of randomness. It has not learned to \\u201crefine\\u201d the current state. Phrased more positively: the learned MC mixes extraordinarily fast.\", \"positive\": \"Simple idea; straightforward implementation.\\nCombines ideas from GSNs (generative stochastic networks) and adversarial training.\", \"negative\": \"No objective way to compare model quality; no clear improvement over std. GANs.\", \"might_use_the_current_state_merely_as_source_of_randomness\": \"taking 5 or 10 steps does not provide obvious improvement over taking 2 steps.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
SkvQFOmtg
The High-Dimensional Geometry of Binary Neural Networks
[ "Alexander G. Anderson", "Cory P. Berg" ]
Traditionally, researchers thought that high-precision weights were crucial for training neural networks with gradient descent. However, recent research has obtained a finer understanding of the role of precision in neural network weights. One can train a NN with binary weights and activations at train time by augmenting the weights with a high-precision continuous latent variable that accumulates small changes from stochastic gradient descent. However, there is a dearth of theoretical analysis to explain why we can effectively capture the features in our data with binary weights and activations. Our main result is that the neural networks with binary weights and activations trained using the Courbariaux, Hubara et al. (2016) method work because of the high-dimensional geometry of binary vectors. In particular, the continuous vectors that extract out features in these BNNs are well-approximated by binary vectors in the sense that dot products are approximately preserved. Compared to previous research that demonstrated the viability of such BNNs, our work explains why these BNNs work in terms of the geometry of high-dimensional binary vectors. Our theory serves as a foundation for understanding not only BNNs but networks that make use of low precision weights and activations. Furthermore, a better understanding of multilayer binary neural networks serves as a starting point for generalizing BNNs to other neural network architectures such as recurrent neural networks.
[ "Theory", "Deep learning" ]
https://openreview.net/pdf?id=SkvQFOmtg
https://openreview.net/forum?id=SkvQFOmtg
ICLR.cc/2017/workshop
2017
{ "note_id": [ "SJEc-jgig", "H1N1ArDil", "Byjqo1Ejl", "r13U_jxsl", "H1lrb2Hsx", "HyRfOKpjg", "Hy69c_lsx" ], "note_type": [ "comment", "comment", "official_comment", "official_review", "comment", "comment", "official_review" ], "note_created": [ 1489183115575, 1489620444377, 1489398674992, 1489184852374, 1489514808274, 1490028566208, 1489173141287 ], "note_signatures": [ [ "~Alexander_G_Anderson1" ], [ "~Alexander_G_Anderson1" ], [ "ICLR.cc/2017/workshop/paper43/AnonReviewer1" ], [ "ICLR.cc/2017/workshop/paper43/AnonReviewer2" ], [ "~Alexander_G_Anderson1" ], [ "ICLR.cc/2017/pcs" ], [ "ICLR.cc/2017/workshop/paper43/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Response to Official Review\", \"comment\": \"Thank you very much for your comments. We\\u2019ll definitely consider these things as we put together a full write-up of this work. To directly respond to your questions / points:\\n\\n-\\tIn the caption of the second figure, we state that the entries are chosen uniformly from the interval [-1, 1]. We didn\\u2019t contain the full calculations in the write-up due to space constraints, but the calculations are straightforward since each component of the vectors in the dot products are independent. E.g. if p(x) is the pdf of one of the components of the random vector, the p(z) = sum_{x,y} delta(z-xy) p(x)p(y) is the distribution of one of the components in the dot product of two random vectors. The normalized dot product is Z_n = (z_1+z_2+\\u2026+z_n)/n. Using the CLT, we just need the mean and variance of p(z) to get the large n behavior. Likewise, the continuous binary dot product is computed by replacing z=xy with z=x * sign(y). The angles are found by taking the inverse cosine of the normalized dot products. \\n-\\t\\n-\\tAs far as the magnitude of the dot products are concerned, in the caption of the third figure addresses this issue. After each dot product, there is a batch norm layer so any scaling constant is normalized out. For the sake of clarity, we will adjust the equations to say approximately equal up to a constant, which is subsequently normalized out. \\n-\\t\\n-\\tWhy we cannot just post-hoc binarize the weights of a network is an important question that is worth thinking about in more detail and is something that we\\u2019ve thought a lot about. We briefly comment on this on the last paragraph on page two, mentioning that the dot product relation isn\\u2019t going to be true in general, but is a consequence of the learning dynamics set up by the algorithm. Moving beyond what we wrote in the abstract, there are two methods for addressing this. First, as we show in Fig 1b, on the backward pass, we treat w_b = w_c when they are in [-1, 1] [so w_b * x = w_c * x]. We will include this point directly in an updated version of the paper. The connection to exactly how this impacts the learning dynamics hasn\\u2019t been worked out fully. Second, there are going to be many solutions that could be found that are in the form of a multilayer perceptron. Some of those are going to use more than one bit of precision per activation. However, when we use a learning algorithm that forces the activations to be one bit, this means that the network cannot achieve a solution and is forced to spread informative information to be spread across activations instead of concentrated in one activation. \\n-\\t\\n-\\tI agree that a more extensive review of previous work would be good, but was omitted due to space constraints. Rastegari et al. 
2016 (which was cited) already contains an extensive literature review. Thanks for the reference to that paper as I have not seen that particular paper. I agree that it would be an interesting experiment to study a comparison where you learn a continuous perceptron and binarize the weights. I am unsure if this will work for the reasons that you are suggesting above. This method you suggest would be different from what is currently being done in this paper, where the weights are binary in the forward pass of the network, and in the backward pass, the errors are propagated to a latent continuous variable that can accumulate small changes from stochastic gradient descent. \\n-\\t\\n-\\tThat is an interesting question to think about regarding sparsity. Perhaps this would be better treated in a framework where the weights were either -1, 0, +1. In the binary case, it isn\\u2019t clear how to discuss sparsity. \\n\\nThanks again for your feedback, and we\\u2019d be happy to answer any more questions that you have and to consider any other suggestions for new experiments that you might have. After your response, I'd be happy to update the abstract with your comments in mind.\"}", "{\"title\": \"revision submitted\", \"comment\": \"I submitted a revision to the abstract (and some additional SI) that is my response to your helpful feedback. One key point that I've added is that the high dimensionality makes it easy for the learning algorithm to approximate the informative directions with a binary vector. It is also important to note that the magnitude of the dot products is scaled out by both the batch norm (additive constant set to zero), and the subsequent binarization of the activations. While in a typical network, the network may rely on the magnitude of the dot products to code useful information, these networks are trained without that being possible, and are forced to find a solution where the magnitudes of dot products are not used.\\n\\nThe experiments that you suggest to disentangle the learning algorithm and the high dimensions will be important to think about when we take this workshop abstract submission and flesh it out into a full paper. \\n\\nIf you have other thoughts and suggestions, I'm all ears.\"}", "{\"title\": \"High dimensions or special learning dynamics?\", \"comment\": \"I still think there are some conceptual issues here.\\n\\n-The paper purports to give a general explanation for why binary weights work, based on high dimensional spaces. Eg, \\\"binary neural networks work because of the geometry of high-dimensional binary vectors.\\\" The contention is that, usually, binarizing a continuous vector will yield a vector close in angle to the original. If this is in fact the basis for good performance, then binarizing standard continuous nets should yield high performance, but it does not in general. Moreover, as discussed earlier, this only preserves the angle but not the magnitude of the resulting vector, which can be highly significant with nonlinear activation functions.\\n\\n-In light of the above, the paper and author response quickly retreat from this position to say that one particular scheme for training BNNs, which includes several features such as batch norm, manages to enforce w_binary \\\\dot x = w_continuous \\\\dot x. So on this view, it is something distinctive about the learning process which yields good performance, not the fact that the vectors are high dimensional. 
Indeed it seems quite likely that w_binary \\\\dot x = w_continuous \\\\dot x is a sufficient condition for good performance, even in low dimensions, and so it is not surprising that successful learning schemes would approximately satisfy this constraint. This is an entirely different explanation, unrelated to high dimensions.\\n\\nThe paper might benefit by being reframed from addressing BNNs in general to this specific learning scheme. And it could also be strengthened by more directly investigating the role of high dimensions vs the role of the learning scheme. Empirical investigations which separate these two effects would be very interesting.\"}", "{\"rating\": \"7: Good paper, accept\", \"review\": \"This paper attempts to explain the recent practical success of binary weight neural networks by using the relationships between angles of binarized or non-binarized vectors in high dimensional spaces. Even though the paper is short, I think the main idea presented in the paper is important and insightful, and it can lead to interesting follow-up work and would be a good contribution to the workshop track.\\n\\nI agree with all of the other reviewer's points (including the questions that the paper addresses however does not answer, which I consider to be okay for a workshop paper considering the ideas that it contributes), yet I think the paper is sufficient for acceptance. The exposition is a bit short and you can include your derivations even though you think it is straightforward, for the sake of completeness and ease of reading.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Reply to high dimensions or learning dynamics\", \"comment\": [\"Thanks for your thoughtful comments \\u2013 it is very helpful for me to get this feedback so I can continue to further this work.\", \"I agree that I should clarify that this analysis is intended to be about neural networks with binary weights and activations of the flavor of that is detailed in Coubariaux, Hubara et al 2016. I made the mistake of referring to the neural networks with binary weights and activations as BNNs, which is misleading. [As an aside, I wasn\\u2019t aware of an alternative method of training such networks from scratch until you told me about the exhaustive search method]. I\\u2019ll make the abstract more clear.\", \"Building off of what you are saying, the connection between the learning dynamics and the high dimensionality is that it is easy to satisfy w^b x = w^c x when you are in high dimensions. I\\u2019ll make this more clear.\", \"I agree that doing more experiments to separate out the effects of high dimensions and the learning would be useful for pushing this work further.\", \"Again, thanks a lot for your comments and I\\u2019ll be updating the paper (I\\u2019ll submit the revision by tomorrow at the latest).\"]}", "{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}", "{\"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"Deep networks typically rely on relatively high precision arithmetic. There are a variety of advantages to reducing this precision requirement, by coming up with high performance networks with, eg, binary weights. Recent methods have been able to achieve good performance with binary weights (though they make use of continuous weights during training). 
This paper attempts to explain this performance with the simple but potentially important observation that the angle between a continuous weight vector and its binarized version is typically much smaller than the angle between two random vectors in high dimensions.\\n\\n-The theory could be more explicitly spelled out. Suppose the weight vectors are drawn randomly from a Gaussian or uniform(-1,1) distribution. What is the resulting distribution of the angle with the binarized vector?\\n\\n-The argument for the prediction that the inner product with the binarized weights should be equal to the inner product with the continuous weights would appear to be neglecting the potential change in the norm of the weights. Eg, w^c \\dot x will only equal w^b \\dot x if the norms of w^c and w^b are similar. One possibility is that this highlights that a binarized network can only be sensitive to the angle to an input, but will struggle to be sensitive to the magnitude of an input. The magnitude can be extremely important: it may decide whether the activity lies in the saturating regime of a sigmoid, for instance. \\n\\n-It seems like a prediction of the theory is that, if the binarized copy is giving the same value as the original, you should be able to just train a standard continuous network and afterward binarize all the weights and obtain good performance. That this does not work may point to the significance of the magnitude of the weights.\\n\\n-The paper could benefit from a more extensive overview of related work. Seung, Sompolinsky, & Tishby Physical Review A 1992, for instance, work out the training and generalization error of a perceptron with binary synapses. It would be useful to contrast this solution (where they directly search in the discrete weight space), with the gen/train error achieved by first learning a continuous perceptron and then binarizing the weights, using the arguments developed here.\\n\\n-How does this interact with sparsity in the weights?\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
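The exchange above (the reviewer's request to spell out the distribution of the angle between a vector and its binarization, and the author's CLT-based reply for uniform(-1, 1) entries) can be checked numerically. The following is a small Monte Carlo sketch under those stated assumptions, not the authors' code; the dimension and trial count are arbitrary choices.

```python
import numpy as np

def angle_deg(u, v):
    """Angle between two vectors, in degrees."""
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

rng = np.random.default_rng(0)
n, trials = 4096, 1000  # arbitrary dimension and number of trials

binarized, random_pairs = [], []
for _ in range(trials):
    w = rng.uniform(-1.0, 1.0, size=n)  # continuous vector, entries ~ U(-1, 1)
    binarized.append(angle_deg(w, np.sign(w)))  # angle to its own binarization
    random_pairs.append(angle_deg(w, rng.uniform(-1.0, 1.0, size=n)))  # angle to an unrelated vector

print("w vs sign(w):   mean %.1f deg, std %.2f" % (np.mean(binarized), np.std(binarized)))
print("w vs random w': mean %.1f deg, std %.2f" % (np.mean(random_pairs), np.std(random_pairs)))
# For U(-1,1) entries the first angle concentrates near arccos(sqrt(3)/2),
# about 30 degrees, while two independent vectors are close to orthogonal
# (about 90 degrees), and both distributions tighten as n grows.
```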
By1eEXVFg
Adversarial examples for generative models
[ "Jernej Kos", "Ian Fischer", "Dawn Song" ]
We explore methods of producing adversarial examples on deep generative models such as the variational autoencoder (VAE) and the VAE-GAN. Deep learning architectures are known to be vulnerable to adversarial examples, but previous work has focused on the application of adversarial examples to classification tasks. Deep generative models have recently become popular due to their ability to model input data distributions and generate realistic examples from those distributions. We present two classes of attacks on the VAE-GAN architecture and demonstrate them against networks trained on MNIST, SVHN, and CelebA. Our first attack directly uses the VAE loss function to generate a target reconstruction image from the adversarial example. Our second attack moves beyond relying on the standard loss for computing the gradient and directly optimizes against differences in source and target latent representations. We additionally present an interesting visualization, which gives insight into how adversarial examples appear in generative models.
[ "Deep learning" ]
https://openreview.net/pdf?id=By1eEXVFg
https://openreview.net/forum?id=By1eEXVFg
ICLR.cc/2017/workshop
2017
{ "note_id": [ "Hktxjaxix", "r1iQuFaie", "ryhXtpp9e", "BJn3uHP9g" ], "note_type": [ "official_review", "comment", "comment", "official_review" ], "note_created": [ 1489193713559, 1490028579129, 1488996643925, 1488570548327 ], "note_signatures": [ [ "ICLR.cc/2017/workshop/paper68/AnonReviewer2" ], [ "ICLR.cc/2017/pcs" ], [ "~Jernej_Kos1" ], [ "ICLR.cc/2017/workshop/paper68/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Interesting exploration\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"The work explores whether one can perturb the input of a VAE (or VAE-GAN) imperceptibly in such a way that the reconstruction resembles a target sample. The paper shows two ways to do so.\\nIt is not particularly surprising that an inference network in a VAE is susceptible to adversarial examples (in fact the opposite would be more interesting). It is after all a neural network with only a bit of noise added to it, and the susceptibility of such networks to adversarial attacks was established before. But it is nevertheless a finding worth reporting.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"decision\": \"Reject\", \"title\": \"ICLR committee final decision\"}", "{\"title\": \"Updates\", \"comment\": \"Thank you for your comments!\\n\\nWe agree that the previous description of the attack was too abstract. We have now fixed it to include a specific example involving exchanging a compressed image between two parties. The new paragraph is as follows:\\n\\n===\\nSpecifically, we consider an attack where the latent representation is used as a form of compression when transmitting an image between two parties. The attacker\\u2019s goal is to convince the sender to transmit an image of the attacker\\u2019s choosing to the receiver, but the attacker has no direct control over the bytes sent between the two parties. The sender believes that the receiver will reconstruct the same image that he sees, but if the attack is successful, the receiver will in fact reconstruct an image chosen by the attacker. \\n===\\n\\nBased on your comments, we have now also improved both Figures 1 and 2 to show everything in a single place. The figures now show the original image, the adversarial examples for both methods and reconstructions of original images and adversarial examples.\"}", "{\"title\": \"Review\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"CONTRIBUTIONS\\n\\nThis paper builds upon recent work on adversarial examples -- which shows that one can maliciously craft an input to a classifier using various approaches and have the classifier recognize this input as any desired class -- and extends it to the realm of deep generative models.\", \"it_claims_two_main_contributions\": \"- It shows the existence of two attack vectors on encode/decode-type generative models and proposes a method to perform each of them.\\n- It provides a new visualization tool which gives \\\"[...] 
insight into how adversarial examples appear in generative models.\\\"\\n\\nNOVELTY, CLARITY, SIGNIFICANCE, QUALITY\\n\\nThe idea of the existence of adversarial examples in generative models is new to me, and I think it is very relevant to the field.\\n\\nI find the definition of adversarial examples for generative models proposed in the introduction confusing.\\n\\nThe sentence \\\"Specifically, if the person doing the encoding step is separated from the person doing the decoding step, the attacker may be able to cause the encoding party to believe they have encoded a particular message for the decoding party, but in reality they have encoded a different message of the attacker\\u2019s choice.\\\" is vague and does not give a concrete picture of what attack scenario the authors have in mind.\\n\\nSimilarly, the sentence \\\"Our results show that these attack methods are effective and VAE and VAE-GAN can be easily fooled.\\\" does not explain what exactly VAEs and VAE-GANs would be fooled into doing.\\n\\nFortunately, the \\\"Methods\\\" section offers clarification: in the L_VAE case we are looking for a perturbation of a source input such that its reconstruction matches that of a target input, whereas in the latent attack case we are looking for a perturbation of a source input such that its latent representation is close to that of a target input. In both cases we want to keep the adversarial input as close to the source input as possible.\\n\\nI find Figures 1 and 2 difficult to parse. It appears to me that only the reconstructions are shown; in the absence of the original images and their respective perturbations I can't really assess how close to the original the perturbations are and therefore how successful the attack is. Looking at Figures 5 and 6 in the appendix makes me think that the perturbations are indeed pretty close to the original, but I think having the original, its reconstruction, the perturbation and its reconstruction side by side would go a long way towards making the results look more convincing.\\n\\nOverall I feel like this work shows great potential, but the lack of clarity gets in the way of the results. I would be inclined to recommend acceptance if the introduction and figure presentation were reworked to improve clarity.\\n\\nPROS (+), CONS (-)\\n\\n+ Subject is relevant and novel\\n+ Some of the results, especially in the appendix, show good potential\\n- Confusing introduction\\n- Figures are not very well explained and contextualized\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
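Reviewer 1's summary of the Methods section (perturb a source input so that either its reconstruction or its latent code matches that of a target) can be made concrete with a short optimization loop. The sketch below illustrates only the latent-code variant and rests on assumptions of ours: `encoder` is a hypothetical trained VAE/VAE-GAN encoder returning a latent vector, pixel values are assumed to lie in [0, 1], and the squared-distance objective with an L2 penalty on the perturbation is an illustrative choice rather than the paper's exact loss.

```python
import torch

def latent_attack(encoder, x_source, x_target, steps=500, lr=1e-2, lam=1.0):
    """Find a small perturbation of x_source whose latent code matches x_target's.

    `encoder` is a hypothetical trained encoder; the loss terms here are
    illustrative stand-ins for whatever objective the attacker prefers.
    """
    with torch.no_grad():
        z_target = encoder(x_target)          # latent code of the attacker's target
    delta = torch.zeros_like(x_source, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        z_adv = encoder((x_source + delta).clamp(0, 1))
        # match the target's latent code while keeping the perturbation small
        loss = ((z_adv - z_target) ** 2).sum() + lam * (delta ** 2).sum()
        loss.backward()
        opt.step()
    return (x_source + delta).clamp(0, 1).detach()
```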
rJv6ZgHYg
Deep Nets Don't Learn via Memorization
[ "David Krueger*", "Nicolas Ballas*", "Stanislaw Jastrzebski*", "Devansh Arpit*", "Maxinder S. Kanwal", "Tegan Maharaj", "Emmanuel Bengio", "Asja Fischer", "Aaron Courville" ]
We use empirical methods to argue that deep neural networks (DNNs) do not achieve their performance by \textit{memorizing} training data, in spite of overly-expressive model architectures. Instead, they learn a simple available hypothesis that fits the finite data samples. In support of this view, we establish that there are qualitative differences when learning noise vs.~natural datasets, showing that: (1) more capacity is needed to fit noise, (2) time to convergence is longer for random labels, but \emph{shorter} for random inputs, and (3) DNNs trained on real data examples learn simpler functions than when trained with noise data, as measured by the sharpness of the loss function at convergence. Finally, we demonstrate that for appropriately tuned explicit regularization, e.g.~dropout, we can degrade DNN training performance on noise datasets without compromising generalization on real data.
[ "Deep learning", "Optimization" ]
https://openreview.net/pdf?id=rJv6ZgHYg
https://openreview.net/forum?id=rJv6ZgHYg
ICLR.cc/2017/workshop
2017
{ "note_id": [ "HkhwFAgsx", "ryAeRieil", "r1UQHBPjg", "S1nP_tTsx", "SylkOteox" ], "note_type": [ "official_review", "comment", "comment", "comment", "official_review" ], "note_created": [ 1489197412525, 1489186293853, 1489618205912, 1490028643959, 1489176536346 ], "note_signatures": [ [ "ICLR.cc/2017/workshop/paper164/AnonReviewer2" ], [ "~David_Krueger1" ], [ "~David_Krueger1" ], [ "ICLR.cc/2017/pcs" ], [ "ICLR.cc/2017/workshop/paper164/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Interesting observations but not convincing enough\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"The paper tries to argue against the existing claim by Zhang that deep neural networks learn by memorizing training data.\\n\\nFirstly, as pointed out by the first reviewer, the term memorization is used differently in the two papers. The claims do not completely contradict each other. For example, in Zhang's paper, they claim that \\\"neural networks are able to capture the remaining signal in the data, while at the same time fit the noisy part using brute-force\\\". In this paper, the authors claim \\\"we believe that DNNs first learn and then refine simple patterns, which are shared across examples, in order to quickly drive down training loss, and only incorporate more case-by-case memorization as a later resort\\\". Basically, DNNs are capable of identifying patterns in structured data, but with random noise, where no shared patterns can be extracted across target classes, sample-dependent hypotheses are learnt given enough capacity. Hence, the two papers describe the same DNN learning behavior.\\n\\nOn the experimental setup, the datasets used for both papers have only a small number of target classes. Even when randomizing the labels, each example is still labeled as one of the 10 classes. DNNs learn to fit the training data, which guides DNNs to find shared patterns within samples of the same target class. A naive thought: if there are as many classes as there are samples in the training data, how do DNNs learn? If DNNs still learn to model the boundaries, there will still be some generalization capability. Otherwise, if DNNs learn by memorization, they should not generalize at all, because the target labels are not pushing the model to extract shared patterns. Just a random thought; I am not sure whether it is useful.\\n\\nRegarding the claim \\\"DNNs trained on real data examples learn simpler functions than when trained with noise data, as measured by the sharpness of the loss function at convergence\\\": personally, I believe that relates more to the data, as with noise data the true function DNNs try to learn is very different from that for the clean data. Hence, it's not all about how DNNs learn. The difference in the target function DNNs have to learn also matters. \\n\\nThe experiments in this paper do show different perspectives from what Zhang reported. 
It would be of interest to many readers, but it would make a stronger case if terms such as memorization were defined more clearly and more experiments were provided to support the claims.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Clarifying the notion of memorization and our differences with Zhang et al.\", \"comment\": \"We believe there is a conflict, although we agree that DNNs' ability to both \\n1) generalize on real data\\nand \\n2) fit noise \\nposes a challenge for traditional learning theory.\\n\\nFirst, to clarify, we intend to use the same intuitive notion of memorization as Zhang et al.\\nWhile neither of us provides a rigorous definition of memorization, we can characterize what properties we expect learning-via-memorization to have.\\n\\nFor instance, in our view, the data distribution should not have a strong, consistent effect on the learning problem for an algorithm using \\u201cbrute-force memorization\\u201d.\\nZhang et al. seem to share this view, claiming that \\u201c3. Randomizing labels is solely a data transformation, leaving all other properties of the learning problem unchanged.\\\"\\nHowever, in sections 2.1 and 2.3, we contradict this claim, demonstrating qualitative differences in the learning problem on real vs. random data.\\n\\nFurthermore, our results in section 2.1, contradict the claim: \\u201cIn fact, training time increases only by a small constant factor compared with training on the true labels.\\u201d\\nWe show that the (factor of) increase in training time is *not* constant; it is sensitive to both capacity and number of training examples.\\n\\nWe thank you for the feedback, and apologize for not making these points clearer in the paper.\\nWe hope this comment clarifies our stance with respect to memorization and the previous work of Zhang et al. in a way that highlights our contributions.\"}", "{\"title\": \"More on our differences with Zhang et al.; Memorization *operationalized* not defined.\", \"comment\": \"I agree that a more rigorous definition of memorization would be valuable.\\nAs I mentioned in response to the other reviewer, I believe that our use of \\u201cmemorization\\u201d is consistent with Zhang et al., and that our results and conclusions are different in some important ways, although certainly not completely contradictory.\\nI think you can view both of our papers as operationalizing \\u201cmemorization\\u201d as \\u201cthe way in which a DNN fits random (unstructured) data\\u201d.\\n\\nZhang et al. emphasize these similarities in fitting data vs. noise:\\n1. it\\u2019s possible to achieve perfect (or near perfect) training accuracy \\n2. training times are not radically different in these two scenarios\", \"we_emphasize_these_differences\": \"1. the difference in training time is not a constant factor; rather it depends on capacity and the number of training examples.\\n2. the response to regularization is different\\n3. the complexity of what is learned is different\\n\\nRE your explanation of point 3, I agree that it is about the data, but:\\n1. There are many ways to fit the training set, not a single target function.\\n2. In particular, if it were the case that, \\u201c3. Randomizing labels is solely a data transformation, leaving all other properties of the learning problem unchanged,\\u201d\\nas Zhang et al. 
claim, then we would expect that when training on random/real labels, a similar solution can and would be found.\\n\\n\\nI agree that the suggested experiment would be interesting, but disagree that memorization would imply no generalization.\\nK-NN, for instance, memorizes the data, but *can* still generalize somewhat.\"}", "{\"decision\": \"Accept\", \"title\": \"ICLR committee final decision\"}", "{\"title\": \"No consensus on what memorization means\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"This paper reads as a rebuttal to the recent work by Zhang. In that paper, it was shown that neural nets are in principle able to shatter the input data in the conducted experiments, which means that any labelling can be represented by the network:\\n\\n\\\"The experiments we conducted emphasize that the effective capacity of several successful neural network architectures is large enough to shatter the training data. Consequently, these models are in principle rich enough to memorize the training data.\\\"\\n\\nThis paper, however, defines memorization as a failure to generalize.\\n\\nHence, both papers are based on different definitions of what memorization means. I felt it would have been much better if the authors of this paper had followed the definition of Zhang et al. My fear is that this confusion is actually hurting, as it suggests a conflict where \\u2013 in my opinion \\u2013 none exists.\\n\\n\\nNevertheless, the paper has some interesting experiments.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
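Much of the disagreement in this thread is about what happens when the same architecture is trained on real versus randomized labels (time to fit, and how that depends on capacity and dataset size). A bare-bones, full-batch sketch of that comparison is given below; `model`, `x`, and `y` are placeholders for whatever architecture and dataset one cares about, and real studies of this question train with minibatches and sweep capacity and sample count.

```python
import copy
import torch
import torch.nn as nn

def epochs_to_fit(model, x, y, threshold=0.99, max_epochs=500, lr=1e-3):
    """Train a copy of `model` until training accuracy reaches `threshold`.

    Calling this once with the true labels and once with
    y[torch.randperm(len(y))] gives a toy version of the real-vs-random
    comparison debated above.
    """
    model = copy.deepcopy(model)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for epoch in range(1, max_epochs + 1):
        opt.zero_grad()
        loss_fn(model(x), y).backward()   # full-batch gradient step
        opt.step()
        with torch.no_grad():
            acc = (model(x).argmax(dim=1) == y).float().mean().item()
        if acc >= threshold:
            return epoch
    return max_epochs
```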