Dataset schema (column, type, and value statistics):

| Column | Type | Statistics |
| --- | --- | --- |
| forum_id | string | length 9–20 |
| forum_title | string | length 3–179 |
| forum_authors | sequence | length 0–82 |
| forum_abstract | string | length 1–3.52k |
| forum_keywords | sequence | length 1–29 |
| forum_decision | string | 22 classes |
| forum_pdf_url | string | length 39–50 |
| forum_url | string | length 41–52 |
| venue | string | 46 classes |
| year | string (date) | 2013-01-01 00:00:00 to 2025-01-01 00:00:00 |
| reviews | sequence | not shown |
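The rows below store each forum's metadata together with its reviews, where each review body is kept as a JSON-encoded string. A minimal loading sketch using the Hugging Face `datasets` library follows; the repository id is a placeholder assumption, and the `reviews` column is assumed to decode to a dict of parallel lists as shown in the row previews.

```python
import json
from datasets import load_dataset

# Placeholder repository id -- substitute the dataset's actual path or local files.
ds = load_dataset("user/openreview-iclr-forums", split="train")

row = ds[0]
print(row["forum_title"], "|", row["forum_decision"])

# The `reviews` column is assumed to arrive as a dict of parallel lists
# (note_id, note_type, note_created, note_signatures, structured_content_str),
# with each structured_content_str entry holding a JSON-encoded note body.
reviews = row["reviews"]
for signatures, payload in zip(reviews["note_signatures"], reviews["structured_content_str"]):
    note = json.loads(payload)
    text = note.get("review") or note.get("reply", "")
    print(signatures, note.get("title", ""), text[:80])
```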
msGKsXQXNiCBk
Learning New Facts From Knowledge Bases With Neural Tensor Networks and Semantic Word Vectors
[ "Danqi Chen", "Richard Socher", "Christopher Manning", "Andrew Y. Ng" ]
Knowledge bases provide applications with the benefit of easily accessible, systematic relational knowledge but often suffer in practice from their incompleteness and lack of knowledge of new entities and relations. Much work has focused on building or extending them by finding patterns in large unannotated text corpora. In contrast, here we mainly aim to complete a knowledge base by predicting additional true relationships between entities, based on generalizations that can be discerned in the given knowledge base. We introduce a neural tensor network (NTN) model which predicts new relationship entries that can be added to the database. This model can be improved by initializing entity representations with word vectors learned in an unsupervised fashion from text, and when doing this, existing relations can even be queried for entities that were not present in the database. Our model generalizes and outperforms existing models for this problem, and can classify unseen relationships in WordNet with an accuracy of 82.8%.
[ "new facts", "knowledge bases", "neural tensor networks", "semantic word vectors", "relations", "entities", "model", "database", "bases", "applications" ]
conferencePoster-iclr2013-workshop
https://openreview.net/pdf?id=msGKsXQXNiCBk
https://openreview.net/forum?id=msGKsXQXNiCBk
ICLR.cc/2013/conference
2013
{ "note_id": [ "OgesTW8qZ5TWn", "PnfD3BSBKbnZh", "yA-tyFEFr2A5u", "7jyp7wrwSzagb" ], "note_type": [ "review", "review", "review", "review" ], "note_created": [ 1363419120000, 1362079260000, 1362246000000, 1363419120000 ], "note_signatures": [ [ "Danqi Chen, Richard Socher, Christopher D. Manning, Andrew Y. Ng" ], [ "anonymous reviewer 75b8" ], [ "anonymous reviewer 7e51" ], [ "Danqi Chen, Richard Socher, Christopher D. Manning, Andrew Y. Ng" ] ], "structured_content_str": [ "{\"review\": \"We thank the reviewers for their comments and agree with most of them.\\n\\n- We've updated our paper on arxiv, and added the important experimental comparison to the model in 'Joint Learning of Words and Meaning Representations for Open-Text Semantic Parsing' (AISTATS 2012). \\n Experimental results show that our model also outperforms this model in terms of ranking & classification.\\n\\n- We didn't report the results on the original data because of the issues of overlap between training and testing set. \\n 80.23% of the examples in the testing set appear exactly in the training set.\\n 99.23% of the examples have e1 and e2 'connected' via some relation in the training set. Some relationships such as 'is similar to' are symmetric. \\n Furthermore, we can reach 92.8% of top 10 accuracy (instead of 76.7% in the original paper) using their model.\\n\\n- The classification task can help us predict whether a relationship is correct or not, thus we report both the results of classification and ranking. \\n\\n- To use the pre-trained word vectors, we ignore the senses of the entities in Wordnet in this paper. \\n\\n- The experiments section is short because we tried to keep the paper's length close to the recommended length. From the ICLR website: 'Papers submitted to this track are ideally 2-3 pages long'.\"}", "{\"title\": \"review of Learning New Facts From Knowledge Bases With Neural Tensor Networks and\\n Semantic Word Vectors\", \"review\": \"- A brief summary of the paper's contributions, in the context of prior work.\\n\\nThis paper proposes a new energy function (or scoring function) for ranking pairs of entities and their relationship type. The energy function is based on a so-called Neural Tensor Network, which essentially introduces a bilinear term in the computation of the hidden layer input activations of a single hidden layer neural network. A favorable comparison with the energy-function proposed in Bordes et al. 2011 is presented.\\n\\n- An assessment of novelty and quality.\\n\\nThis work follows fairly closely the work of Border et al. 2011, with the main difference being the choice of the energy/scoring function. This is an advantage in terms of the interpretability of the results: this paper clearly demonstrates that the proposed energy function is better, since everything else (the training objective, the evaluation procedure) is the same. This is however a disadvantage in terms of novelty as this makes this work somewhat incremental.\\n\\nBordes et al. 2011 also proposed an improved version of their model, using kernel density estimation, which is not used here. However, I suppose that the proposed model in this paper could also be similarly improved.\\n\\nMore importantly, Bordes and collaborators have more recently looked at another type of energy function, in 'Joint Learning of Words and Meaning Representations for Open-Text Semantic Parsing' (AISTATS 2012), which also involves bilinear terms and is thus similar (but not the same) as the proposed energy function here. 
In fact, the Bordes et al. 2012 energy function seems to outperform the 2011 one (without KDE), hence I would argue that the former would have been a better baseline for comparisons.\\n\\n- A list of pros and cons (reasons to accept/reject).\", \"pros\": \"Clear demonstration of the superiority of the proposed energy function over that of Bordes et al. 2011.\", \"cons\": \"No comparison with the more recent energy function of Bordes et al. 2012, which has some similarities to the proposed Neural Tensor Networks.\\n\\nSince this was submitted to the workshop track, I would be inclined to have this paper accepted still. This is clearly work in progress (the submitted paper is only 4 pages long), and I think this line of work should be encouraged. However, I would suggest the authors also perform a comparison with the scoring function of Bordes et al. 2012 in future work, using their current protocol (which is nicely setup so as to thoroughly compare energy functions).\"}", "{\"title\": \"review of Learning New Facts From Knowledge Bases With Neural Tensor Networks and\\n Semantic Word Vectors\", \"review\": \"This paper proposes a new model for modeling data of multi-relational knowledge bases such as Wordnet or YAGO. Inspired by the work of (Bordes et al., AAAI11), they propose a neural network-based scoring function, which is trained to assign high score to plausible relations. Evaluation is performed on Wordnet.\\n\\nThe main differences w.r.t. (Bordes et al., AAAI11) is the scoring function, which now involves a tensor product to encode for the relation type and the use of a non-linearity. It would be interesting if the authors could comment the motivations of their architecture. For instance, what does the tanh could model here?\", \"the_experiments_raise_some_questions\": \"- why do not also report the results on the original data set of (Bordes et al., AAAI11)? Even, is the data set contains duplicates, this stills makes a reference point.\\n- the classification task is hard to motivate. Link prediction is a problem of detection: very few positive to find in huge set of negative examples. Transform that into a balanced classification problem is a non-sense to me.\\n\\nThere have been several follow-up works to (Bordes et al., AAAI11) such as (Bordes et al., AISTATS12) or (Jenatton et al., NIPS12), that should be cited and discussed (some of those involve tensor for coding the relation type as well). Besides, they would also make the experimental comparison stronger.\\n\\nIt should be explained how the pre-trained word vectors trained by the model of Collobert & Weston are use in the model. Wordnet entities are senses and not words and, of course, there is no direct mapping from words to senses. Which heuristic has been used?\", \"pros\": [\"better experimental results\"], \"cons\": [\"skinny experimental section\", \"lack of recent references\"]}", "{\"review\": \"We thank the reviewers for their comments and agree with most of them.\\n\\n- We've updated our paper on arxiv, and added the important experimental comparison to the model in 'Joint Learning of Words and Meaning Representations for Open-Text Semantic Parsing' (AISTATS 2012). \\n Experimental results show that our model also outperforms this model in terms of ranking & classification.\\n\\n- We didn't report the results on the original data because of the issues of overlap between training and testing set. 
\\n 80.23% of the examples in the testing set appear exactly in the training set.\\n 99.23% of the examples have e1 and e2 'connected' via some relation in the training set. Some relationships such as 'is similar to' are symmetric. \\n Furthermore, we can reach 92.8% of top 10 accuracy (instead of 76.7% in the original paper) using their model.\\n\\n- The classification task can help us predict whether a relationship is correct or not, thus we report both the results of classification and ranking. \\n\\n- To use the pre-trained word vectors, we ignore the senses of the entities in Wordnet in this paper. \\n\\n- The experiments section is short because we tried to keep the paper's length close to the recommended length. From the ICLR website: 'Papers submitted to this track are ideally 2-3 pages long'.\"}" ] }
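The abstract and reviews for this entry describe a scoring function with a bilinear (tensor) term inside a single hidden layer. A rough numpy sketch of that general form is below; it follows the standard neural tensor network layout from the literature and is not claimed to match the authors' exact parameterization, and all dimensions and parameters are toy placeholders.

```python
import numpy as np

def ntn_score(e1, e2, W, V, b, u):
    """Score an (entity1, relation, entity2) triple with a bilinear tensor layer.

    e1, e2 : (d,) entity vectors
    W      : (k, d, d) relation-specific tensor, one slice per hidden unit
    V      : (k, 2d) standard linear weights
    b      : (k,) bias
    u      : (k,) output weights
    """
    bilinear = np.einsum("i,kij,j->k", e1, W, e2)   # e1^T W_k e2 for each slice k
    linear = V @ np.concatenate([e1, e2])
    hidden = np.tanh(bilinear + linear + b)
    return float(u @ hidden)

# Toy usage with random parameters
d, k = 4, 3
rng = np.random.default_rng(0)
score = ntn_score(rng.normal(size=d), rng.normal(size=d),
                  rng.normal(size=(k, d, d)), rng.normal(size=(k, 2 * d)),
                  rng.normal(size=k), rng.normal(size=k))
```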
IpmfpAGoH2KbX
Deep learning and the renormalization group
[ "Cédric Bény" ]
Renormalization group methods, which analyze the way in which the effective behavior of a system depends on the scale at which it is observed, are key to modern condensed-matter theory and particle physics. The aim of this paper is to compare and contrast the ideas behind the renormalization group (RG) on the one hand and deep machine learning on the other, where depth and scale play a similar role. In order to illustrate this connection, we review a recent numerical method based on the RG---the multiscale entanglement renormalization ansatz (MERA)---and show how it can be converted into a learning algorithm based on a generative hierarchical Bayesian network model. Under the assumption---common in physics---that the distribution to be learned is fully characterized by local correlations, this algorithm involves only explicit evaluation of probabilities, hence doing away with sampling.
[ "algorithm", "deep learning", "way", "effective behavior", "system", "scale", "key" ]
reject
https://openreview.net/pdf?id=IpmfpAGoH2KbX
https://openreview.net/forum?id=IpmfpAGoH2KbX
ICLR.cc/2013/conference
2013
{ "note_id": [ "rGZJRE7IJwrK3", "4Uh8Uuvz86SFd", "7to37S6Q3_7Qe", "tb0cgaJXQfgX6", "7Kq-KFuY-y7S_", "Qj1vSox-vpQ-U" ], "note_type": [ "review", "comment", "review", "review", "review", "review" ], "note_created": [ 1392852360000, 1363212060000, 1362321600000, 1363477320000, 1365121080000, 1362219360000 ], "note_signatures": [ [ "Charles Martin" ], [ "Cédric Bény" ], [ "anonymous reviewer 441c" ], [ "Aaron Courville" ], [ "Yann LeCun" ], [ "anonymous reviewer acf4" ] ], "structured_content_str": [ "{\"review\": \"It is noted that the connection between RG and multi-scale modeling has been pointed out by Candes in\\n\\nE. J. Cand\\u00e8s, P. Charlton and H. Helgason. Detecting highly oscillatory signals by chirplet path pursuit. Appl. Comput. Harmon. Anal. 24 14-40.\\n\\nwhere it was noted that the multi-scale basis suggested in this convex optimization approach is equivalent to the Wilson basis from his original work on RG theory in the 1970s\"}", "{\"reply\": \"I have submitted a replacement to the arXiv on March 13, which should be available the same day at 8pm EST/EDT as version 4.\\n\\nIn order to address the first issue, I rewrote section 2 to make it less confusing, specifically by not trying to be overly general. I also rewrote the caption of figure 1 to make it a nearly self-contained explanation of what the model is for a specific one-dimensional example. The content of section 2 essentially explains what features must be kept for any generalization, and section 3 clarifies why these features are important. \\n\\nConcerning the second issue, I agree that this work is preliminary, and implementation is the next step.\"}", "{\"title\": \"review of Deep learning and the renormalization group\", \"review\": \"The model tries to relate renormalization group and deep learning, specifically hierarchical Bayesian network. The primary problems are that 1) the paper is only descriptive - it does not explain models clearly and precisely, and 2) it has no numerical experiments showing that it works.\", \"what_it_needs_is_something_like\": \"1) Define the DMRG (or whatever verion of RG you need) and Define the machine learning model. Do these with explicit formulas so reader can know what exactly they are. Things like 'Instead, we only allow for maps \\u03c0j which are local in two important ways: firstly, each input vertex can only causally influence the values associated with the m output vertices that it represents plus all kth degree neighbors of these, where k would typically be small' are very hard to follow.\\n\\n2) Show the mapping between the two models. \\n\\n3) Show what it does on real data and that it does something interesting and/or useful. (Real data e.g. sound signals, images, text,...)\"}", "{\"review\": \"Reviewer 441c,\\n\\nHave you taken a look at the new version of the paper? Does it go some way to addressing your concerns?\"}", "{\"review\": \"It seems to me like there could be an interesting connection between approximate inference in graphical models and the renormalization methods.\\n\\nThere is in fact a long history of interactions between condensed matter physics and graphical models. For example, it is well known that the loopy belief propagation algorithm for inference minimizes the Bethe free energy (an approximation of the free energy in which only pairwise interactions are taken into account and high-order interactions are ignored). 
More generally, variational methods inspired by statistical physics have been a very popular topic in graphical model inference.\\n\\nThe renormalization methods could be relevant to deep architectures in the sense that the grouping of random variable resulting from a change of scale could be be made analogous with the pooling and subsampling operations often used in deep models. \\n\\nIt's an interesting idea, but it will probably take more work (and more tutorial expositions of RG) to catch the attention of this community.\"}", "{\"title\": \"review of Deep learning and the renormalization group\", \"review\": \"This paper discusses deep learning from the perspective of renormalization groups in theoretical physics. Both concepts are naturally related; however, this relation has not been formalized adequately thus far and advancing this is a novelty of the paper. The paper contains a non-technical and insightful exposition of concepts and discusses a learning algorithm for stochastic networks based on the `multiscale entanglement renormalization ansatz' (MERA). This contribution will potentially evoke the interest of many readers.\"}" ] }
SqNvxV9FQoSk2
Switched linear encoding with rectified linear autoencoders
[ "Leif Johnson", "Craig Corcoran" ]
Several recent results in machine learning have established formal connections between autoencoders---artificial neural network models that attempt to reproduce their inputs---and other coding models like sparse coding and K-means. This paper explores in depth an autoencoder model that is constructed using rectified linear activations on its hidden units. Our analysis builds on recent results to further unify the world of sparse linear coding models. We provide an intuitive interpretation of the behavior of these coding models and demonstrate this intuition using small, artificial datasets with known distributions.
[ "linear", "models", "rectified linear autoencoders", "machine learning", "formal connections", "autoencoders", "neural network models", "inputs", "sparse coding" ]
reject
https://openreview.net/pdf?id=SqNvxV9FQoSk2
https://openreview.net/forum?id=SqNvxV9FQoSk2
ICLR.cc/2013/conference
2013
{ "note_id": [ "ff2dqJ6VEpR8u", "kH1XHWcuGjDuU", "oozAQe0eAnQ1w" ], "note_type": [ "review", "review", "review" ], "note_created": [ 1362252900000, 1361946600000, 1362360840000 ], "note_signatures": [ [ "anonymous reviewer 5a78" ], [ "anonymous reviewer 9c3f" ], [ "anonymous reviewer ab3b" ] ], "structured_content_str": [ "{\"title\": \"review of Switched linear encoding with rectified linear autoencoders\", \"review\": \"In the deep learning community there has been a recent trend in\\nmoving away from the traditional sigmoid/tanh activation function to \\ninject non-linearity into the model. One activation function that has \\nbeen shown to work well in a number of cases is called Rectified \\nLinear Unit (ReLU). \\nBuilding on the prior research, this paper aims to provide an \\nanalysis of what is going on while training networks using these \\nactivation functions, and why do they work. In particular the authors \\nprovide their analysis from the context of training a linear auto-encoder \\nwith rectified linear units on a whitened data. They use a toy dataset in \\n3 dimensions (gaussian and mixture of gaussian) to conduct the analysis. \\nThey loosely test the hypothesis obtained from the toy datasets on the \\nMNIST data.\\n\\nThough the paper starts with a lot of promise, unfortunately it fails to \\ndeliver on what was promised. There is nothing in the paper (no new \\nidea or insight) that is either not already known, or fairly straightforward \\nto see in the case of linear auto-encoders trained using a rectified \\nlinear thresholding unit. Furthermore there are a number of flaws in \\nthe paper. For instance, the analysis of section 3.1 seems to be a bit \\nmis-leading. By definition if one fixes the weight vector w to [1,0] there \\nis no way that the sigmoid can distinguish between x's which are \\ngreater than S for some S. However with the weight vector taking \\narbitrary continuous values, that may not be the case. Besides, the \\npurpose of the encoder is to learn a representation, which can best \\nrepresent the input, and coupled with the decoder can reconstruct it. \\nThe encoder learning an identity function (as is argued in the paper) is not \\nof much use. Finally, the whole analysis of section 3 was based on a \\nlinear auto-encoder, whose encoder-decoder weights were tied. However \\nin the case of MNIST the authors show the filters learnt from an untied \\nweight auto-encoder. There seems to be some disconnect there. \\n\\nIn short the paper does not offer any novel insight or idea with respect \\n to learning representation using auto-encoders with rectified linear \\nthresholding function. Various gaps in the analysis also makes it a not \\nvery high quality work.\"}", "{\"title\": \"review of Switched linear encoding with rectified linear autoencoders\", \"review\": \"This paper analyzes properties of rectified linear autoencoder\\nnetworks. \\n\\nIn particular, the paper shows that rectified linear networks are\\nsimilar to linear networks (ICA). The major difference is the\\nnolinearity ('switching') that allows the decoder to select a subset\\nof features. 
Such selection can be viewed as a mixture of ICA models.\\n\\nThe paper visualizes the hyperplanes learned for a 3D dataset and\\nshows that the results are sensible (i.e., the learned hyperplanes\\ncapture the components that allow the reconstruction of the data).\", \"some_comments\": \"- On the positive side, I think that the paper makes a interesting attempt to understand properties of nonlinear networks, which is typically hard because of the nonlinearities. The choice of the activation function (rectified linear) makes such analysis possible. \\n\\n- I understand that the paper is mainly an analysis paper. But I feel\\n that it seems to miss a strong key thesis. It would be more interesting that the analysis reveals surprising/unexpected results.\\n\\n- The analyses do not seem particularly deep nor surprising. And I do\\n not find that they can advance our field in some way. I wonder if it's possible to make the analysis more constructive so that we can improve our algorithms. Or at least the analyses can reveal certain surprising properties of unsupervised algorithms.\\n\\n- It's unclear the motivation behind the use of rectified linear\\n activation function for analysis. \\n\\n- The paper touches a little bit on whitening. I find the section on\\n this topic is unsatisfying. It would be good to analyse the role of whitening in greater details here too (as claimed by abstract and introduction).\\n\\n- The experiments show that it's possible to learn penstrokes and\\n Gabor filters from natural images. But I think this is no longer\\n novel. And that there are very few practical implications of\\n this work.\"}", "{\"title\": \"review of Switched linear encoding with rectified linear autoencoders\", \"review\": \"The paper draws links between autoencoders with tied weights and rectified linear units (similar to Glorot et al AISTATS 2011), the triangle k-means and soft-thresholding of Coates et al. (AISTATS 2011 and ICML 2011), and the linear-autoencoder-like ICA learning criterion of Le et al (NIPS 2011).\\nThe first 3 have in common that, for each example, they yield a subset of non-zero (active) hidden units, that result from a simple thresholding. And it is argued that the training objective thus restricted to that subset corresponds to that of Le et al's ICA. Many 2D and 3D graphics with Gaussian data try to convey a geometric intuition of what is going on. \\n\\nI find rather obvious that these methods switch on a different linear basis for each example. The specific conection highlighted with Le et al's ICA work is more interesting, but it only applies if L1 feature sparsity regularization is employed in addition to the rectified linear activation function.\\n\\nAt the present stage, my impression is that this paper mainly reflect on the authors' maturing perception of links between the various methods, together with their building of an intuitive geometric understanding of how they work. But it is not yet ripe and its take home message not clear.\\nWhile its reflections are not without basis or potential interest they are not currently sufficiently formally exposed and read like a set of loosely bundled observations. I think the paper could greatly benefit from a more streamlined central thesis and message with supporting arguments.\\n\\nThe main empirical finding from the small experiments in this paper seems to be that the training criterion tends to yield pairs of opposed (negated) feature vectors. 
What we should conclude from this is however unclear.\\n \\nThe graphics are too many. Several seem redundant and are not particularly enlightening for our understanding. Also the use of many Gaussian data examples seems a poor choice to highlight or analyse the switching behavior of these 'switched linear coding' techniques (what does switching buy us if a PCA can capture about all there is about the structure?).\"}" ] }
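The model under review is an autoencoder with tied weights and rectified linear hidden units, whose encoder switches a subset of units on per example. A minimal numpy sketch of one forward pass under those assumptions (whitened input, tied decoder, no training loop) follows; names and sizes are illustrative only.

```python
import numpy as np

def relu_autoencoder_recon(X, W, b):
    """One forward pass of a tied-weight autoencoder with rectified linear hiddens.

    X : (n, d) data, assumed whitened as in the paper's setup
    W : (d, h) encoder weights; the decoder reuses W.T (tied weights)
    b : (h,) hidden biases
    """
    H = np.maximum(0.0, X @ W + b)   # ReLU switches off a different subset of units per example
    X_hat = H @ W.T                  # linear decoding with the transposed weights
    return H, X_hat

X = np.random.randn(100, 3)          # small 3-D toy data, like the paper's artificial datasets
W = np.random.randn(3, 8) * 0.1
H, X_hat = relu_autoencoder_recon(X, W, np.zeros(8))
mse = np.mean((X - X_hat) ** 2)
```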
DD2gbWiOgJDmY
Why Size Matters: Feature Coding as Nystrom Sampling
[ "Oriol Vinyals", "Yangqing Jia", "Trevor Darrell" ]
Recently, the computer vision and machine learning community has been in favor of feature extraction pipelines that rely on a coding step followed by a linear classifier, due to their overall simplicity, well understood properties of linear classifiers, and their computational efficiency. In this paper we propose a novel view of this pipeline based on kernel methods and Nystrom sampling. In particular, we focus on the coding of a data point with a local representation based on a dictionary with fewer elements than the number of data points, and view it as an approximation to the actual function that would compute pair-wise similarity to all data points (often too many to compute in practice), followed by a Nystrom sampling step to select a subset of all data points. Furthermore, since bounds are known on the approximation power of Nystrom sampling as a function of how many samples (i.e. dictionary size) we consider, we can derive bounds on the approximation of the exact (but expensive to compute) kernel matrix, and use it as a proxy to predict accuracy as a function of the dictionary size, which has been observed to increase but also to saturate as we increase its size. This model may help explain the positive effect of the codebook size and justify the need to stack more layers (often referred to as deep learning), as flat models empirically saturate as we add more complexity.
[ "nystrom", "data points", "size matters", "feature", "approximation", "bounds", "function", "dictionary size", "computer vision", "machine learning community" ]
conferenceOral-iclr2013-workshop
https://openreview.net/pdf?id=DD2gbWiOgJDmY
https://openreview.net/forum?id=DD2gbWiOgJDmY
ICLR.cc/2013/conference
2013
{ "note_id": [ "EW9REhyYQcESw", "oxSZoe2BGRoB6", "8sJwMe5ZwE8uz" ], "note_type": [ "review", "review", "review" ], "note_created": [ 1362202140000, 1362196320000, 1363264440000 ], "note_signatures": [ [ "anonymous reviewer 1024" ], [ "anonymous reviewer 998c" ], [ "Oriol Vinyals, Yangqing Jia, Trevor Darrell" ] ], "structured_content_str": [ "{\"title\": \"review of Why Size Matters: Feature Coding as Nystrom Sampling\", \"review\": \"The authors provide an analysis of the accuracy bounds of feature coding + linear classifier pipelines. They predict an approximate accuracy bound given the dictionary size and correctly estimate the phenomenon observed in the literature where accuracy increases with dictionary size but also saturates.\", \"pros\": [\"Demonstrates limitations of shallow models and analytically justifies the use of deeper models.\"]}", "{\"title\": \"review of Why Size Matters: Feature Coding as Nystrom Sampling\", \"review\": \"This paper presents a theoretical analysis and empirical validation of a novel view of feature extraction systems based on the idea of Nystrom sampling for kernel methods. The main idea is to analyze the kernel matrix for a feature space defined by an off-the-shelf feature extraction system. In such a system, a bound is identified for the error in representing the 'full' dictionary composed of all data points by a Nystrom approximated version (i.e., represented by subsampling the data points randomly). The bound is then extended to show that the approximate kernel matrix obtained using the Nystrom-sampled dictionary is close to the true kernel matrix, and it is argued that the quality of the approximation is a reasonable proxy for the classification error we can expect after training. It is shown that this approximation model qualitatively predicts the monotonic rise in accuracy of feature extraction with larger dictionaries and saturation of performance in experiments.\\n\\nThis is a short paper, but the main idea and analysis are interesting. It is nice to have some theoretical machinery to talk about the empirical finding of rising, saturating performance. In some places I think more detail could have been useful.\\n\\nOne undiscussed point is the fact that many dictionary-learning methods do more than populate the dictionary with exemplars so it's possible that a 'learning' method might do substantially better (perhaps reaching top performance much sooner). This doesn't appear to be terribly important in low-dimensional spaces where sampling strategies work about as well as learning, but could be critical for high-dimensional spaces (where sampling might asymptote much more slowly than learning). It seems worth explaining the limitations of this analysis and how it relates to learning. \\n\\nA few other questions / comments:\\n\\nThe calibration of constants for the bound in the experiments was not clear to me. How is the mapping from the bound (Eq. 2) to classification accuracy actually done?\\n\\nThe empirical validation of the lower bound relies on a calibration procedure that, as I understand it, effectively ends up rescaling a fixed-shape curve to fit observed trend in accuracy on the real problem. As a result, it seems like we could come up with a 'nonsense' bound that happened to have such a shape and then make a similar empirical claim. Is there a way to extend the analysis to rule this out? 
Or perhaps I misunderstand the origin of the shape of this curve.\", \"pros\": \"(1) A novel view of feature extraction that appears to yield a reasonable explanation for the widely observed performance curves of these methods is presented. I don't know how much profit this view might yield, but perhaps that will be made clear by the 'overshooting' method foreshadowed in the conclusion.\\n(2) A pleasingly short read adequate to cover the main idea. (Though a few more details might be nice.)\", \"cons\": \"(1) How this bound relates to the more common case of 'trained' dictionaries is unclear.\\n(2) The empirical validation shows the basic relationship qualitatively, but it is possible that this does not adequately validate the theoretical ideas and their connection to the observed phenomenon.\"}", "{\"review\": \"We agree with the reviewer regarding the existence of better dictionary learning methods, and note that many of these are also related to corresponding advanced Nystrom sampling methods, such as [Zhang et al. Improved Nystrom low-rank approximation and error analysis. ICML 08]. These methods could improve performance in absolute terms, but that is an orthogonal issue to our main results. Nonetheless, we think this is a valuable observation, and will include a discussion of these points in the final version of this paper.\\n\\nThe relationship between a kernel error bound and classification accuracy is discussed in more detail in [Cortes et al. On the Impact of Kernel Approximation on Learning Accuracy. AISTATS 2010]. The main result is that the bounds are proportional, verifying our empirical claims. We will add this reference to the paper.\\n\\nRegarding the comment on fitting the shape of the curve, we are only using the first two points to fit the 'constants' given in the bound, so the fact that it extrapolates well in many tasks gives us confidence that the bound is accurate.\"}" ] }
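The abstract above treats a coding dictionary as a Nystrom subsample of all data points and bounds the resulting kernel approximation error. A small numpy sketch of the textbook Nystrom approximation it appeals to is below (RBF kernel, random landmark selection); it does not reproduce the paper's accuracy-prediction calibration.

```python
import numpy as np

def nystrom_approx(X, landmarks, gamma=1.0):
    """Approximate an RBF kernel matrix from a subsampled 'dictionary' of landmarks.

    X         : (n, d) data points
    landmarks : (m, d) subset of points playing the role of the codebook
    Returns the rank-m Nystrom approximation K_nm @ pinv(K_mm) @ K_nm.T
    """
    def rbf(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    K_nm = rbf(X, landmarks)
    K_mm = rbf(landmarks, landmarks)
    return K_nm @ np.linalg.pinv(K_mm) @ K_nm.T

X = np.random.randn(200, 16)
idx = np.random.choice(len(X), size=32, replace=False)   # dictionary size m = 32
K_approx = nystrom_approx(X, X[idx])
```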
i87JIQTAnB8AQ
The Diagonalized Newton Algorithm for Nonnegative Matrix Factorization
[ "Hugo Van hamme" ]
Non-negative matrix factorization (NMF) has become a popular machine learning approach to many problems in text mining, speech and image processing, bio-informatics and seismic data analysis to name a few. In NMF, a matrix of non-negative data is approximated by the low-rank product of two matrices with non-negative entries. In this paper, the approximation quality is measured by the Kullback-Leibler divergence between the data and its low-rank reconstruction. The existence of the simple multiplicative update (MU) algorithm for computing the matrix factors has contributed to the success of NMF. Despite the availability of algorithms showing faster convergence, MU remains popular due to its simplicity. In this paper, a diagonalized Newton algorithm (DNA) is proposed showing faster convergence while the implementation remains simple and suitable for high-rank problems. The DNA algorithm is applied to various publicly available data sets, showing a substantial speed-up on modern hardware.
[ "diagonalized newton algorithm", "nmf", "nonnegative matrix factorization", "data", "convergence", "matrix factorization", "popular machine", "many problems", "text mining" ]
conferencePoster-iclr2013-conference
https://openreview.net/pdf?id=i87JIQTAnB8AQ
https://openreview.net/forum?id=i87JIQTAnB8AQ
ICLR.cc/2013/conference
2013
{ "note_id": [ "RzSh7m1KhlzKg", "FFkZF49pZx-pS", "MqwZf2jPZCJ-n", "oo1KoBhzu3CGs", "aplzZcXNokptc", "EW5mE9upmnWp1" ], "note_type": [ "review", "review", "review", "review", "review", "review" ], "note_created": [ 1363574460000, 1362210360000, 1363744920000, 1362192540000, 1363615980000, 1362382860000 ], "note_signatures": [ [ "Hugo Van hamme" ], [ "anonymous reviewer 4322" ], [ "Hugo Van hamme" ], [ "anonymous reviewer 57f3" ], [ "Hugo Van hamme" ], [ "anonymous reviewer 482c" ] ], "structured_content_str": [ "{\"review\": \"I would like to thank the reviewers for their investment of time and effort to formulate their valued comments. The paper was updated according to your comments. Below I address your concerns:\\n\\nA common remark is the lack of comparison with state-of-the-art NMF solvers for Kullback-Leibler divergence (KLD). I compared the performance of the diagonalized Newton algorithm (DNA) with the wide-spread multiplicative updates (MU) exactly because it is the most common baseline and almost every algorithm has been compared against it. As you suggested, I did run comparison tests and I will present the results here. I need to find a method to post some figures to make the point clear. First, I compared against the Cyclic Coordinate Descent (CCD) by Hsieh & Dhillon using the software they provide on their website. I ran the synthetic 1000x500 example (rank 10). The KLD as a function of iteration number for DNA and CCD are very close (I did not find a way to post a plot on this forum). However, in terms of CPU (ran on the machine I mention in the paper) DNA is a lot faster with about 200ms per iteration for CCD and about 50ms for DNA. Note that CCD is completely implemented in C++ (embedded in a mex-file) while DNA is implemented in matlab (with one routine in mex - see the download page mentioned in the paper). As for the comparison with SBCD (scalar block coordinate descent), I also ran their code on the same example, but unfortunately, one of the matrix factors is projected to an all-zero matrix in the first iteration. I have not found the cause yet.\\nWhat definitely needs investigation is that I observe CCD to be 4 times slower than DNA. Using my implementation for MU, 1200 MU iterations are actually as fast as the 100 CCD iteration. (My matlab MU implementation is 10 times faster than the one provided by Hsieh&Dhillon). For these reasons, I am not too keen on quickly including a comparison in terms of CPU time (which is really the bottom line), as implementation issues seem not so trivial. Even more so for a comparison on a GPU, where the picture could be different from the CPU for the cyclic updates in CCD. A thorough comparison on these two architectures seems like a substantial amount of future work. But I hope the data above data convince you the present paper and public code are significant work. \\n\\nReply to Anonymous 57f3\\n' it's not clear that matrix factorization is a problem for which optimization speed is a primary concern (all of the experiments in the paper terminate after only a few minutes)'\\n\\n>> There are practical problems where NMF takes hours, e.g. the problems of [6], which is essentially learning a speech recognizer model from data. We are now applying NMF-based speech recognition in learning paradigms that learn from user interaction examples. In such cases, you want to wait seconds, not minutes. 
Also, there is an increased interest in 'large-sccale NMF problems'.\\n\\n'Using a KL-divergence objective seems strange to me since there aren't any distributions involved, just matrices, whose entries, while positive, need not sum to 1 along any row or column. Are the entries of the matrices supposed to represent probabilities? '\\n\\n>> Notice that the second and third term in the expression for KLD (Eq. 1) are normalization terms such that we don't require V or Z to sum to unity. This very common in the NMF literature, and was motivated in a.o. [1]. KLD is appropriate if the data follow a (mixture of) Poisson distribution. While this is realistic for counts data (like in the Newsgroup corpus), the KLD is also applied on Fourier spectra, e.g. for speaker separation or speech enhancement, with success. Imho, the relevance of KLD does not need to be motivated in a paper on algorithms, see also [18] and [20] ( numbering in the new paper).\\n\\n'I understand that this is a formulation used in previous work ([1]), but it should be briefly explained. '\\n>> Added a sentence about the Poisson hypothesis after Eq. 1.\\n\\n'You should explain the connection between your work and [17] more carefully. Exactly how is it similar/different? '\\n>> Reformulated. [17] (now [18]) uses a totally different motivation, but also involves the second order derivatives, like a Newton method.\\n\\n'Has a diagonal Newton-type approach ever been used for the squared error objective? '\\n>> A reference is given now. Note however that KLD behaves substantially different.\\n'the smallest cost' -> 'leading to the greatest reduction in d_{KL}(V,Z)'\\n 'the variables required to compute' -> 'the quantities required to compute' \\n>> corrected\\n\\nYou should avoid using two meanings of the word 'regularized' as this can lead to confusion. Maybe 'damped' would work better to refer to the modifications made to the Newton updates that prevent divergence? \\n>> Yes. A lot better. Corrected.\\n\\n'Have you compared to using damped/'regularized' Newton updates instead of your method of selecting the best between the Newton and MU updates? In my experience, damping, along the lines of the LM algorithm or something similar, can help a great deal. '\\n>> yes. I initially tried to control the damping by adding lambda*I to the Hessian, where lambda is decreased on success and increased if the KLD increases. I found it difficult to find a setting that worked well on a variety of problems. \\n\\nI would recommend using '\\top' to denote matrix transposition instead of what you are doing. Section 2 needs to be reorganized. It's hard for me to follow what you are trying to say here. First, you introduce some regularization terms. Then, you derive a particular fixed-point update scheme. When you say 'Minimizing [...] is achieved by alternative updates...' surely you mean that this is just one particular way it might be done. \\n>> That's indeed what I meant to say. 'is' => 'can be'\\n\\nYou say you are applying the KKT conditions, but your derivation is strange and you seem to skip a bunch of steps and neglect to use explicit KKT multipliers (although the result seems correct based on my independent derivation). But when you say: 'If h_r = 0, the partial derivative is positive. Hence the product of h_r and the partial derivative is always zero', I don't see how this is a correct logical implication. Rather, the product is zero for any solution satisfying complementary slackness. \\n>> I meant this holds for any solution of (5). 
This is corrected.\\n\\nAnd I don't understand why it is particularly important that the sum over equation (6) is zero (which is how the normalization in eqn 10 is justified). Surely this is only a (weak) necessary condition, but not a sufficient one, for a valid optimal solution. Or is there some reason why this is sufficient (if so, please state it in the paper!). \\n>> A Newton update may yield a guess that does not satisfy this (weak) necessary condition. We can satisfy this condition easily with the renormalization (10), which is reflected in steps 16 and 29.\\n\\nI don't understand how the sentence on line 122 'Therefor...' is not a valid logical implication. Did you actually mean to use the word 'therefor' here? The lower bound is, however, correct. 'floor resp. ceiling'??\\n>> 'Therefore' => 'To respect the nonnegativity and to avoid the singularity\\u201d\\n\\nReply to Anonymous 4322\\nSee comparison described above.\\nI added more about the differences with the prior work you mention.\\n\\nReply to Anonymous 482c\\nSee also comparison data detailed above.\\nYou are right there is a lot of generic work on Hessian preconditioning. I refer to papers that work on damping and line search in the context of NMF ([10], [11], [12], [14] ...). Diagonalization is only related in the sense that it ensures the Hessian to be positive definite (not in general, but here is does).\"}", "{\"title\": \"review of The Diagonalized Newton Algorithm for Nonnegative Matrix Factorization\", \"review\": \"Summary:\\n\\nThe paper presents a new algorithm for solving L1 regularized NMF problems in which the fitting term is the Kullback-Leiber divergence. The strategy combines the classic multiplicative updates with a diagonal approximation of Newton's method for solving the KKT conditions of the NMF optimization problem. This approximation results in a multiplicative update that is computationally light. Since the objective function might increase under the Newton updates, the author proposes to simultaneously compute both multiplicative and Newton updates and choose the one that produces the largest descent. The algorithm is tested on several datasets, generally producing improvements in both number of iterations and computational time with respect to the standard multiplicative updates.\\n\\nI believe that the paper is well written. It proposes an efficient optimization algorithm for solving a problem that is not novel but very important in many applications. The author should highlight the strengths of the proposed approach and the differences with recent works presented in the literature.\\n\\nPros.:\\n\\n- the paper addresses an important problem in matrix factorization,\\nextensively used in audio processing applications\\n- the experimental results show that the method is more efficient than the multiplicative algorithm (which is the most widely used optimization tool), without significantly increasing the algorithmic complexity\", \"cons\": [\"experimental comparisons against related approaches is missing\", \"this approach seems limited to only work for the Kullback-Leiber\", \"divergence as fitting cost.\"], \"general_comments\": \"I believe that the paper lacks of experimental comparisons with other accelerated optimization schemes for solving the same problem. In particular, I believe that the author should include comparisons with [17] and the work,\\n\\nC.-J. Hsieh and I. S. Dhillon. Fast coordinate descent methods with variable selection for non-negative matrix factorization. 
In Proceedings of the 17th ACM SIGKDD, pages 1064\\u20131072, 2011.\\n\\nwhich should also be cited.\\n\\nAs the author points out, the approach in [17] is very similar to the one proposed in this paper (they have code available online). The work by Hsieh and Dhillon is also very related to this paper. They propose a coordinate descent method using Newton's method to solve the individual one-variable sub-problems. More details on the differences with these two works should be provided in Section 1.\\n\\nThe experimental setting itself seems convincing. Figures 2 and 3 are never cited in the paper.\"}", "{\"review\": \"First: sorry for the multiple postings. Browser acting weird. Can't remove them ...\", \"update\": \"I was able to get the sbcd code to work. Two mods required (refer to Algorithm 1 in the Li, Lebanon & Park paper - ref [18] in v2 paper on arxiv):\\n1) you have to be careful with initialization. If the estimates for W or H are too large, E = A - WH could potentially contain too many zeros in line 3 and the update maps H to all zeros. Solution: I first perform a multiplicative update on W and H so you have reasonably scaled estimates.\\n2) line 16 is wrongly implemented in the publicly available ffhals5.m \\n\\nI reran the comparison (different machine though - the one I used before was fully loaded):\\n1) CCD (ref [17]) - the c++ code compiled to a matlab mex file as downloaded from the author's website and following their instructions. \\n2) DNA - fully implemented in matlab as available from http://www.esat.kuleuven.be/psi/spraak/downloads/\\n3) SBCD (ref [18]) - code fully in matlab with mods above\\n4) MU (multiplicative updates) - implementation fully in matlab as available from http://www.esat.kuleuven.be/psi/spraak/downloads/\", \"the_kld_as_a_function_of_the_iteration_for_the_rank_10_random_1000x500_matrix_is_shown_in__https\": \"//dl.dropbox.com/u/915791/iteration.pdf.\\nWe observe that SBCD takes a good start but then slows down. DNA is best after the 5th iteration.\", \"the_kld_as_a_function_of_cpu_time_is_shown_in_https\": \"//dl.dropbox.com/u/915791/time.pdf\\nDNA is the clear winner, followed by MU which beats both SBCD and CCD. This may be surprising, but as I mentioned earlier, there are some implementation issues. CCD is a single-thread implementation, while matlab is multi-threaded and works in parrallel. However, the cyclic updates in CCD are not very suitable for parallelization. The SBCD needs reimplementation, honestly.\\n\\nIn summary, DNA does compare favourably to the state-of-the-art, but I don't really feel comfortable about including such a comparison in a scientific paper if there is such a dominant effect of programming style/skills on the result.\"}", "{\"title\": \"review of The Diagonalized Newton Algorithm for Nonnegative Matrix Factorization\", \"review\": \"This paper develops a new iterative optimization algorithm for performing non-negative matrix factorization, assuming a standard 'KL-divergence' objective function. The method proposed combines the use of a traditional updating scheme ('multiplicative updates' from [1]) in the initial phase of optimization, with a diagonal Newton approach which is automatically switched to when it will help. This switching is accomplished by always computing both updates and taking whichever is best, which will typically be MU at the start and the more rapidly converging (but less stable) Newton method towards the end. 
Additionally, the diagonal Newton updates are made more stable using a few tricks, some of which are standard and some of which may not be. It is found that this can provide speed-ups which may be mild or significant, depending on the application, versus a standard approach which only uses multiplicative updates. As pointed out by the authors, Newton-type methods have been explored for non-negative matrix factorization before, but not for this particularly objective with a diagonal approximation (except perhaps [17]?).\\n\\nThe writing is rough in a few places but okay overall. The experimental results seem satisfactory compared to the classical algorithm from [1], although comparisons to other potentially more recent approaches is conspicuously absent. I'm not an experiment on matrix factorization or these particular datasets so it's hard for me to independently judge if these results are competitive with state of the art methods.\\n\\nThe paper doesn't seem particularly novel to me, but matrix factorization isn't a topic I find particularly interesting, so this probably biases me against the paper somewhat.\", \"pros\": [\"reasonably well presented\", \"empirical results seem okay\"], \"cons\": [\"comparisons to more recent approaches is lacking\", \"it's not clear that matrix factorization is a problem for which optimization speed is a primary concern (all of the experiments in the paper terminate after only a few minutes)\", \"writing is rough in a few places\"], \"detailed_comments\": \"Using a KL-divergence objective seems strange to me since there aren't any distributions involved, just matrices, whose entries, while positive, need not sum to 1 along any row or column. Are the entries of the matrices supposed to represent probabilities? I understand that this is a formulation used in previous work ([1]), but it should be briefly explained.\\n\\nYou should explain the connection between your work and [17] more carefully. Exactly how is it similar/different?\\n\\nHas a diagonal Newton-type approach ever been used for the squared error objective?\\n\\n'the smallest cost' -> 'leading to the greatest reduction in d_{KL}(V,Z)'\\n\\n'the variables required to compute' -> 'the quantities required to compute'\\n\\nYou should avoid using two meanings of the word 'regularized' as this can lead to confusion. Maybe 'damped' would work better to refer to the modifications made to the Newton updates that prevent divergence?\\n\\nHave you compared to using damped/'regularized' Newton updates instead of your method of selecting the best between the Newton and MU updates? In my experience, damping, along the lines of the LM algorithm or something similar, can help a great deal.\\n\\nI would recommend using '\\top' to denote matrix transposition instead of what you are doing.\\n\\nSection 2 needs to be reorganized. It's hard for me to follow what you are trying to say here. First, you introduce some regularization terms. Then, you derive a particular fixed-point update scheme. When you say 'Minimizing [...] is achieved by alternative updates...' surely you mean that this is just one particular way it might be done. Also, are these derivation prior work (e.g. from [1])? If so, it should be stated.\\n\\nIt's hard to follow the derivations in this section. You say you are applying the KKT conditions, but your derivation is strange and you seem to skip a bunch of steps and neglect to use explicit KKT multipliers (although the result seems correct based on my independent derivation). 
But when you say: 'If h_r = 0, the partial derivative is positive. Hence the product of h_r and the partial derivative is always zero', I don't see how this is a correct logical implication. Rather, the product is zero for any solution satisfying complementary slackness. And I don't understand why it is particularly important that the sum over equation (6) is zero (which is how the normalization in eqn 10 is justified). Surely this is only a (weak) necessary condition, but not a sufficient one, for a valid optimal solution. Or is there some reason why this is sufficient (if so, please state it in the paper!).\\n\\nI don't understand how the sentence on line 122 'Therefor...' is not a valid logical implication. Did you actually mean to use the word 'therefor' here? The lower bound is, however, correct.\\n\\n'floor resp. ceiling'??\"}", "{\"review\": \"About the comparison with Cyclic Coordinate Descent (as described in C.-J. Hsieh and I. S. Dhillon, \\u201cFast Coordinate Descent Methods with Variable Selection for Non-negative Matrix Factorization,\\u201d in proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD), San Diego, CA, USA, August 2011) using their software:\", \"the_plots_of_the_kld_as_a_function_of_iteration_number_and_cpu_time_are_located_at_https\": \"//dl.dropbox.com/u/915791/iteration.pdf and https://dl.dropbox.com/u/915791/time.pdf\\nThe data is the synthetic 1000x500 random matrix of rank 10. They show DNA has comparable convergence behaviour and the implementation is faster, despite it's matlab (DNA) vs. c++ (CCD).\"}", "{\"title\": \"review of The Diagonalized Newton Algorithm for Nonnegative Matrix Factorization\", \"review\": \"Overview:\\n\\nThis paper proposes an element-wise (diagonal Hessian) Newton method to speed up convergence of the multiplicative update algorithm (MU) for NMF problems. Monotonic progress is guaranteed by an element-wise fall-back mechanism to MU. At a minimal computational overhead, this is shown to be effective in a number of experiments. \\n\\nThe paper is well-written, the experimental validation is convincing, and the author provides detailed pseudocode and a matlab implementation.\", \"comments\": \"There is a large body of related work outside of the NMF field that considers diagonal Hessian preconditioning of updates, going back (at least) as early as Becker & LeCun in 1988.\\n\\nSwitching between EM and Newton update (using whichever is best, element-wise) is an interesting alternative to more classical forms of line search: it may be worth doing a more detailed comparison to such established techniques.\\n\\nI would appreciate a discussion of the potential of extending the idea to non KL-divergence costs.\"}" ] }
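The paper accelerates KL-divergence NMF over the multiplicative-update (MU) baseline. For reference, a compact numpy sketch of that MU baseline (Lee and Seung style updates for the KL objective) is given below; the proposed diagonalized Newton step, damping, and per-element fall-back are deliberately omitted.

```python
import numpy as np

def kl_nmf_mu(V, rank, iters=200, eps=1e-10):
    """Multiplicative updates for KL-divergence NMF (the MU baseline the paper speeds up)."""
    n, m = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(iters):
        WH = W @ H + eps
        H *= (W.T @ (V / WH)) / (W.sum(axis=0, keepdims=True).T + eps)
        WH = W @ H + eps
        W *= ((V / WH) @ H.T) / (H.sum(axis=1, keepdims=True).T + eps)
    return W, H

# Toy usage on a random non-negative matrix
V = np.abs(np.random.randn(100, 50))
W, H = kl_nmf_mu(V, rank=10)
```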
qEV_E7oCrKqWT
Zero-Shot Learning Through Cross-Modal Transfer
[ "Richard Socher", "Milind Ganjoo", "Hamsa Sridhar", "Osbert Bastani", "Christopher Manning", "Andrew Y. Ng" ]
This work introduces a model that can recognize objects in images even if no training data is available for the objects. The only necessary knowledge about the unseen categories comes from unsupervised large text corpora. In our zero-shot framework distributional information in language can be seen as spanning a semantic basis for understanding what objects look like. Most previous zero-shot learning models can only differentiate between unseen classes. In contrast, our model can both obtain state of the art performance on classes that have thousands of training images and obtain reasonable performance on unseen classes. This is achieved by first using outlier detection in the semantic space and then two separate recognition models. Furthermore, our model does not require any manually defined semantic features for either words or images.
[ "model", "transfer", "objects", "images", "unseen classes", "work", "training data", "available", "necessary knowledge", "unseen categories" ]
conferenceOral-iclr2013-workshop
https://openreview.net/pdf?id=qEV_E7oCrKqWT
https://openreview.net/forum?id=qEV_E7oCrKqWT
ICLR.cc/2013/conference
2013
{ "note_id": [ "UgMKgxnHDugHr", "88s34zXWw20My", "ddIxYp60xFd0m", "SSiPd5Rr9bdXm" ], "note_type": [ "review", "review", "review", "review" ], "note_created": [ 1362080640000, 1362001800000, 1363754820000, 1363754760000 ], "note_signatures": [ [ "anonymous reviewer cfb0" ], [ "anonymous reviewer 310e" ], [ "Richard Socher" ], [ "Richard Socher" ] ], "structured_content_str": [ "{\"title\": \"review of Zero-Shot Learning Through Cross-Modal Transfer\", \"review\": \"*A brief summary of the paper's contributions, in the context of prior work*\\nThis paper introduces a zero-shot learning approach to image classification. The model first tries to detect whether an image contains an object from a so-far unseen category. If not, the model relies on a regular, state-of-the art supervised classifier to assign the image to known classes. Otherwise, it attempts to identify what this object is, based on a comparison between the image and each unseen class, in a learned joint image/class representation space. The method relies on pre-trained word representations, extracted from unlabelled text, to represent the classes. Experiments evaluate the compromise between classification accuracy on the seen classes and the unseen classes, as a threshold for identifying an unseen class is varied. \\n\\n*An assessment of novelty and quality*\\nThis paper goes beyond the current work on zero-shot learning in 2 ways. First, it shows that very good classification of certain pairs of unseen classes can be achieved based on learned (as opposed to hand designed) representations for these classes. I find this pretty impressive.\\n\\nThe second contribution is in a method for dealing with seen and unseen classes, based on the idea that unseen classes are outliers. I've seen little work attacking directly this issue. Unfortunately, I'm not super impressed with the results: having to drop from 80% to 70% to obtain between 15% and 30% accuracy on unseen classes (and only for certain pairs) is a bit disappointing. But it's a decent first step. Plus, the proposed model is overall fairly simple, and zero-shot learning is quite challenging, so in fact it's perhaps surprising that a simple approach doesn't do worse.\\n\\nFinally, I find the paper reads well and is quite clear in its methodology.\\n\\nI do wonder why the authors claim that they 'further extend [the] theoretical analysis [of Palatucci et a.] ... and weaken their strong assumptions'. This sentence suggests there is a theoretical contribution to this work, which I don't see. So I would remove that sentence.\\n\\nAlso, the second paragraph of section 6 is incomplete.\\n\\n*A list of pros and cons (reasons to accept/reject)*\", \"the_pros_are\": [\"attacks an important, very hard problem\", \"goes significantly beyond the current literature on zero-shot learning\", \"some of the results are pretty impressive\"], \"the_cons_are\": [\"model is a bit simple and builds quite a bit on previous work on image classification [6] and unsupervised learning of word representation [15] (but frankly, that's really not such a big deal)\"]}", "{\"title\": \"review of Zero-Shot Learning Through Cross-Modal Transfer\", \"review\": \"- The idea of learning a joint embedding of images and classes is not new, but is nicely explained\\nin the paper.\\n- the authors relate to other works on zero-shot learning. I have not seen references to similarity learning,\\n which can be used to say if two images are of the same class. 
These can obviously be used to determine\\n if an image is of a known class or not, without having seen any image of the class.\\n- The proposed approach to estimate the probability that an image is of a known class or not is based\\n on a mixture of Gaussians, where one Gaussian is estimated for each known class where the mean is\\n the embedding vector of the class and the standard deviation is estimated on the training samples of\\n that class. I have a few concerns with this:\\n * I wonder if the standard deviation will not be biased (small) since it is estimated on the training\\n samples. How important is that?\\n * I wonder if the threshold does not depend on things like the complexity of the class and the number\\n of training examples of the class. In general, I am not convinced that a single threshold can be used\\n to estimate if a new image is of a new class. I agree it might work for a small number of well\\n separate classes (like CIFAR-10), but I doubt it would work for problems with thousands of classes\\n which obviously are more interconnected to each other.\\n- I did not understand what to do when one decides that an image is of an unknown class. How should it\\n be labeled in that case?\\n- I did not understand why one needs to learn a separate classifier for the known classes, instead of\\n just using the distance to the known classes in the embedding space.\"}", "{\"review\": [\"We thank the reviewers for their feedback.\", \"I have not seen references to similarity learning, which can be used to say if two images are of the same class. These can obviously be used to determine if an image is of a known class or not, without having seen any image of the class.\", \"Thanks for the reference. Would you use the images of other classes to train classification similarity learning? These would have a different distribution than the completely unseen images from the zero shot classes? In other words, what would the non-similar objects be?\", \"I wonder if the standard deviation will not be biased (small) since it is estimated on the training samples. How important is that?\", \"We tried fitting a general covariance matrix and it decreases performance.\", \"I wonder if the threshold does not depend on things like the complexity of the class and the number of training examples of the class.\", \"It might be and we notice that different thresholds should be selected via cross validation.\", \"In general, I am not convinced that a single threshold can be used to estimate if a new image is of a new class.\", \"Right, we found a better performance by fitting different thresholds for each class. We will include this in follow-up paper submissions.\", \"I did not understand what to do when one decides that an image is of an unknown class. How should it be labeled in that case?\", \"Using the distances to the word vectors of the unknown classes.\", \"I did not understand why one needs to learn a separate classifier for the known classes, instead of just using the distance to the known classes in the embedding space.\", \"reply.\", \"The discriminative classifiers have much higher accuracy than the simple distances for known classes.\", \"I do wonder why the authors claim that they 'further extend [the] theoretical analysis [of Palatucci et a.] ... 
and weaken their strong assumptions'.\", \"Thanks, we will take this and the other typo out and uploaded a new version to arxiv (which should be available soon).\"]}", "{\"review\": [\"We thank the reviewers for their feedback.\", \"I have not seen references to similarity learning, which can be used to say if two images are of the same class. These can obviously be used to determine if an image is of a known class or not, without having seen any image of the class.\", \"Thanks for the reference. Would you use the images of other classes to train classification similarity learning? These would have a different distribution than the completely unseen images from the zero shot classes? In other words, what would the non-similar objects be?\", \"I wonder if the standard deviation will not be biased (small) since it is estimated on the training samples. How important is that?\", \"We tried fitting a general covariance matrix and it decreases performance.\", \"I wonder if the threshold does not depend on things like the complexity of the class and the number of training examples of the class.\", \"It might be and we notice that different thresholds should be selected via cross validation.\", \"In general, I am not convinced that a single threshold can be used to estimate if a new image is of a new class.\", \"Right, we found a better performance by fitting different thresholds for each class. We will include this in follow-up paper submissions.\", \"I did not understand what to do when one decides that an image is of an unknown class. How should it be labeled in that case?\", \"Using the distances to the word vectors of the unknown classes.\", \"I did not understand why one needs to learn a separate classifier for the known classes, instead of just using the distance to the known classes in the embedding space.\", \"reply.\", \"The discriminative classifiers have much higher accuracy than the simple distances for known classes.\", \"I do wonder why the authors claim that they 'further extend [the] theoretical analysis [of Palatucci et a.] ... and weaken their strong assumptions'.\", \"Thanks, we will take this and the other typo out and uploaded a new version to arxiv (which should be available soon).\"]}" ] }
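The reviews and replies in the record above describe the zero-shot decision rule: images are mapped into the word-vector space, each seen class is modelled as a Gaussian centred on its class vector, low-likelihood points are flagged as outliers (unseen classes) via a threshold, and those outliers are labelled by distance to the unseen classes' word vectors. Below is a minimal numpy sketch of that rule under stated assumptions — the embeddings, per-class standard deviations, and threshold are all made up, and the discriminative classifier the authors use for seen classes is replaced by a nearest-mean stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 50                                    # embedding dimension (arbitrary)
seen = {"cat": rng.normal(size=d), "truck": rng.normal(size=d)}
unseen = {"dog": rng.normal(size=d), "ship": rng.normal(size=d)}

def log_gaussian(x, mu, sigma):
    # isotropic Gaussian log-density up to an additive constant
    return -0.5 * np.sum((x - mu) ** 2) / sigma ** 2

def classify(x, sigmas, threshold=-25.0):
    # 1) novelty test: is x an outlier w.r.t. every seen-class Gaussian?
    scores = {c: log_gaussian(x, mu, sigmas[c]) for c, mu in seen.items()}
    if max(scores.values()) > threshold:
        # known object: the paper uses a separate discriminative classifier
        # here; nearest seen-class mean stands in for it in this sketch
        return max(scores, key=scores.get)
    # 2) zero-shot branch: nearest unseen-class word vector
    return min(unseen, key=lambda c: np.linalg.norm(x - unseen[c]))

sigmas = {c: 1.0 for c in seen}                 # per-class std, fit on training data
x = unseen["dog"] + 0.1 * rng.normal(size=d)    # a mapped test image
print(classify(x, sigmas))
```

The authors' reply notes that per-class thresholds chosen by cross-validation work better than the single global threshold used here.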
ZhGJ9KQlXi9jk
Complexity of Representation and Inference in Compositional Models with Part Sharing
[ "Alan Yuille", "Roozbeh Mottaghi" ]
This paper describes serial and parallel compositional models of multiple objects with part sharing. Objects are built by part-subpart compositions and expressed in terms of a hierarchical dictionary of object parts. These parts are represented on lattices of decreasing sizes, which yield an executive summary description. We describe inference and learning algorithms for these models. We analyze the complexity of this model in terms of computation time (for serial computers) and numbers of nodes (e.g., 'neurons') for parallel computers. In particular, we compute the complexity gains from part sharing and their dependence on how the dictionary scales with the level of the hierarchy. We explore three regimes of scaling behavior where the dictionary size (i) increases exponentially with the level, (ii) is determined by an unsupervised compositional learning algorithm applied to real data, (iii) decreases exponentially with scale. This analysis shows that in some regimes the use of shared parts enables algorithms which can perform inference in time linear in the number of levels for an exponential number of objects. In other regimes part sharing has little advantage for serial computers but can give linear processing on parallel computers.
[ "inference", "complexity", "part", "representation", "compositional models", "objects", "terms", "serial computers", "parallel computers", "level" ]
conferenceOral-iclr2013-conference
https://openreview.net/pdf?id=ZhGJ9KQlXi9jk
https://openreview.net/forum?id=ZhGJ9KQlXi9jk
ICLR.cc/2013/conference
2013
{ "note_id": [ "eG1mGYviVwE-r", "EHF-pZ3qwbnAT", "sPw_squDz1sCV", "Rny5iXEwhGnYN", "O3uWBm_J8IOlG", "Av10rQ9sBlhsf", "oCzZPts6ZYo6d", "p7BE8U1NHl8Tr", "zV1YApahdwAIu" ], "note_type": [ "comment", "review", "review", "comment", "comment", "comment", "review", "review", "comment" ], "note_created": [ 1363730760000, 1362609900000, 1363536060000, 1362095760000, 1363731300000, 1363643940000, 1362211680000, 1361997540000, 1362352080000 ], "note_signatures": [ [ "Alan L. Yuille, Roozbeh Mottaghi" ], [ "anonymous reviewer a9e8" ], [ "Aaron Courville" ], [ "Alan L. Yuille, Roozbeh Mottaghi" ], [ "Alan L. Yuille, Roozbeh Mottaghi" ], [ "anonymous reviewer c1e8" ], [ "anonymous reviewer 915e" ], [ "anonymous reviewer c1e8" ], [ "Alan L. Yuille, Roozbeh Mottaghi" ] ], "structured_content_str": [ "{\"reply\": \"Okay, thanks. We understand your viewpoint.\"}", "{\"title\": \"review of Complexity of Representation and Inference in Compositional Models with\\n Part Sharing\", \"review\": \"This paper explores how inference can be done in a part-sharing model and the computational cost of doing so. It relies on 'executive summaries' where each layer only holds approximate information about the layer below. The authors also study the computational complexity of this inference in various settings.\\n\\nI must say I very much like this paper. It proposes a model which combines fast and approximate inference (approximate in the sense that the global description of the scene lacks details) with a slower and exact inference (in the sense that it allows exact inference of the parts of the model). Since I am not familiar with the literature, I cannot however judge the novelty of the work.\", \"pros\": [\"model which attractively combines inference at the top level with inference at the lower levels\", \"the analysis of the computational complexity for varying number of parts and objects is interesting\", \"the work is very conjectural but I'd rather see it acknowledged than hidden under toy experiments.\"], \"cons\": \"\"}", "{\"review\": \"Reviewer c1e8,\\n\\nPlease read the authors' responses to your review. Do they change your evaluation of the paper?\"}", "{\"reply\": \"The unsupervised learning will also appear at ICLR. So we didn't describe it in this paper and concentrated instead on the advantages of compositional models for search after the learning has been done.\\n\\nThe reviewer says that this result is not very novel and mentions analogies to complexity gain of large convolutional networks. This is an interesting direction to explore, but we are unaware of any mathematical analysis of convolutional networks that addresses these issues (please refer us to any papers that we may have missed). Since our analysis draws heavily on properties of compositional models -- explicit parts, executive summary, etc -- we are not sure how our analysis can be applied directly to convolutional networks. Certain aspects of our analysis also are novel to us -- e.g., the sharing of parts, the parallelization. \\n\\nIn summary, although it is plausible that compositional models and convolutional nets have good scaling properties, we are unaware of any other mathematical results demonstrating this.\"}", "{\"reply\": \"Thanks for your comments. The paper is indeed conjectural which is why we are submitting it to this new type of conference. 
But we have some proof of content from some of our earlier work -- and we are working on developing real world models using these types of ideas.\"}", "{\"reply\": \"Sorry: I should have written 'although I do not see it as very surprising' instead of 'novel'.\\n\\nThe analogy with convolutional networks is that quantities computed by low-level nodes can be shared by several high level nodes. This is trivial in the case of conv. nets, and not trivial in your case because you have to organize the search algorithm in a manner that leverages this sharing.\\n\\nBut I still like your paper because it gives 'a self-contained description of a sophisticated and conceptually sound object recognition system'. Although my personal vantage point makes the complexity result less surprising, the overall achievement is non trivial and absolutely worth publishing.\"}", "{\"title\": \"review of Complexity of Representation and Inference in Compositional Models with\\n Part Sharing\", \"review\": \"This paper presents a complexity analysis of certain inference algorithms for compositional models of images based on part sharing.\\nThe intuition behind these models is that objects are composed of parts and that each of these parts can appear in many different objects; \\nwith sensible parallels (not mentioned explicitly by the authors) to typical sampling sets in image compression and to renormalization concepts in physics via model high-level executive summaries. \\nThe construction of hierarchical part dictionaries is an important and in my appreciation challenging prerequisite, but this is not the subject of the paper. \\n\\nThe authors discuss an approach for object detection and object-position inference exploiting part sharing and dynamic programming, \\nand evaluate its serial and parallel complexity. The paper gathers interesting concepts and presents intuitively-sound theoretical results that could be of interest to the ICLR community.\"}", "{\"title\": \"review of Complexity of Representation and Inference in Compositional Models with\\n Part Sharing\", \"review\": \"The paper describe a compositional object models that take the form of a hierarchical generative models. Both object and part models provide (1) a set of part models, and (2) a generative model essentially describing how parts are composed. A distinctive feature of this model is the ability to support 'part sharing' because the same part model can be used by multiple objects and/or in various points of the object hierarchical description. Recognition is then achieved with a Viterbi search. The central point of the paper is to show how part sharing provides opportunities to reduce the computational complexity of the search because computations can be reused.\\n\\nThis is analogous to the complexity gain of a large convolutional network over a sliding window recognizer of similar architecture. Although I am not surprised by this result, and although I do not see it as very novel, this paper gives a self-contained description of a sophisticated and conceptually sound object recognition system. Stressing the complexity reduction associated with part sharing is smart because the search complexity became a central issue in computer vision. On the other hand, the unsupervised learning of the part decomposition is not described in this paper (reference [19]) and could have been relevant to ICLR.\"}", "{\"reply\": \"We hadn't thought of renormalization or image compression. But renormalization does deal with scale (I think B. 
Gidas had some papers on this in the 90's). There probably is a relation to image compression which we should explore.\"}" ] }
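The discussion above centres on how part sharing lets a dynamic-programming search reuse the score of a sub-part across every object that contains it. The toy sketch below is not the paper's max-product inference over image lattices — it only illustrates the reuse, using memoisation over a hypothetical part hierarchy so that shared sub-parts are evaluated once.

```python
from functools import lru_cache

# Toy part hierarchy: each higher-level part/object is a composition of two
# children; leaves carry hypothetical filter scores.
compositions = {
    "edge_pair": ("edge_v", "edge_h"),
    "corner":    ("edge_pair", "edge_v"),
    "square":    ("corner", "corner"),
    "triangle":  ("corner", "edge_pair"),   # note: "corner" is shared
}
leaf_scores = {"edge_v": 1.0, "edge_h": 0.5}

calls = {"n": 0}

@lru_cache(maxsize=None)                    # memoisation = part sharing
def score(part):
    calls["n"] += 1                         # counts only cache misses
    if part in leaf_scores:
        return leaf_scores[part]
    left, right = compositions[part]
    return score(left) + score(right)       # stand-in for max-product DP

objects = ["square", "triangle"]
print({o: score(o) for o in objects})
print("evaluations:", calls["n"])           # each shared sub-part scored once
```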
ttnAE7vaATtaK
Indoor Semantic Segmentation using depth information
[ "Camille Couprie", "Clement Farabet", "Laurent Najman", "Yann LeCun" ]
This work addresses multi-class segmentation of indoor scenes with RGB-D inputs. While this area of research has gained much attention recently, most works still rely on hand-crafted features. In contrast, we apply a multiscale convolutional network to learn features directly from the images and the depth information. We obtain state-of-the-art performance on the NYU-v2 depth dataset with an accuracy of 64.5%. We illustrate the labeling of indoor scenes in video sequences that could be processed in real time using appropriate hardware such as an FPGA.
[ "depth information", "indoor scenes", "features", "indoor semantic segmentation", "work", "segmentation", "inputs", "area", "research" ]
conferenceOral-iclr2013-conference
https://openreview.net/pdf?id=ttnAE7vaATtaK
https://openreview.net/forum?id=ttnAE7vaATtaK
ICLR.cc/2013/conference
2013
{ "note_id": [ "qO9gWZZ1gfqhl", "tG4Zt9xaZ8G5D", "OOB_F66xrPKGA", "Ub0AUfEOKkRO1", "VVbCVyTLqczWn", "2-VeRGGdvD-58" ], "note_type": [ "review", "comment", "comment", "review", "comment", "review" ], "note_created": [ 1362163380000, 1363298100000, 1363297980000, 1362368040000, 1363297440000, 1362213660000 ], "note_signatures": [ [ "anonymous reviewer 777f" ], [ "Camille Couprie" ], [ "Camille Couprie" ], [ "anonymous reviewer 5193" ], [ "Camille Couprie" ], [ "anonymous reviewer 03ba" ] ], "structured_content_str": [ "{\"title\": \"review of Indoor Semantic Segmentation using depth information\", \"review\": \"Segmentation with multi-scale max pooling CNN, applied to indoor vision, using depth information. Interesting paper! Fine results.\", \"question\": \"how does that compare to multi-scale max pooling CNN for a previous award-winning application, namely, segmentation of neuronal membranes (Ciresan et al, NIPS 2012)?\"}", "{\"reply\": \"Thank you for your review and helpful comments. We computed and added error bars as suggested in Table 1. However, computing standard deviation for the individual means per class of objects does not apply here: the per class accuracies are not computed image per image. Each number corresponds to a ratio of the total number of correctly classified pixels as a particular class, on the number of pixels belonging to this class in the dataset.\\nFor the pixel-wise accuracy, we now give the standard deviation in Table 1, as well as the median. As the two variances are equal using depth or not, we computed the statistical significance using a two sample t-test, that results in a t statistic equal to 1.54, which is far from the mean performance of 52.2 and thus we can consider that the two reported means are statistically significant. \\n\\nAbout the class-by class improvements displayed in Table 1, we discuss the fact that objects having a constant appearance of depth are in general more inclined to take benefit from depth information. As the major part of the scenes contains categories that respect this property, the improvements achieved using depth involve a smaller number of categories, but a larger volume of data. \\n\\nTo strengthen our comparison of the two networks using or not depth information, we now display the results obtained using only the multiscale network without depth information in Figure 2. \\n\\nWe hope that the changes that we made in the paper (which should be updated within the next 24 hours) answer your concerns.\"}", "{\"reply\": \"Thank you for your review and helpful comments.\\nThe missing values in the depth acquisition were pre-processed using inpainting code available online on Nathan Siberman\\u2019s web page. We added the reference to the paper.\\n In the paper, we made the observation that the classes for which depth fails to outperform the RGB model are the classes of object for which the depth map does not vary too much. We now stress out better this observation with the addition of some depth maps at Figure 2. \\n\\nThe question you are raising about whether or not the depth is always useful, or if there could be better ways to leverage depth data is a very good question, and at the moment is still un-answered. 
The current RGBD multiscale network is the best way we found to learn features using depth, now maybe we could improve the system by introducing an appropriate contrast normalization of the depth map, or maybe we could combine the learned features using RGB and the learned features using RGBD\\u2026\"}", "{\"title\": \"review of Indoor Semantic Segmentation using depth information\", \"review\": \"This work builds on recent object-segmentation work by Farabet et al., by augmenting the pixel-processing pathways with ones that processes a depth map from a Kinect RGBD camera. This work seems to me a well-motivated and natural extension now that RGBD sensors are readily available.\\n\\nThe incremental value of the depth channel is not entirely clear from this paper. In principle, the depth information should be valuable. However, Table 1 shows that for the majority of object types, the network that ignores depth is actually more accurate. Although the averages at the bottom of Table 1 show that depth-enhanced segmentation is slightly better, I suspect that if those averages included error bars (and they should), the difference would be insignificant. In fact, all the accuracies in Table 1 should have error bars on them. The comparisons with the work of Silberman et al. are more favorable to the proposed model, but again, the comparison would be strengthened by discussion of statistical confidence.\\n\\nQualitatively, I would have liked to see the ouput from the convolutional network of Farabet et al. without the depth channel, as a point of comparison in Figures 2 and 3. Without that point of comparison, Figures 2 and 3 are difficult to interpret as supporting evidence for the model using depth.\\n\\nPro(s)\\n- establishes baseline RGBD results with convolutional networks\\n \\nCon(s)\\n- quantitative results lack confidence intervals\\n- qualitative results missing important comparison to non-rgbd network\"}", "{\"reply\": \"Thank you for your review and pointing out the paper of Ciresan et al., that we added to our list of references. Similarly to us, they apply the idea of using a kind of multi-scale network. However, Ciseran's approach to foveation differs from ours: where we use a multiscale pyramid to provide a foveated input to the network, they artificially blur the input's content, radially, and use non-uniform sampling to connect the network to it. The major advantage of using a pyramid is that the whole pyramid can be applied convolutionally, to larger input sizes. Once the model is trained, it must be applied as a sliding window to classify each pixel in the input. Using their method, which requires a radial blur centered on each pixel, the model cannot be applied convolutionally. This is a major difference, which dramatically impacts test time.\", \"note\": \"Ciseran's 2012 NIPS paper appeared after our first paper (ICML 2012) on the subject.\"}", "{\"title\": \"review of Indoor Semantic Segmentation using depth information\", \"review\": \"This work applies convolutional neural networks to the task of RGB-D indoor scene segmentation. The authors previously evaulated the same multi-scale conv net architecture on the data using only RGB information, this work demonstrates that for most segmentation classes providing depth information to the conv net increases performance.\\n\\nThe model simply adds depth as a separate channel to the existing RGB channels in a conv net. Depth has some unique properties e.g. infinity / missing values depending on the sensor. 
It would be nice to see some consideration or experiments on how to properly integrate depth data into the existing model. \\n\\nThe experiments demonstrate that a conv net using depth information is competitive on the datasets evaluated. However, it is surprising that the model leveraging depth is not better in all cases. Discussion on where the RGB-D model fails to outperform the RGB only model would be a great contribution to add. This is especially apparent in table 1. Does this suggest that depth isn't always useful, or that there could be better ways to leverage depth data?\", \"minor_notes\": \"'modalityies' misspelled on page 1\", \"overall\": [\"A straightforward application of conv nets to RGB-D data, yielding fairly good results\", \"More discussion on why depth fails to improve performance compared to an RGB only model would strengthen the experimental findings\"]}" ] }
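The reviews above turn on the paper's central design choice: depth enters the multiscale convolutional network simply as a fourth input channel alongside RGB. The sketch below shows only that input construction, under stated assumptions — random stand-in images, a crude average-pooling pyramid in place of the image pyramid the authors describe, and no network — since the reviewers' open question is precisely whether this is the best way to feed depth to the model.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W = 240, 320
rgb = rng.random((H, W, 3)).astype(np.float32)
depth = rng.random((H, W)).astype(np.float32)   # Kinect depth, already inpainted

# Treat depth as a fourth input channel, as the reviews describe.
x = np.concatenate([rgb, depth[..., None]], axis=-1)        # (H, W, 4)

def downsample(img, factor):
    # crude average pooling, standing in for the multiscale pyramid
    h, w = (img.shape[0] // factor) * factor, (img.shape[1] // factor) * factor
    img = img[:h, :w]
    return img.reshape(h // factor, factor, w // factor, factor, -1).mean(axis=(1, 3))

# Multiscale inputs fed to weight-shared convnets (one per scale).
pyramid = [x, downsample(x, 2), downsample(x, 4)]
print([p.shape for p in pyramid])
```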
OpvgONa-3WODz
Metric-Free Natural Gradient for Joint-Training of Boltzmann Machines
[ "Guillaume Desjardins", "Razvan Pascanu", "Aaron Courville", "Yoshua Bengio" ]
This paper introduces the Metric-Free Natural Gradient (MFNG) algorithm for training Boltzmann Machines. Similar in spirit to the Hessian-Free method of Martens [8], our algorithm belongs to the family of truncated Newton methods and exploits an efficient matrix-vector product to avoid explicitly storing the natural gradient metric $L$. This metric is shown to be the expected second derivative of the log-partition function (under the model distribution), or equivalently, the variance of the vector of partial derivatives of the energy function. We evaluate our method on the task of joint-training a 3-layer Deep Boltzmann Machine and show that MFNG does indeed have faster per-epoch convergence compared to Stochastic Maximum Likelihood with centering, though wall-clock performance is currently not competitive.
[ "natural gradient", "boltzmann machines", "mfng", "algorithm", "similar", "spirit", "martens", "algorithm belongs", "family", "truncated newton methods" ]
conferencePoster-iclr2013-conference
https://openreview.net/pdf?id=OpvgONa-3WODz
https://openreview.net/forum?id=OpvgONa-3WODz
ICLR.cc/2013/conference
2013
{ "note_id": [ "LkyqLtotdQLG4", "o5qvoxIkjTokQ", "dt6KtywBaEvBC", "pC-4pGPkfMnuQ" ], "note_type": [ "review", "review", "review", "review" ], "note_created": [ 1362012600000, 1362294960000, 1362379800000, 1363459200000 ], "note_signatures": [ [ "anonymous reviewer 9212" ], [ "anonymous reviewer 7e2e" ], [ "anonymous reviewer 77a7" ], [ "Guillaume Desjardins, Razvan Pascanu, Aaron Courville, Yoshua Bengio" ] ], "structured_content_str": [ "{\"title\": \"review of Metric-Free Natural Gradient for Joint-Training of Boltzmann Machines\", \"review\": \"The paper describes a Natural Gradient technique to train Boltzman machines. This is essentially the approach of Amari et al (1992) where the Fisher information matrix is expressed in which the authors estimate the Fisher information matrix L with examples sampled from the model distribution using a MCMC approach with multiple chains. The gradient g is estimated from minibatches, and the weight update x is obtained by solving Lx=g with an efficient truncated algorithm. Doing so naively would be very costly because the matrix L is large. The trick is to express L as the covariance of the Jacobian S with respect to the model distribution and take advantage of the linear nature of the sample average to estimate the product Lw in a manner than only requires the storage of the Jacobien for each sample.\\n\\nThis is a neat idea. The empirical results are preliminary but show promise. The proposed algorithm requires less iterations but more wall-clock time than SML. Whether this is due to intrinsic properties of the algorithm or to deficiencies of the current implementation is not clear.\"}", "{\"title\": \"review of Metric-Free Natural Gradient for Joint-Training of Boltzmann Machines\", \"review\": \"This paper presents a natural gradient algorithm for deep Boltzmann machines. The authors must be commended for their extremely clear and succinct description of the natural gradient method in Section 2. This presentation is particularly useful because, indeed, many of the papers on information geometry are hard to follow. The derivations are also correct and sound. The derivations in the appendix are classical statistics results, but their addition is likely to improve readability of the paper.\\n\\nThe experiments show that the natural gradient approach does better than stochastic maximum likelihood when plotting estimated likelihood against epochs. However, per unit computation, the stochastic maximum likelihood method still does better. \\n\\nI was not able to understand remark 4 about mini-batches. Why are more parallel chains needed? Why not simply use a single chain but have longer memory. I strongly think this part of the paper could be improved if the authors write down the pseudo-code for their algorithm. Another suggestion is to use automatic algorithm configuration to find the optimal hyper-parameters for each method, given that they are so close.\\n\\nThe trade-offs of second order versus first order optimization methods are well known in the deterministic case. There is is also some theoretical guidance for the stochastic case. I encourage the authors to look at the following papers for this:\\n\\nA Stochastic Gradient Method with an Exponential Convergence Rate for Finite Training Sets. N. Le Roux, M. Schmidt, F. Bach. NIPS, 2012. \\n\\nHybrid Deterministic-Stochastic Methods for Data Fitting.\\nM. Friedlander, M. Schmidt. SISC, 2012. \\n\\n'On the Use of Stochastic Hessian Information in Optimization Methods for Machine Learning' R. 
Byrd, G. Chin and W. Neveitt, J. Nocedal.\\nSIAM J. on Optimization, vol 21, issue 3, pages 977-995 (2011).\\n\\n'Sample Size Selection in Optimization Methods for Machine Learning'\\nR. Byrd, G. Chin, J. Nocedal and Y. Wu. to appear in Mathematical Programming B (2012).\\n\\nIn practical terms, given that the methods are so close, how does the choice of implementation (GPUs, multi-cores, single machine) affect the comparison? Also, how data dependent are the results. I would be nice to gain a deeper understanding of the conditions under which the natural gradient might or might not work better than stochastic maximum likelihood when training Boltzmann machines.\\n\\nFinally, I would like to point out a few typos to assist in improving the paper:\", \"page_1\": \"litterature should be literature\\nSection 2.2 cte should be const for consistency.\", \"section_3\": \"Avoid using x instead of grad_N in the linear equation for Lx=E(.) This causes overloading. For consistency with the previous section, please use grad_N instead.\", \"section_4\": \"Add a space between MNIST and [7].\\nAppendix 5.1: State that the expectation is with respect to p_{\\theta}(x).\\nAppendix 5.2: The expectation with respect to q_\\theta should be with respect to p_{\\theta}(x) to ensure consistency of notation, and correctness in this case.\", \"references\": \"References [8] and [9] appear to be duplicates of the same paper by J. Martens.\"}", "{\"title\": \"review of Metric-Free Natural Gradient for Joint-Training of Boltzmann Machines\", \"review\": \"This paper introduces a new gradient descent algorithm that combines is based on Hessian-free optimization, but replaces the approximate Hessian-vector product by an approximate Fisher information matrix-vector product. It is used to train a DBM, faster than the baseline algorithm in terms of epochs needed, but at the cost of a computational slowdown (about a factor 30). The paper is well-written, the algorithm is novel, although not fundamentally so.\\n\\nIn terms of motivation, the new algorithm aims to attenuate the effect of ill-conditioned Hessians, however that claim is weakened by the fact that the experiments seem to still require the centering trick. Also, reproducibility would be improved with pseudocode (including all tricks used) was provided in the appendix (or a link to an open-source implementation, even better).\", \"other_comments\": [\"Remove the phrase 'first principles', it is not applicable here.\", \"Is there a good reason to limit section 2.1 to a discrete and bounded domain X?\", \"I'm not a big fan of the naming a method whose essential ingredient is a metric 'Metric-free' (I know Martens did the same, but it's even less appropriate here).\", \"I doubt the derivation in appendix 5.1 is a new result, could be omitted.\", \"Hyper-parameter tuning is over a small ad-hoc set, and finally chosen values are not reported.\", \"Results should be averaged over multiple runs, and error-bars given.\", \"The authors could clarify how the algorithm complexity scales with problem dimension, and where the computational bottleneck lies, to help the reader judge its promise beyond the current results.\", \"A pity that it took longer than 6 weeks for the promised 'next revision', I had hoped the authors might resolve some of the self-identified weaknesses in the meanwhile.\"]}", "{\"review\": \"Thank you to the reviewers for the helpful feedback. 
The provided references will no doubt come in handy for future work.\", \"to_all_reviewers\": \"In an effort to speedup run time, we have re-implemented a significant portion of the MFNG algorithm. This resulted in large speedups for the diagonal approximation of MFNG, and all around lower memory consumption. Unfortunately, this has delayed the submission of a new manuscript, which is still under preparation. The focus of this new revision will be on:\\n(1) reporting mean and standard deviations of Fig.1 across multiple seeds.\\n(2) a more careful use of damping and the use of annealed learning rates.\\n(3) results on a second dataset, and hopefully a second model family (Gaussian RBMs).\\n\\nIn the meantime, we have uploaded a new version which aims to clarify and provide additional technical details, where the reviewers had found it necessary. The main modifications are:\\n* a new algorithmic description of MFNG\\n* a new graph which analyzes runtime performance of the algorithm, breaking down the run-time performance between the various steps of the algorithm (sampling, gradient computation, matrix-vector product, and MinRes iterations).\\nThe paper should appear shortly on arXiv, and can be accessed here in the meantime:\", \"http\": \"//brainlogging.files.wordpress.com/2013/03/iclr2013_submission1.pdf\\n\\nAn open-source implementation of MFNG can be accessed at the following URL.\", \"https\": \"//github.com/gdesjardins/MFNG.git\", \"to_anonymous_7e2e\": \"There are numerous advantages to sampling from parallel chains (with fewer Gibbs steps between samples), compared to using consecutive (or sub-sampled) samples generated by a single Markov chain. First, running multiple chains guarantees that the samples are independent. Running a single chain will no doubt result in correlated samples which will negatively impact our estimates of the gradient and the metric. Second, simulating multiple chains is an implicitly parallel process, which can be implemented efficiently on both CPU and GPU (especially so on GPU). The downside however is in increase in memory consumption.\", \"to_anonymous_77a7\": \">> In terms of motivation, the new algorithm aims to attenuate the effect of ill-conditioned Hessians, however that claim is weakened by the fact that the experiments seem to still require the centering trick.\\n\\nSince ours is a natural gradient method, it attenuates the effect of ill-conditioned probability manifolds (expected hessian of log Z, under the model distribution), not ill-conditioning of the expected hessian (under the empirical distribution). It is thus possible that centering addresses the latter form of ill-conditioning. Another hypothesis is that centering provides a better initialization point, around which the natural gradient metric is better-conditioned and thus easier to invert. More experiments are required to answer these questions.\\n\\n>> Also, reproducibility would be improved with pseudocode (including all tricks used) was provided in the appendix (or a link to an open-source implementation, even better).\\n\\nOur source code and algorithmic description should shed some light on this issue. The only 'trick' we currently use is a fixed damping coefficient along the diagonal, to improve conditioning and speed up convergence of our solver. 
Alternative forms of initialization and preconditioning were not used in the experiments.\\n\\n>> Is there a good reason to limit section 2.1 to a discrete and bounded domain chi?\\n\\nThese limitations mostly reflect our interest with Boltzmann Machines. Generalizing these results to unbounded domains (or continuous variables) remains to be investigated.\\n\\n>> Hyper-parameter tuning is over a small ad-hoc set, and finally chosen values are not reported.\\n\\nThe results of our grid-search have been added to the caption of Figure 1.\"}" ] }
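The review above summarises the algorithmic core: the natural-gradient metric L is the covariance of per-sample energy gradients under the model distribution, the product Lv can be formed directly from those samples without ever storing L, and Lx = g is then solved with a truncated iterative solver (MinRes with a fixed damping coefficient, per the authors' response). A small numpy/scipy sketch of that matrix-free solve follows; the gradient matrix is synthetic, standing in for the gradients collected from the DBM's Gibbs chains.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, minres

rng = np.random.default_rng(0)
n_samples, n_params = 256, 100
# Rows of S: gradient of the energy w.r.t. the parameters, one per model
# sample (in the paper these come from parallel persistent Gibbs chains).
S = rng.normal(size=(n_samples, n_params))
S_centered = S - S.mean(axis=0)
damping = 0.1                               # fixed diagonal damping

def metric_vector_product(v):
    # L v = Cov(dE) v, computed from samples without storing the n x n matrix L
    return S_centered.T @ (S_centered @ v) / n_samples + damping * v

L_op = LinearOperator((n_params, n_params), matvec=metric_vector_product)
grad = rng.normal(size=n_params)            # minibatch gradient estimate
natural_grad, info = minres(L_op, grad)     # truncated solve of L x = g
print(info, np.linalg.norm(natural_grad))
```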
yyC_7RZTkUD5-
Deep Predictive Coding Networks
[ "Rakesh Chalasani", "Jose C. Principe" ]
The quality of data representation in deep learning methods is directly related to the prior model imposed on the representations; however, the fixed priors generally used are not capable of adjusting to the context in the data. To address this issue, we propose deep predictive coding networks, a hierarchical generative model that empirically alters priors on the latent representations in a dynamic and context-sensitive manner. This model captures the temporal dependencies in time-varying signals and uses top-down information to modulate the representation in lower layers. The centerpiece of our model is a novel procedure to infer sparse states of a dynamic model, which is used for feature extraction. We also extend this feature extraction block to introduce a pooling function that captures locally invariant representations. When applied to natural video data, we show that our method is able to learn high-level visual features. We also demonstrate the role of the top-down connections by showing the robustness of the proposed model to structured noise.
[ "model", "networks", "priors", "deep predictive", "predictive", "quality", "data representation", "deep learning methods", "prior model", "representations" ]
conferencePoster-iclr2013-workshop
https://openreview.net/pdf?id=yyC_7RZTkUD5-
https://openreview.net/forum?id=yyC_7RZTkUD5-
ICLR.cc/2013/conference
2013
{ "note_id": [ "d6u7vbCNJV6Q8", "Xu4KaWxqIDurf", "00ZvUXp_e10_E", "iiUe8HAsepist", "EEhwkCLtAuko7", "o1YP1AMjPx1jv", "XTZrXGh8rENYB", "Za8LX-xwgqXw5", "3vEUvBbCrO8cu" ], "note_type": [ "review", "review", "comment", "comment", "review", "comment", "comment", "review", "review" ], "note_created": [ 1361968020000, 1363393200000, 1363392660000, 1363392180000, 1362405300000, 1363393020000, 1363393320000, 1362498780000, 1363392960000 ], "note_signatures": [ [ "anonymous reviewer ac47" ], [ "Rakesh Chalasani, Jose C. Principe" ], [ "Rakesh Chalasani, Jose C. Principe" ], [ "Rakesh Chalasani, Jose C. Principe" ], [ "anonymous reviewer 62ac" ], [ "Rakesh Chalasani, Jose C. Principe" ], [ "Rakesh Chalasani" ], [ "anonymous reviewer 1829" ], [ "Rakesh Chalasani, Jose C. Principe" ] ], "structured_content_str": [ "{\"title\": \"review of Deep Predictive Coding Networks\", \"review\": \"Deep predictive coding networks\\n\\nThis paper introduces a new model which combines bottom-up, top-down, and temporal information to learning a generative model in an unsupervised fashion on videos. The model is formulated in terms of states, which carry temporal consistency information between time steps, and causes which are the latent variables inferred from the input image that attempt to explain what is in the image.\", \"pros\": \"Somewhat interesting filters are learned in the second layer of the model, though these have been shown in prior work.\\n\\nNoise reduction on the toy images seems reasonable.\", \"cons\": \"The explanation of the model was overly complicated. After reading the the entire explanation it appears the model is simply doing sparse coding with ISTA alternating on the states and causes. The gradient for ISTA simply has the gradients for the overall cost function, just as in sparse coding but this cost function has some extra temporal terms.\\n\\nThe noise reduction is only on toy images and it is not obvious if this is what you would also get with sparse coding using larger patch sizes and high amounts of sparsity. The explanation of points between clusters coming from change in sequences should also appear in the clean video as well because as the text mentions the video changes as well. This is likely due to multiple objects overlapping instead and confusing the model.\\n\\nFigure 1 should include the variable names because reading the text and consulting the figure is not very helpful currently.\\n\\nIt is hard to reason what each of the A,B, and C is doing without a picture of what they learn on typical data. The layer 1 features seem fairly complex and noisy for the first layer of an image model which typically learns gabor-like features.\\n\\nWhere did z come from in equation 11?\\n\\nIt is not at all obvious why the states should be temporally consistent and not the causes. The causes are pooled versions of the states and this should be more invariant to changes at the input between frames.\", \"novelty_and_quality\": \"The paper introduces a novel extension to hierarchical sparse coding method by incorporating temporal information at each layer of the model. The poor explanation of this relatively simple idea holds the paper back slightly.\"}", "{\"review\": \"The revised paper is uploaded onto arXiv. It will be announced on 18th March.\\n\\nIn the mean time, the paper is also made available at\", \"https\": \"//www.dropbox.com/s/klmpu482q6nt1ws/DPCN.pdf\"}", "{\"reply\": \"Thank you for you review and comments, particularly for pointing out some mistakes in the paper. 
Following is our response to some concerns you have raised.\\n\\n>>> 'You should state the functional form for F and G!! Working backwards from the energy function, it looks as if these are just linear functions?'\\n\\nWe use the generalized state-space equations in Eq.1 and Eq.2 to motivate the relation between the proposed model and dynamic networks. However, please note that it is difficult to state the explicit form of F and G, since sparsity constraint even on a linear dynamical system leads to a non-linear mapping between the observations and the states.\\n\\n>>> 'In Eq. 1 should F( x_t, u_t ) instead just be F( x_t )? Eqs. 3 and 4 suggest it should just be F( x_t ), and this would resolve points which I found confusing later in the paper.'\\n\\nAgreed. We made appropriate changes in the revised paper.\\n\\n>>> The relationship between the energy functions in eqs. 3 and 4 is confusing to me. (this may have to do with the (non?)-dependence of F on u_t)\\n\\nWe made this explicit in the revised paper. Eq.3 represents the energy function for inferring the x_t with fixed u_t and Eq.4 represents the energy function for inferring the u_t with fixed x_t. In order to be more clear, we now wrote a unified energy function (Eq. 5) from which we jointly infer both x_t and u_t. \\n\\n>>> 'Section 2.3.1, 'It is easy to show that this is equivalent to finding the mode of the distribution...': You probably mean MAP not mode. Additionally this is non-obvious. It seems like this would especially not be true after marginalizing out u_t. You've never written the joint distributions over p(x_t, y_t, x_t-1), and the role of the different energy functions was unclear.'\\n\\nAgreed, this statement is incorrect and is removed.\\n\\n>>> 'Section 3.1: In a linear mapping, how are 4 overlapping patches different from a single larger patch?'\\n\\nPlease note that the states from the 4 overlapping patches are pooled using a non-linear function (sum of the absolute value of the state vectors). Hence, the output is no longer a linear mapping.\\n\\n>>> 'Section 3.2: Do you do anything about the discontinuities which would occur between the 100-frame sequences?'\\n\\nNo, we simply consider the concatenated sequence as a single video. This is made more clear in the paper.\"}", "{\"reply\": \"Thank you for your review and comments. We revised the paper to address most of your concerns. Following is our response to some specific point you have raised.\\n\\n>>> 'The explanation of the model was overly complicated. After reading the the entire explanation it appears the model is simply doing sparse coding with ISTA alternating on the states and causes. The gradient for ISTA simply has the gradients for the overall cost function, just as in sparse coding but this cost function has some extra temporal terms.'\\n\\nWe have made major changes to the paper to improve the presentation of the model. Hopefully the newer version will make the explanation more clear.\", \"we_would_also_like_to_emphasis_that__the_paper_makes_two_important_contributions\": \"(1) as you have pointed out, introduces sparse coding in dynamical models and solves it using a novel inference procedure similar to ISTA. 
(2) considers top-down information while performing inference in the hierarchical model.\\n\\n>>> 'The noise reduction is only on toy images and it is not obvious if this is what you would also get with sparse coding using larger patch sizes and high amounts of sparsity.'\\n\\nWe agree with you that it would strengthen our arguments by showing denoising on large images or videos. However, to scale this model to large images require convolutional network like model. This is an on going work and we are presently developing a convolutional model for DPCN.\\n\\n>>> 'The explanation of points between clusters coming from change in sequences should also appear in the clean video as well because as the text mentions the video changes as well. This is likely due to multiple objects overlapping instead and confusing the model.'\\n\\nCorrected. The points between the clusters appear because we enforce temporal coherence on the causes belonging two consecutive frames at the top layer (see Section 2.4). It is not due to gradual change in the sequences, as said previously.\\n\\n>>> 'Figure 1 should include the variable names because reading the text and consulting the figure is not very helpful currently.'\\n\\nCorrected. Also, a new figure is added to bring more clarity.\\n\\n>>> 'It is hard to reason what each of the A,B, and C is doing without a picture of what they learn on typical data. The layer 1 features seem fairly complex and noisy for the first layer of an image model which typically learns gabor-like features.'\\n\\nPlease see the supplementary material, section A.4 for visualization of the first layer parameters A, B and C. Also, please note that the Figure. 2 shows the visualization of the invariant matrices, B, in a two-layered network. These are obtained by taking the linear combination of Gabor like filters in C^(1) (see Figure .6) and hence, represent more complex structures. This is made more clear in the paper.\\n\\n>>> 'Where did z come from in equation 11?'\\n\\nCorrected. It is the Gaussian transition noise over the parameters.\\n\\n>>> 'It is not at all obvious why the states should be temporally consistent and not the causes. The causes are pooled versions of the states and this should be more invariant to changes at the input between frames.'\\n\\nWe say the states are more temporally 'consistent' to indicate that they are more stable than sparse coding, particularly in high sparsity conditions, because they have to maintain the temporal dependencies. On the other hand, we agree with you that the causes are more invariant to changes in the input and hence, are temporally 'coherent'.\"}", "{\"title\": \"review of Deep Predictive Coding Networks\", \"review\": \"This paper attempts to capture both the temporal dynamics of signals and the contribution of top down connections for inference using a deep model. The experimental results are qualitatively encouraging, and the model structure seems like a sensible direction to pursue. I like the connection to dynamical systems. The mathematical presentation is disorganized though, and it would have been nice to see some sort of benchmark or externally meaningful quantitative comparison in the experimental results.\", \"more_specific_comments\": \"You should state the functional form for F and G!! Working backwards from the energy function, it looks as if these are just linear functions?\\n\\nIn Eq. 1 should F( x_t, u_t ) instead just be F( x_t )? Eqs. 
3 and 4 suggest it should just be F( x_t ), and this would resolve points which I found confusing later in the paper.\\n\\nThe relationship between the energy functions in eqs. 3 and 4 is confusing to me. (this may have to do with the (non?)-dependence of F on u_t)\\n\\nSection 2.3.1, 'It is easy to show that this is equivalent to finding the mode of the distribution...': You probably mean MAP not mode. Additionally this is non-obvious. It seems like this would especially not be true after marginalizing out u_t. You've never written the joint distributions over p(x_t, y_t, x_t-1), and the role of the different energy functions was unclear.\\n\\nSection 3.1: In a linear mapping, how are 4 overlapping patches different from a single larger patch?\\n\\nSection 3.2: Do you do anything about the discontinuities which would occur between the 100-frame sequences?\"}", "{\"reply\": \"Thank you for review and comments. We revised the paper to address most of your concerns. Following is our response to some specific point you have raised.\\n\\n>>> ' The clarity of the paper needs to be improved. For example, it will be helpful to motivate more clearly about the specific formulation of the model' \\n\\nWe made some major changes to improve the presentation of the model, with more emphasis on explaining the formulation. Hopefully the revised version will improve the clarity of the paper. \\n\\n>>> ' The empirical evaluation of the model could be strengthened by directly comparing the DPCN to related works on non-synthetic datasets.'\\n\\nWe agree that the empirical evaluation could be strengthened by comparing DPCN with other models in tasks like denoising, classification etc., on large image and video datasets. However, to scale this model to larger inputs we require convolutional network like models, similar to many other methods. This is an on going work and we are presently working on a convolutional model for DPCN. \\n\\n>>>'In the beginning of the section 2.1, please define P, D, K to improve clarity. \\n>>> In section 2.2, little explanation about the pooling matrix B is given. Also, more explanations about equation 4 would be desirable. \\n>>> What is z_{t} in Equation 11?' \\n\\nCorrected. These are explained more clearly in the revised paper. z_{t} is the Gaussian transition noise over the parameters. \\n\\n>>> 'In Section 2.2, its not clear how u_{hat} is computed. ' \\n\\nThis is moved into section. 2.4 in the revised paper, where more explanation is provided about u_{hat}.\"}", "{\"reply\": \"This is in reply to reviewer 1829, mistakenly pasted here. Please ignore.\"}", "{\"title\": \"review of Deep Predictive Coding Networks\", \"review\": \"A brief summary of the paper's contributions, in the context of prior work.\\nThe paper proposes a hierarchical sparse generative model in the context of a dynamical system. The model can capture temporal dependencies in time-varying data, and top-down information (from high-level contextual/causal units) can modulate the states and observations in lower layers. \\n\\nExperiments were conducted on a natural video dataset, and on a synthetic video dataset with moving geometric shapes. On the natural video dataset, the learned receptive fields represent edge detectors in the first layer, and higher-level concepts such as corners and junctions in the second layer. 
In the synthetic sequence dataset, hierarchical top-down inference is used to robustly infer about \\u201ccausal\\u201d units associated with object shapes.\\n\\n\\nAn assessment of novelty and quality.\\nThis work can be viewed as a novel extension of hierarchical sparse coding to temporal data. Specifically, it is interesting to see how to incorporate dynamical systems into sparse hierarchical models (that alternate between state units and causal units), and how the model can perform bottom-up/top-down inference. The use of Nestrov\\u2019s method to approximate the non-smooth state transition terms in equation 5 is interesting.\\n\\nThe clarity of the paper needs to be improved. For example, it will be helpful to motivate more clearly about the specific formulation of the model (also, see comments below). \\n\\nThe experimental results (identifying high-level causes from corrupted temporal data) seem quite reasonable on the synthetic dataset. However, the results are all too qualitative. The empirical evaluation of the model could be strengthened by directly comparing the DPCN to related works on non-synthetic datasets.\", \"other_questions_and_comments\": [\"In the beginning of the section 2.1, please define P, D, K to improve clarity.\", \"In section 2.2, little explanation about the pooling matrix B is given. Also, more explanations about equation 4 would be desirable.\", \"What is z_{t} in Equation 11?\", \"In Section 2.2, it\\u2019s not clear how u_hat is computed.\", \"A list of pros and cons (reasons to accept/reject).\"], \"pros\": [\"The formulation and the proposed solution are technically interesting.\", \"Experimental results on a synthetic video data set provide a proof-of-concept demonstration.\"], \"cons\": [\"The significance of the experiments is quite limited. There is no empirical comparison to other models on real tasks.\", \"Inference seems to be complicated and computationally expensive.\", \"Unclear presentation\"]}", "{\"review\": \"Thank you for review and comments. We revised the paper to address most of your concerns. Following is our response to some specific point you have raised.\\n\\n>>> ' The clarity of the paper needs to be improved. For example, it will be helpful to motivate more clearly about the specific formulation of the model'\\n\\nWe made some major changes to improve the presentation of the model, with more emphasis on explaining the formulation. Hopefully the revised version will improve the clarity of the paper.\\n\\n>>> ' The empirical evaluation of the model could be strengthened by directly comparing the DPCN to related works on non-synthetic datasets.'\\n\\nWe agree that the empirical evaluation could be strengthened by comparing DPCN with other models in tasks like denoising, classification etc., on large image and video datasets. However, to scale this model to larger inputs we require convolutional network like models, similar to many other methods. This is an on going work and we are presently working on a convolutional model for DPCN. \\n\\n>>>'In the beginning of the section 2.1, please define P, D, K to improve clarity. \\n>>> In section 2.2, little explanation about the pooling matrix B is given. Also, more explanations about equation 4 would be desirable.\\n>>> What is z_{t} in Equation 11?'\\n\\nCorrected. These are explained more clearly in the revised paper. z_{t} is the Gaussian transition noise over the parameters. \\n\\n>>> 'In Section 2.2, its not clear how u_{hat} is computed. '\\n\\nThis is moved into section. 
2.4 in the revised paper, where more explanation is provided about u_{hat}.\"}" ] }
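The exchange above repeatedly refers to the ISTA-like procedure used to infer sparse states of the dynamic model. The sketch below is a simplified single-layer version under stated assumptions: it keeps reconstruction, sparsity, and a temporal term, but uses a plain quadratic penalty on the state transition (the paper instead handles a non-smooth transition term via Nesterov's smoothing) and omits the causes and the pooling matrix B. All dimensions and weights are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
p, k = 64, 128                              # observation and state dimensions
C = rng.normal(size=(p, k)) / np.sqrt(k)    # observation (dictionary) matrix
A = np.eye(k)                               # state-transition matrix
y = rng.normal(size=p)                      # current observation
x_prev = np.zeros(k)                        # previous state
lam_t, gamma = 0.5, 0.1                     # temporal weight, sparsity weight

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Step size: 1 / Lipschitz constant of the smooth part of the cost.
step = 1.0 / (np.linalg.norm(C, 2) ** 2 + lam_t)

x = np.zeros(k)
for _ in range(200):                        # ISTA iterations
    # gradient of 0.5||y - Cx||^2 + 0.5*lam_t||x - A x_prev||^2
    smooth_grad = C.T @ (C @ x - y) + lam_t * (x - A @ x_prev)
    x = soft_threshold(x - step * smooth_grad, step * gamma)

print("nonzeros:", int((np.abs(x) > 1e-8).sum()))
```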
zzEf5eKLmAG0o
Learning Features with Structure-Adapting Multi-view Exponential Family Harmoniums
[ "YoonSeop Kang", "Seungjin Choi" ]
We propose a graphical model for multi-view feature extraction that automatically adapts its structure to achieve a better representation of the data distribution. The proposed model, the structure-adapting multi-view harmonium (SA-MVH), has switch parameters that control the connections between hidden nodes and input views; the switch parameters are learned during training. Numerical experiments on a synthetic and a real-world dataset demonstrate the useful behavior of SA-MVH compared to existing multi-view feature extraction methods.
[ "features", "exponential family harmoniums", "graphical model", "feature extraction", "structure", "better representation", "data distribution", "model", "harmonium", "parameters" ]
conferencePoster-iclr2013-workshop
https://openreview.net/pdf?id=zzEf5eKLmAG0o
https://openreview.net/forum?id=zzEf5eKLmAG0o
ICLR.cc/2013/conference
2013
{ "note_id": [ "UUlHmZjBOIUBb", "tt7CtuzeCYt5H", "qqdsq7GUspqD2", "DNKnDqeVJmgPF" ], "note_type": [ "review", "comment", "comment", "review" ], "note_created": [ 1362353160000, 1363857240000, 1363857540000, 1360866060000 ], "note_signatures": [ [ "anonymous reviewer d966" ], [ "YoonSeop Kang" ], [ "YoonSeop Kang" ], [ "anonymous reviewer 0e7e" ] ], "structured_content_str": [ "{\"title\": \"review of Learning Features with Structure-Adapting Multi-view Exponential Family\\n Harmoniums\", \"review\": \"The paper introduces an new algorithm for simultaneously learning a hidden layer (latent representation) for multiple data views as well as automatically segmenting that hidden layer into shared and view-specific nodes. It builds on the previous multi-view harmonium (MVH) algorithm by adding (sigmoidal) switch parameters that turn a connection on or off between a view and hidden node and uses gradient descent to learn those switch parameters. The optimization is similar to MVH, with a slight modification on the joint distribution between views and hidden nodes, resulting in a change in the gradients for all parameters and a new switch variable to descend on.\\n\\nThis new algorithm, therefore, is somewhat novel; the quality of the explanation and writing is high; and the experimental quality is reasonable.\\n\\nPros\\n\\n1. The paper is well-written and organized.\\n\\n2. The algorithm in the paper proposes a way to avoid hand designing shared and private (view-specific) nodes, which is an important contribution.\\n\\n3. The experimental results indicate some interesting properties of the algorithm, in particular demonstrating that the algorithm extracts reasonable shared and view-specific hidden nodes.\\n\\nCons\\n1. The descent directions have W and the switch parameters, s_kj, coupled, which might make learning slow. Experimental results should indicate computation time.\\n\\n2. The results do not have error bars (in Table 1), so it is unclear if they are statistically significant (the small difference suggests that they may not be).\\n\\n3. The motivation in this paper is to enable learning of the private and shared representations automatically. However, DWH (only a shared representation) actually seems to perform generally better that MVH (shared and private). The experiments should better explore this question. It might also be a good idea to have a baseline comparison with CCA. \\n\\n4. In light of Con (3), the algorithm should also be compared to multi-view algorithms that learn only shared representations but do not require the size of the hidden-node set to be fixed (such as the recent relaxed-rank convex multi-view approach in 'Convex Multiview Subspace Learning', M. White, Y. Yu, X. Zhang and D. Schuurmans, NIPS 2012). In this case, the relaxed-rank regularizer does not fix the size of the hidden node set, but regularizes to set several hidden nodes to zero. This is similar to the approach proposed in this paper where a node is not used if the sigmoid value is < 0.5. \\nNote that these relaxed-rank approaches do not explicitly maximize the likelihood for an exponential family distribution; instead, they allow general Bregman divergences which have been shown to have a one-to-one correspondence with exponential family distributions (see 'Clustering with Bregman divergences' A. Banerjee, S. Merugu, I. Dhillon and J. Ghosh, JMLR 2005). 
Therefore, by selecting a certain Bregman divergence, the approach in this paper can be compared to the relaxed-rank approaches.\"}", "{\"reply\": \"1. The distribution of sigma(s_{kj}) had modes near 0 and 1, but the graph of the distribution was omitted due to the space constraints. The amount of separation between modes were affected by the hyperparameters that were not mentioned in the paper.\\n\\n2. It is true that the separation between digit features and noises in our model is not perfect. But it is also true that view-specific features contain more noisy features than the shared ones. \\nWe appreciate your suggestions about the additional experiments about de-noising digits, and we will present the result of the experiments if we get a chance.\"}", "{\"reply\": \"1. As the switch parameters converge quickly, the training time of our model was not very different from that of DWH.\\n2. We performed the experiment several times, but the result was consistent. Still, it is our fault that we didn't repeat the experiments enough to add error bars to the results.\\n3. MVHs are often outperformed by DWHs unless the sizes of latent node sets are not carefully chosen, and this is one of the most important reason for introducing switch parameters. To make our motivation clear, we assigned 50% of hidden nodes as shared, and evenly assigned the rest of hidden nodes as visible nodes for view-specific nodes of each view. We didn't compare our method to CCA, because we thought DWH would be a better example of models with only a shared representation.\\n4. We were not aware of the White et al.'s work when we submitted our work, and therefore couldn't make comparison with their model.\"}", "{\"title\": \"review of Learning Features with Structure-Adapting Multi-view Exponential Family\\n Harmoniums\", \"review\": \"The authors propose a bipartite, undirected graphical model for multiview learning, called structure-adapting multiview harmonimum (SA-MVH). The model is based on their earlier model called multiview harmonium (MVH) (Kang&Choi, 2011) where hidden units were separated into a shared set and view-specific sets. Unlike MVH which explicitly restricts edges, the visible and hidden units in the proposed SA-MVH are fully connected to each other with switch parameters s_{kj} indicating how likely the j-th hidden unit corresponds to the k-th view.\\n\\nIt would have been better if the distribution of s_{kj}'s (or sigma(s_{kj})) was provided. Unless the distribution has clear modes near 0 and 1, it would be difficult to tell why this approach of learning w^{(k)}_{ij} and s_{kj} separately is better than just learning \\tilde{w}^{(k)}_{ij} = w^{(k)}_{ij} sigma s_{kj} all together (as in dual-wing harmonium, DWH). Though, the empirical results (experiment 2) show that the features extracted by SA-MVH outperform both MVH and DWH.\\n\\nThe visualizations of shared and view-specific features from the first experiment do not seem to clearly show the power of the proposed method. For instance, it's difficult to say that the filters of roman digits from the shared features do seem to have horizontal noise. It would be better to try some other tasks with the trained model. Would it be possible to sample clean digits (without horizontal or vertical noise) from the model if the view-speific features were forced off? Would it be possible to denoise the corrupted digits? and so on..\", \"typo\": [\"Fig. 1 (c): sigma(s_{1j}) and sigma(s_{2j})\"]}" ] }
mLr3In-nbamNu
Local Component Analysis
[ "Nicolas Le Roux", "Francis Bach" ]
Kernel density estimation, a.k.a. Parzen windows, is a popular density estimation method, which can be used for outlier detection or clustering. With multivariate data, its performance is heavily reliant on the metric used within the kernel. Most earlier work has focused on learning only the bandwidth of the kernel (i.e., a scalar multiplicative factor). In this paper, we propose to learn a full Euclidean metric through an expectation-minimization (EM) procedure, which can be seen as an unsupervised counterpart to neighbourhood component analysis (NCA). In order to avoid overfitting with a fully nonparametric density estimator in high dimensions, we also consider a semi-parametric Gaussian-Parzen density model, where some of the variables are modelled through a jointly Gaussian density, while others are modelled through Parzen windows. For these two models, EM leads to simple closed-form updates based on matrix inversions and eigenvalue decompositions. We show empirically that our method leads to density estimators with higher test-likelihoods than natural competing methods, and that the metrics may be used within most unsupervised learning techniques that rely on such metrics, such as spectral clustering or manifold learning methods. Finally, we present a stochastic approximation scheme which allows for the use of this method in a large-scale setting.
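A minimal numerical sketch of the procedure described in this abstract (our own illustration, not the authors' implementation; the tied-covariance parameterization and function names are assumptions): one EM pass that updates a full kernel covariance by maximizing the leave-one-out Parzen log-likelihood.

```python
# Minimal sketch (our own, not the authors' code): one EM pass for learning a
# full kernel covariance Sigma of a Parzen-window density estimator by
# maximizing the leave-one-out log-likelihood, as described in the abstract.
import numpy as np
from scipy.special import logsumexp

def _loo_log_kernel(X, Sigma):
    """Pairwise Gaussian log-kernel values log N(x_i; x_j, Sigma), with the
    diagonal (j = i) masked out to implement the leave-one-out criterion."""
    n, d = X.shape
    diff = X[:, None, :] - X[None, :, :]                    # (n, n, d) pairwise differences
    P = np.linalg.inv(Sigma)
    maha = np.einsum('ijk,kl,ijl->ij', diff, P, diff)       # squared Mahalanobis distances
    logk = -0.5 * (maha + d * np.log(2 * np.pi) + np.linalg.slogdet(Sigma)[1])
    np.fill_diagonal(logk, -np.inf)                         # exclude the point itself
    return diff, logk

def em_step(X, Sigma):
    """E-step: responsibility of each kernel centre x_j for each held-out x_i;
    M-step: closed-form update of the shared (tied) covariance."""
    n, _ = X.shape
    diff, logk = _loo_log_kernel(X, Sigma)
    R = np.exp(logk - logsumexp(logk, axis=1, keepdims=True))   # rows sum to 1
    return np.einsum('ij,ijk,ijl->kl', R, diff, diff) / n

def loo_log_likelihood(X, Sigma):
    n, _ = X.shape
    _, logk = _loo_log_kernel(X, Sigma)
    return np.sum(logsumexp(logk, axis=1) - np.log(n - 1))
```

Iterating `em_step` until `loo_log_likelihood` stops improving mirrors the closed-form, matrix-inversion-based updates the abstract refers to; the semi-parametric Gaussian-Parzen variant and the large-scale stochastic approximation scheme are not covered by this sketch.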
[ "parzen windows", "kernel", "metrics", "popular density estimation", "outlier detection", "clustering", "multivariate data", "performance", "reliant" ]
conferencePoster-iclr2013-conference
https://openreview.net/pdf?id=mLr3In-nbamNu
https://openreview.net/forum?id=mLr3In-nbamNu
ICLR.cc/2013/conference
2013
{ "note_id": [ "D1cO7TgVjPGT9", "pRFvp6BDvn46c", "iGfW_jMjFAoZQ", "c2pVc0PtwzcEK" ], "note_type": [ "review", "review", "review", "review" ], "note_created": [ 1361300640000, 1362491220000, 1362428640000, 1364253000000 ], "note_signatures": [ [ "anonymous reviewer 71f4" ], [ "anonymous reviewer 61c0" ], [ "anonymous reviewer 18ca" ], [ "Nicolas Le Roux, Francis Bach" ] ], "structured_content_str": [ "{\"title\": \"review of Local Component Analysis\", \"review\": \"In this paper, the authors consider unsupervised metric learning as a\\ndensity estimation problem with a Parzen windows estimator based on \\nEuclidean metric. They use maximum likelihood method and EM algorithm\\nfor deriving a method that may be considered as an unsupervised counterpart to neighbourhood component analysis. Various versions of the method provide good results in the clustering problems considered.\\n\\n+ Good and interesting conference paper.\\n+ Certainly novel enough.\\n- Modifications are needed to combat the problems of overfitting,\\nlocal minima, and computational load in the basic approach proposed.\\nSome of these improvements are heuristic or seem to require hand-tuning.\", \"specific_comments\": \"- The authors should refer to the paper S. Kaski and J. Peltonen,\\n'Informative discriminant analysis', in T. Fawcett and N. Mishna (Eds.),\\nProc. of the 20th Int. Conf. on Machine Learning (ICML 2003), pp. 329-336,\\nAAAI Press, Menlo Park, CA, 2003.\\nIn this paper, essentially the same technique as Neighbourhood Component\\nAnalysis is defined under the name Informative discriminant analysis\\none year prior to the paper by Goldberger et al., your reference [16].\\n\\n- In the beginning of page 6 the authors state: 'Following [1, 2], the data\\nis progressively corrupted by adding dimensions of white Gaussian noise,\\nthen whitened.' In this case, whitening amplifies Gaussian noise, so that\\nit has the same power as the underlying data. Obviously this is the reason\\nwhy the experimental results approach to a random guess when the dimensions of the white noise increase sufficiently. The authors should mention that in real-world applications, one should not use whitening in this kind of situations, but rather compress the data using for example principal component analysis (PCA) without whitening for getting rid of the extra dimensions corresponding to white Gaussian noise. Or at least use the data as such without any whitening.\"}", "{\"title\": \"review of Local Component Analysis\", \"review\": \"Summary of contributions:\\nThe paper presents a robust algorithm for density estimation. The main idea is to model the density into a product of two independent distributions: one from a Parzen windows estimation (for modeling a low dimensional manifold) and the other from a Gaussian distribution (for modeling noise). Specifically, leave-one-out log-likelihood is used as the objective function of Parzen window estimator, and the joint model can be optimized using Expectation Maximization algorithm. In addition, the paper presents an analytical solution for M-step using eigen-decomposition. The authors also propose several heuristics to address local optima problems and to improve computational efficiency. The experimental results on synthetic data show that the proposed algorithm is indeed robust to noise.\", \"assessment_on_novelty_and_quality\": \"\", \"novelty\": \"This paper seems to be novel. 
The main ideas (using leave-one-out log-likelihood and decomposing the density as a product of Parzen windows estimator and a Gaussian distribution) are very interesting.\", \"quality\": \"The paper is clearly written. The method is well motivated, and the technical solutions are quite elegant and clearly described. The paper also presents important practical tips on addressing local optima problems and speeding up the algorithm. \\n\\nIn experiments, the proposed algorithm works well when noise dimensions increase in the data. The experiments are reasonably convincing, but they are limited to very low-dimensional, toy data. Evaluation on more real-world datasets would have been much more compelling. Without such evaluation, it\\u2019s unclear how the proposed method will perform on real data.\\n\\nAlthough interesting, the assumption about modeling the data density as a product of two independent distributions can be too strong and unrealistic. For example, how can this model handle the cases when noise are added to the low-dimensional manifold, not as orthogonal \\u201cnoise dimension\\u201d?\", \"other_comments\": [\"Figure 1 is not very interesting since even NCA will learn near-isotropic covariance, and the baseline method seems to be PCA whitening, not PCA.\"], \"pros_and_cons\": \"\", \"pros\": [\"The paper seems sufficiently novel.\", \"The main approach and solution are technically interesting.\", \"The experiments show proof-of-concept (albeit limited) demonstration that the proposed method is robust to noise dimensions (or irrelevant features).\"], \"cons\": [\"The experiments are limited to very low-dimensional, toy datasets. Evaluation on more real-world datasets would have been much more compelling. Without such evaluation, it\\u2019s unclear how the proposed method will perform on real data.\", \"The assumption about modeling the data density as a product of two independent distributions can be too strong and unrealistic (see comments above).\"]}", "{\"title\": \"review of Local Component Analysis\", \"review\": \"Summary of contributions:\\n1. The paper proposed an unsupervised local component analysis (LCA) framework that estimates the Parzen window covariance via maximizing the leave-one-out density. The basic algorithm is an EM procedure with closed form updates. \\n\\n2. One further extension of LCA was introduced, which assumes two multiplicative densities, one is Parzen window (non Gaussian) and the other is a global Gaussian distribution. \\n\\n3. Algorithms was designed to scale up the algorithms to large data sets.\", \"assessment_of_novelty_and_quality\": \"The work looks quite reasonable. But the approach seems to be a bit straightforward. The work is perhaps not very deep or inspiring. \\n\\nMy major concern is, other than the described problem setting being tackled, mostly toy problems, I don't see the significance of the work for addressing major machine learning challenges. For example, the authors argued the approach might be a good preprocessing step, but in the experiments, there is nothing like improving machine learning (e.g. classification) via such a pre-processing of data. \\n\\nIt's disappointing to see that the authors didn't study the identifiability of the Parzen/Gaussian model. Addressing this issue should have been a good chance to show some depth of the research.\"}", "{\"review\": \"First, we would like to thank the reviewers for their comments.\\n\\nThe main complaint was that the experiments were limited to toy problems. 
Since it is always hard to evaluate unsupervised learning algorithms (what is the metric of performance), the experiments were designed as a proof of concept. Hence, we agree with the reviewers and would love to see LCA tried and evaluated on real problems.\\n\\nFor the comment about the required modifications to avoid overfitting, there is truly only one parameter to set, i.e., the lambda parameter. All the others can easily be set to default values.\"}" ] }
OOuGtqpeK-cLI
Pushing Stochastic Gradient towards Second-Order Methods -- Backpropagation Learning with Transformations in Nonlinearities
[ "Tommi Vatanen", "Tapani Raiko", "Harri Valpola", "Yann LeCun" ]
Recently, we proposed to transform the outputs of each hidden neuron in a multi-layer perceptron network to have zero output and zero slope on average, and use separate shortcut connections to model the linear dependencies instead. We continue the work by firstly introducing a third transformation to normalize the scale of the outputs of each hidden neuron, and secondly by analyzing the connections to second order optimization methods. We show that the transformations make a simple stochastic gradient behave closer to second-order optimization methods and thus speed up learning. This is shown both in theory and with experiments. The experiments on the third transformation show that while it further increases the speed of learning, it can also hurt performance by converging to a worse local optimum, where both the inputs and outputs of many hidden neurons are close to zero.
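To make the three transformations concrete, here is a small sketch under our own simplifying assumptions, not the paper's exact rule: the paper normalizes a geometric mean of output scale and slope, while this sketch simply normalizes the output variance. The linear component that is subtracted is intended to be carried by separate shortcut connections, as in the abstract.

```python
# Minimal sketch (our simplification, not the paper's exact formulation):
# transform a tanh hidden unit so that, over a batch of pre-activations u, its
# output has approximately zero mean, zero average slope, and unit scale.
import numpy as np

def fit_transformations(u):
    """Estimate (alpha, beta, gamma) from a batch of pre-activations u."""
    f = np.tanh(u)
    alpha = np.mean(1.0 - f ** 2)            # average slope of tanh, to be subtracted
    beta = np.mean(f - alpha * u)            # average output of the slope-centred unit
    centred = f - alpha * u - beta
    gamma = 1.0 / (np.std(centred) + 1e-8)   # third transformation: normalize the scale
    return alpha, beta, gamma

def transformed_unit(u, alpha, beta, gamma):
    """y = gamma * (tanh(u) - alpha * u - beta); shortcut weights model the removed linear part."""
    return gamma * (np.tanh(u) - alpha * u - beta)
```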
[ "transformations", "outputs", "stochastic gradient", "methods", "backpropagation", "nonlinearities", "hidden neuron", "experiments", "perceptron network", "output" ]
conferencePoster-iclr2013-workshop
https://openreview.net/pdf?id=OOuGtqpeK-cLI
https://openreview.net/forum?id=OOuGtqpeK-cLI
ICLR.cc/2013/conference
2013
{ "note_id": [ "cAqVvWr0KLv0U", "og9azR3sTxoul", "Id_EI3kn5mX4i", "8PUQYHnMEx8CL" ], "note_type": [ "review", "review", "review", "review" ], "note_created": [ 1362183240000, 1362399720000, 1362387060000, 1363039740000 ], "note_signatures": [ [ "anonymous reviewer 1567" ], [ "anonymous reviewer b670" ], [ "anonymous reviewer c3d4" ], [ "Tommi Vatanen, Tapani Raiko, Harri Valpola, Yann LeCun" ] ], "structured_content_str": [ "{\"title\": \"review of Pushing Stochastic Gradient towards Second-Order Methods -- Backpropagation Learning with Transformations in Nonlinearities\", \"review\": \"In [10], the authors had previously proposed modifying the network\\nparametrization, in order to ensure zero-mean hidden unit activations across training examples (activity centering) and zero-mean derivatives (slope centering). This was achieved by introducing skip-connections between layers l-1 and l+1 and adding linear components to the non-linearity of layer l: these new parameters aren't learnt however, but instead are adjusted deterministically to enforce activity and slope centering. These ideas had initially been proposed by Schraudolph in earlier work, with [10] showing that these tricks significantly improved convergence of deep networks while also making the connection to second order methods.\\n\\nIn this work, the authors proposed adding an extra scaling parameter to the non-linearity, which is adjusted in order to make the digonal terms of the Hessian / Fisher Information matrix closer to unity. The authors study the effect of these 3 transformations by: \\n(1) measuring properties of the Hessian matrix with and without transformations, as well as angular distance of the resulting gradients to 2nd order gradients;\\n(2) comparing the overall classification convergence speed for a 2 and 3 layer MLPs on MNIST and finally;\\n(3) studying its effect on a deep auto-encoder.\\n\\nWhile I find this research direction particularly interesting, I find the \\noverlap between this paper and [10] to be rather troubling. While their analysis of slope / activity centering is new (and a more direct test of their \\nhypothesis), I feel that the case for these transformations had already been\\nmade in [10]. More importantly, evidence for the 3rd transformation is rather weak: it seems to slightly help convergence of 3-layer models and also helps in making the diagonal elements of the Hessian more unimodal. However, including gamma seem to rotate gradients *away* from 2nd order gradients. Also, their method did not seem to help in the deep auto-encoder setting: using gamma in the encoder network did not improve convergence speed, while using gamma in both encoders/decoders led to gamma either blowing-up or going to zero. While you would expect a diagonal approximation to a second-order method to help with the problem of dead-units, adding gamma did not seem to help in this respect. \\n\\nSimilarities between this paper and [10] are also evident in the writing itself. Large portions of Sections 1, 2 and 3 appear verbatim in [10]. This needs to be addressed prior to publication. The math of Section 3 could also be simplified by writing out gradients of log p (for each parameter \\theta) and then simply stating the general form of the FIM as E_eps[ dlogp/dtheta^T dlogp / dtheta]. As it stands Eqs. (12-17) are slightly inaccurate, as elements of the FIM should include an expectation over epsilon.\", \"summary\": \"I find the direction promising but the conclusion to be somewhat confusing / disappointing. 
The premise for gamma seemed well motivated and I expected more concrete evidence explaining the need for this transformation. Unfortunately, I am left wondering where things went wrong: some missing theoretical insight, wrong update rule on gamma or other ?\", \"other\": [\"Authors should consider using df/dx instead of the more ambiguous f' notation.\", \"Could the authors clarify what they mean by: 'transforming the model instead of the gradient makes it easier to generalize to other contexts such as variational Bayes ?' One downside I see to transforming the model instead of the gradients is that it obfuscates the link to second order methods and might thus hide useful insights.\", \"Section 4: 'algorith' -> algorithm\"]}", "{\"title\": \"review of Pushing Stochastic Gradient towards Second-Order Methods -- Backpropagation Learning with Transformations in Nonlinearities\", \"review\": \"This paper builds on previous work by the same authors that looks at performing dynamic reparameterizations of neural networks to improve training efficiency. The previously published approach is augmented with an additional parameter (gamma) which, although it is argued should help in theory, doesn't seem to in practice. Theoretical arguments for why the standard gradient computed under this reparameterization will be closer to a 2nd-order update are made, and experiments are conducted. While the theoretical arguments are pretty weak in my opinion (see detailed comments below), the experiments that looks at eigenvalues of the Hessian are somewhat more convincing, although they indicate that the originally published approach, without the gamma modification, is doing a better job.\", \"pros\": [\"reasonably well written\", \"experiments looking at eigenvalue distributions are interesting\"], \"cons\": \"- actual method is similar to authors' previous work in [10] and the older method of Schraudolph [12]\\n- the new modification doesn't seem to improve training efficiency, and even makes the eigenvalue distribution worse\\n- there seem to be problems with the theoretical analysis (maybe the authors can address this in their response?)\\n\\n\\n///// Detailed comments \\\\\\\\\\n\\nBecause it sounds similar to what you're doing, I think it would be helpful to give a slightly more detailed description of Schraudolph's 'gradient factor centering'. Does it correspond exactly to what you are doing in the case of neural nets? And if so, could you give an interesting example of how to apply your method to other models where Schraudolph's method would no longer apply? \\n\\nI don't understand what you mean by 'many competing paths' at the bottom of page 2. \\n\\nAnd when talking about 'linear dependencies' from x to y, what exactly do you mean? Do you mean the 1st-order components of the Taylor series of the true mapping or something else? Also, you might want to use affine when discussing functions that are linear + constant to be more technically precise.\\n\\nCan the arguments in section 3 be applied to network with more than 1 hidden layer?\\n\\nA concern I have with the analysis in section 3 is that, while assuming uncorrelated hidden unit outputs might be somewhat sensible (although I feel that our intuitions about how neural networks model certain mappings - such as 'representing different things' may be inaccurate), it seems less reasonable to assume that inputs (x) are uncorrelated with the outputs of the units, which seems to be needed to show that off-diagonal terms are zero (other than for eqn 12). 
You also seem to assume that certain 1st-derivatives of unit outputs are uncorrelated with various quantities (inputs, other unit outputs, and unit derivatives), which I don't think follows from the assumptions about the outputs of the units being uncorrelated with each other (but if this is indeed true, you should prove it or provide a reference). I think you should apply more rigor to these arguments for them to be convincing.\\n\\nI would recommend using an exact method to compute the Hessian. For example, you can compute it using n matrix-vector products, and tools for computing these automatically for any computational graph are widely available, as are particular formulae for neural networks. Such a method would be no more costly than what you are doing now, which involves n gradient computations.\\n\\nThe discussion surrounding equation 19 is an somewhat inaccurate and oversimplified account of the role that a constant like mu has in a second-order update rule like eqn. 19. This is a well studied and highly complex problem which doesn't really have to do with issues surrounding the inversion of the Hessian 'blowing up' so much as the problems of break-downs in model trust that occur when computing proposals based on local quadratic models of the objective. \\n\\nYour experiments seem to suggest that the eigenvalues are more even when you leave out the gamma parameter. How do you reconcile this with your theoretical analysis?\\n\\nWhy do you show a histogram of diagonal elements as opposed to eigenvalues in figure 2? I would argue that the concentration of the eigenvalues is a much better indicator of how close the Hessian matrix is to the identity (and hence how close the gradient is to being the same as a 2nd-order update) than what the diagonal entries look like. The diagonal entries of a highly non-diagonal matrix aren't particularly meaningful to look at.\\n\\nAlso, since your analysis was done using the Fisher, why not examine this matrix instead of the Hessian in your experiments?\"}", "{\"title\": \"review of Pushing Stochastic Gradient towards Second-Order Methods -- Backpropagation Learning with Transformations in Nonlinearities\", \"review\": \"* A brief summary of the paper's contributions, in the context of prior work.\\n\\nThis paper extends the authors' previous work on making sure that the hidden units in a neural net have zero output and slope on average, by also using direct connections that model explicitly the linear dependencies. The extension introduces another transformation which changes the scale of the outputs of the hidden units: essentially, they try to normalize both the scale and the slope of the outputs to one. This is done (essentially) by introducing a regularization parameter that encourages the geometric mean of the scale and the slope to be one.\\n\\nThe paper's contributions are also to give a theoretical analysis of the effect of the proposed transformations. The already proposed tricks are shown to make the non-diagonal elements of the Fisher information matrix closer to zero. The new transformation makes the diagonal elements closer to each other in scale, which is interesting as it's similar to what natural gradient does.\\n\\nThe authors also provide an empirical analysis of how the proposed method is close to what a second-order method would do (albeit on a small neural net). 
The experiment with the angle between the gradient and the second-order update is quite nice (I think such an experiment should be part of any paper that proposes new optimization tricks for training neural nets).\\n\\n* An assessment of novelty and quality.\\n\\nGenerally, this is a well-written and clear paper that extends naturally the authors' previous work. I think that the analysis is interesting and quite readable. I don't think that these particular transformations have been considered before in the literature and I like that they are not simply fixed transformations of the data, but something which integrates naturally into the learning algorithm.\\n\\n* A list of pros and cons (reasons to accept/reject).\\n\\nThe proposed scaling transformation makes sense in theory, but I'm not sure I agree with the authors' statement (end of Section 5) that the method's complexity is 'minimal regularization' compared to dropouts (maybe in theory, but honestly implementing dropout in a neural net learning system is considerably easier). The paper also doesn't show significant improvements (beyond analytical ones) over the previous transformations; based on the empirical results only I wouldn't necessarily use the scaling transformation.\"}", "{\"review\": \"First of all we would like to thank you for your informed, thorough and kind comments. We realize that there is major overlap with our previous paper [10]. We hope that these two papers could be combined in a journal paper later on. It was mentioned that we use some text verbatim from [10]. There is some basic methodology which is necessary to explain before going to deeper explanations and we felt that it is not a big violation to use our own text. However, we have now modified the sections in question with your comments and proposals in mind. If you feel that it is necessary to check every sentence for verbatim, please consider conditional acceptance with this condition.\\n\\nWe agree that the evidence supporting the use of the third transformation is rather weak. We have tried to report our findings as honestly as possible and also express our doubts in the paper (see, e.g., end of Section 4).\\n\\nTo reviewer 'Anonymous 1567':\\n\\nYou argue that Eqs. (12-17) are slightly accurate. However, we have computed the expectation over epsilon with pen and paper and epsilon does vanish from the Eqs. Thus, the Eqs. in question are exact. Would you think that we should still write down the gradients explicitly?\\n\\nWe had considered using the df/dx notation, but decided to use the f' notation, since the derivative is taken with respect to Bx and using the df/dx notation would require us to define a new variable u = Bx and denote df/du. We think, this would further clutter the equations. Would you think this is acceptable?\\n\\nWe tried to clarify the meaning of 'transforming the model instead of the gradient ...' in Discussion.\\n\\nTo reviewer 'Anonymous b670':\\n\\nWe have now explained the relationship to Schraudolph's method in more detail. We provide an example and refer to Discussion of [10].\\n\\nWhen writing about 'many competing paths' and 'linear dependencies', we have added explanations with equations in the updated version.\\n\\nThe question, whether the arguments in Section 3 can be applied to networks with more than one hidden layer: We have presented the theory with this simplified case in order to convey the understanding to the reader. 
We assume that the idea could be formulated in the general (deep) case, but writing it out would substantially complicate the equations. Our experimental results support this assumption.\\n\\nAbout the uncorrelatedness assumption, we have added the following explanation: 'Naturally, it is unrealistic to assume that inputs $x_t$, nonlinear activations $f(cdot)$, and their slopes $f^prime(cdot)$ are all uncorrelated, so the goodness of this approximation is empirically evaluated in the next section.'\\n\\nWe do realize that it is possible and more elegant to compute exact solution for the Hessian matrix. However, as being more error prone, it would require careful checking by, e.g., some approximative method. As the mere approximation suits our needs well, we refrained from doing the extra work for the exact solution. We have also acknowledged this in the paper.\\n\\nRegarding mu in Eq. 19: Thanks for this remark. We have reformulated the text surrounding Eq. 19. Could you kindly provide further suggestions and/or references if you still find it unsatisfactory.\", \"experiments_on_eigenvalue_distribution\": \"Fig. 1(a) suggest that there is no clear difference between the eigenvalue distributions with our without gamma (the vertical position of the plot is irrelevant since it corresponds to choosing a different learning rate).\\n\\nWe show histogram of diagonal elements in order to distinguish between weights. For instance, the colors in Fig.2 could not have been used otherwise.\\n\\nFisher Information vs. Hessian matrix: This is a relevant point for the future work. The Hessian describes the curvature of the actual optimization problem. We chose Fisher information matrix in the theoretical part simply because it has more compact equations. As we note in the paper, 'the hessian matrix is closely related to the Fisher information matrix, but it does depend on the output data and contains more term'. We argue that the terms present in Fisher information matrix will make our point clear and adding the other terms included in the Hessian would just be additional clutter.\\n\\nTommi Vatanen and Tapani Raiko\"}" ] }
UUwuUaQ5qRyWn
When Does a Mixture of Products Contain a Product of Mixtures?
[ "Guido F. Montufar", "Jason Morton" ]
We prove results on the relative representational power of mixtures of product distributions and restricted Boltzmann machines (products of mixtures of pairs of product distributions). Tools of independent interest are mode-based polyhedral approximations sensitive enough to compare full-dimensional models, and characterizations of possible modes and support sets of multivariate probability distributions that can be represented by both model classes. We find, in particular, that an exponentially larger mixture model, requiring an exponentially larger number of parameters, is required to represent probability distributions that can be represented by the restricted Boltzmann machines. The title question is intimately related to questions in coding theory and point configurations in hyperplane arrangements.
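For context, the two model classes being compared can be written in their standard forms (our notation, for binary observations; not copied from the paper). Summing out the hidden units of an RBM leaves a product over hidden units of two-component mixtures of product distributions, which is what the title's "product of mixtures" refers to, while a mixture model is a single sum over product distributions:

```latex
% Our notation, not the paper's: binary observations v \in \{0,1\}^n.
% A mixture of k product ("naive Bayes") distributions:
\[
  p_{\mathrm{MoP}}(v) \;=\; \sum_{j=1}^{k} \pi_j \prod_{i=1}^{n} p_{j,i}(v_i).
\]
% An RBM with m hidden units, hidden units summed out: a product of m
% two-component mixtures of product distributions,
\[
  p_{\mathrm{RBM}}(v) \;\propto\; e^{b^{\top} v} \prod_{j=1}^{m} \bigl(1 + e^{\,c_j + W_{j\cdot} v}\bigr).
\]
```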
[ "mixtures", "products", "mixture", "product", "product distributions", "restricted boltzmann machines", "results", "relative representational power", "pairs", "tools" ]
conferencePoster-iclr2013-workshop
https://openreview.net/pdf?id=UUwuUaQ5qRyWn
https://openreview.net/forum?id=UUwuUaQ5qRyWn
ICLR.cc/2013/conference
2013
{ "note_id": [ "boGLoNdiUmbgV", "dPNqPnWus1JhM", "vvzH6kFyntmsR", "dYGvTnylo5TlF", "FdwnFIZNOxF5S" ], "note_type": [ "review", "review", "comment", "review", "review" ], "note_created": [ 1362582360000, 1362219240000, 1364258160000, 1361559180000, 1363384620000 ], "note_signatures": [ [ "anonymous reviewer 51ff" ], [ "anonymous reviewer 6c04" ], [ "anonymous reviewer 6c04" ], [ "anonymous reviewer 91ea" ], [ "Guido F. Montufar, Jason Morton" ] ], "structured_content_str": [ "{\"title\": \"review of When Does a Mixture of Products Contain a Product of Mixtures?\", \"review\": \"This paper attempts at comparing mixture of factorial distributions (called product distributions) to RBMs. It does so by analyzing several theoretical properties, such as the smallest models which can represent any distribution with a given number of strong modes (or at least one of these distributions) or the smallest mixture which can represent all the distributions of a given RBM.\\n\\nThe relationship between RBMs and other models using hidden states is not fully understood and any clarification is welcome. Unfortunately, not only I am not sure the MoP is the most interesting class of models to analyze, but the theorems focus on extremely specific properties which severely limits their usefulness:\\n- the definition of strong modes makes the proofs easier but it is hard to understand how they relate to 'interesting' distributions. I understand this is a very vague notion but I would have appreciated hints about how the distributions we care about tend to have a high number of strong modes.\\n- the fact that there are exponentially many inference regions for an RBM whereas there are only a linear number of them for a MoP seems quite obvious, merely by counting the number of hidden states configurations. I understand this is far from a proof but this is to me more representative of the fact that one does not want to use the hidden states as a new representation for a MoP, which we already knew.\\n\\nAdditionnally, the paper is very heavy on definitions and gives very little intuition about the meaning of the results. Theorem 29 is a prime example as it takes a very long time to parse the result and I could really have used some intuition about the meaning of the result. This feeling is reinforced by the length of the paper (18 when the guidelines mentioned 9) and the inclusion of propositions which seem anecdotal (Prop.7, section 2.1, Corollary 18).\\n\\nIn conclusion, this paper tackles a problem which seems to be too contrived to be of general interest. Further, it is written in an unfriendly way which makes it more appropriate to a very technical crowd.\", \"minor_comments\": [\"Definition 2, you have that C is included in {0, 1}^n. That makes C a vector, not a set.\", \"Proposition 8: I think that G_3 should be G_4.\"]}", "{\"title\": \"review of When Does a Mixture of Products Contain a Product of Mixtures?\", \"review\": \"This paper compares the representational power of Restricted Boltzmann Machines\\n(RBMs) with that of mixtures of product distributions. The main result is that\\nRBMs can be exponentially more efficient (in terms of the number of parameters\\nrequired) to represent some classes of probability distributions. This provides\\ntheoretical justifications to the intuition behind the motivation for\\ndistributed representations, i.e. 
that the combinations of an RBN's hidden\\nunits can give rise to highly varying distributions, with a number of modes\\nexponential in the model's size.\\n\\nThis paper is very dense, and unfortunately I had to fast-forward through it in\\norder to be able to submit my review in time. Although most of the derivations\\ndo not appear to be that complex, they build on existing results and concepts\\nthat the typical machine learning crowd is typically unfamiliar with. As a\\nresult, one can be quickly overwhelmed by the amount of new material to digest,\\nand going through all steps of all proofs can take a long time.\\n\\nI believe the results are interesting since they provide a theoretical\\nfoundation to ideas that have been motivating the use of distributed\\nrepresentations. As a result, I think they are quite relevant to current\\nresearch on learning representations, even if the practical insights seem\\nlimited.\\n\\nThe maths appear to be solid, although I definitely did not check them in\\ndepth. I appreciate the many references to previous work.\\n\\nOverall, I think this paper deserves being published, although I wish it was\\nmade more accessible to the general machine learning audience, since in its\\ncurrent state it takes a lot of motivation to go through it. Providing\\nadditional discussion throughout the whole paper on the motivations and\\ninsights behind these many theoretical results, instead of mostly limiting them\\nto the introduction and discussion, would help the understanding and make the\\npaper more enjoyable to read.\", \"pros\": \"relevant theoretical results, (apparently) solid maths building on previous work\", \"cons\": \"requires significant effort to read in depth, little practical use\", \"things_i_did_not_understand\": [\"Fig. 1 (as a whole)\", \"Last paragraph of 1.1: why is this interesting?\", \"Fig. 5 (not clear why it is in some kind of pseudo-3D and what is the meaning\", \"of all these lines -- also some explanations come after it is referenced, which\", \"does not help)\", \"'(...) and therefore it contains distributions with (...)': I may be missing\", \"something obvious, but I did not follow the logical link ('therefore')\", \"I am unable to parse Remark 22, not sure if there is a typo (double 'iff') or\", \"I am just not getting it.\"], \"typos_or_minor_points\": [\"It seems like Fig. 3 and 4 should be swapped to match the order in which they\", \"appear in the text\", \"'Figure 3 shows an example of the partitions (...) defined by the models\", \"M_2,4 and RBM_2,3' -> mention also 'for some specific parameter values' to be\", \"really clear\", \"Deptartment (x2)\", \"Lebsegue\", \"I believe the notation H_n is not explicitly defined (although it can be\", \"inferred from the definition of G_n)\", \"There is a missing reference with a '?' on p. 9 after 'm <= 9'\", \"It seems to me that section 6 is also related to the title of section 5.\", \"Should it be a subsection?\", \"'The product of mixtures represented by RBMs are (...)': products\", \"'Mixture model (...) generate': models\"]}", "{\"reply\": \"Thanks for the updated version, I've re-read it quickly and it's indeed a bit clearer!\"}", "{\"title\": \"review of When Does a Mixture of Products Contain a Product of Mixtures?\", \"review\": \"The paper analyses the representational capacity of RBM's, contrasting it with other simple models.\\n\\nI think the results are new but I'm definitely not an expert on this field. 
They are likely to be interesting for people working on RBM's, and thus to people at ICLR.\"}", "{\"review\": \"We thank all three reviewers for the helpful comments, which enabled us to improve the paper. We have uploaded a revision to the arxiv taking into account the comments, and respond to some specific concerns below.\\n\\nWe were unsure as to whether we should make the paper longer by providing more in-line intuition around the steps of the proof of our main results. This would address the concerns of Reviewers 6c04 and 51ff, who thought some additional intuition throughout would be helpful, while Reviewer 51ff felt that the paper was perhaps too long as it was. We elected to balance these concerns by making significant changes to improve clarity without greatly expanding the exposition, making a net addition of about a page of text. However, by moving some material to the appendix, the main portion of the paper has been reduced in length to 14 pages.\", \"responding_to_specific_comments\": \"\", \"reviewer_6c04\": \">> Things I did not understand: \\n>>- Fig. 1 (as a whole) \\n\\nWe have reworked this figure and improved the explanation in the caption; the intensity of the shading represents the value of log(k), that is the function $f(m,n) = min { log(k): mathcal{M}_{n,k}$ contains RBM_{n,m} }$. \\n\\n>>- Last paragraph of 1.1: why is this interesting? \\n\\nSince we are arguing that the sets of probability distributions representable by RBMs and MoPs are quite different, we thought it would be interesting to mention what is known about when these two sets do intersect. We have added a comment about this.\\n\\n>>- Fig. 5 (not clear why it is in some kind of pseudo-3D and what is the meaning of all these lines -- also some explanations come after it is referenced, which does not help) \\n\\nWe have reworked the figure and added additional explanation in the text where the figure is referenced. This is a picture of the interior of a 3-dimensional simplex (a tetrahedron with vertices corresponding to the outcomes (0,0), (0,1), (1,0), (1,1)), with three sets of probability distributions depicted. The curved set is a 2-dimensional surface. The regions at the top and bottom are polyhedra, and the lines in the original figure were the edges of these polyhedra (the edges in back have now been removed to make the rendering clearer). Additionally, we linked to an interactive 3-d graphic object of Fig. 5. Using Adobe Acrobat Reader 7 (or higher) the reader can rotate and slice this object in 3-d. \\n\\n>>- '(...) and therefore it contains distributions with (...)': I may be missing something obvious, but I did not follow the logical link ('therefore') \\n\\nWe expanded and rephrased this to hopefully be more clear.\\n\\n>>- I am unable to parse Remark 22, not sure if there is a typo (double 'iff') or I am just not getting it. \\n\\nWe rewrote this remark, sorry for the confusion. The meaning was that the three statements (X iff Y iff Z) are equivalent.\\n\\n>>Typos or minor points: \\n>> - It seems like Fig. 3 and 4 should be swapped to match the order in which they appear in the text \\n>>- 'Figure 3 shows an example of the partitions (...) defined by the models M_2,4 and RBM_2,3' -> mention also 'for some specific parameter values' to be really clear \\n>>- Deptartment (x2) \\n>>- Lebsegue \\n>>- I believe the notation H_n is not explicitly defined (although it can be inferred from the definition of G_n) \\n>>- There is a missing reference with a '?' on p. 
9 after 'm <= 9' \\n>>- It seems to me that section 6 is also related to the title of section 5. Should it be a subsection? \\n>>- 'The product of mixtures represented by RBMs are (...)': products \\n>>- 'Mixture model (...) generate': models\\n\\nThank you, we fixed these.\", \"reviewer_51ff\": \">>In conclusion, this paper tackles a problem which seems to be too contrived to be of general interest. Further, it is written in an unfriendly way which makes it more appropriate to a very technical crowd. \\n>>- the fact that there are exponentially many inference regions for an RBM whereas there are only a linear number of them for a MoP seems quite obvious, merely by counting the number of hidden states configurations. I understand this is far from a proof but this is to me more representative of the fact that one does not want to use the hidden states as a new representation for a MoP, which we already knew.\\n\\nIn part this is simply a difference of philosophy. Some place greater emphasis on an intuition or demonstration on a dataset, while others prefer to see a proof. We recognize we may not have a lot to offer those comfortable relying upon their intuitive or empirical grasp of the situation, and instead aim to provide some mathematical proof to back up that intuition and satisfy the second group.\\n\\nIn trying to show that one class of models (RBMs or distributed representations) is better than another (here, non-distributed representations or naive Bayes models) at representing complex distributions, one must make a choice of criteria for comparison. One can pick, inevitably arbitrarily, a dataset for comparison and produce an empirical comparison. To provide a proof or theoretical comparison, one must choose a metric of complexity. Of course, we always want larger and more natural datasets and broader metrics, but one must start somewhere. We felt that in measuring the complexity of a distribution, the bumpiness of a probability distribution, or number of local maxima, modes, or strong modes in the Hamming topology was a reasonable place to start. While we examined other metrics of distribution complexity, this was one that provided enough leverage to distinguish the models. In the Discussion section, we talk about why multi-information, for example, is not suitable for making this distinction. Making such a choice of metric is the unfortunate price of theoretical justifications.\\n\\nAdditionally, the number of inference regions was not claimed to be new, but part of the exposition about the widespread intuition regarding distributed representations. We have added some exposition to clarify this.\", \"why_we_chose_mop\": \"we wanted to compare distributed representations with non-distributed representations. Since we are interested in learning representations, these should be two models with hidden variables that hold the representation. For a non-distributed model with hidden variables and the same observables as an RBM, the na'ive Bayes or MoP model is canonical. For example, a k-way interaction model might also be a good comparison, but it lacks hidden nodes.\\n\\n\\n\\n>>Additionnally, the paper is very heavy on definitions and gives very little intuition about the meaning of the results. Theorem 29 is a prime example as it takes a very long time to parse the result and I could really have used some intuition about the meaning of the result. 
This feeling is reinforced by the length of the paper (18 when the guidelines mentioned 9) and the inclusion of propositions which seem anecdotal (Prop.7, section 2.1, Corollary 18).\\n\\nSorry for the confusion. The introduction, as well as Figure 1 is devoted to explaining and interpreting Theorem 29. The statements therein such as 'We find that the number of parameters of the smallest MoP model containing an RBM model grows exponentially in the number of parameters of the RBM for any fixed ratio $0!<!m/n!<!infty$, see Figure 1' are hopefully more-intuitive corollaries of Theorem 29. The structure of the paper is to try to put the intuitive explanation of the results first, then give the (necessarily technical) proof showing how the results were obtained. We have added a pointer before Theorem 29 to indicate this.\\n\\nIn the revision we added explanations providing additional intuition as to why we are making certain definitions, and a road map of how the main results are proved.\\n\\n>>Minor comments: \\n>> - Definition 2, you have that C is included in {0, 1}^n. That makes C a vector, not a set. \\n\\nNo, a subset as we write $mathcal{C} subset mathcal{X}$ of the set of (binary) strings $mathcal{X}$ of length n is again a set of (binary) strings. One could of course interpret it in terms of a vector of indicator functions, but this is not the approach needed here.\\n\\n>> - Proposition 8: I think that G_3 should be G_4.\\n\\nSorry for the confusion. Again this is correct as is; G_4 would refer to binary strings of length 4, while the Proposition concerns strings of length 3.\"}" ] }
aJh-lFL2dFJ21
Discriminative Recurrent Sparse Auto-Encoders
[ "Jason Rolfe", "Yann LeCun" ]
We present the discriminative recurrent sparse auto-encoder model, which consists of an encoder whose hidden layer is recurrent, and two linear decoders, one to reconstruct the input, and one to predict the output. The hidden layer is composed of rectified linear units (ReLU) and is subject to a sparsity penalty. The network is first trained in unsupervised mode to reconstruct the input, and subsequently trained discriminatively to also produce the desired output. The recurrent network is time-unfolded with a given number of iterations, and trained using back-propagation through time. In its time-unfolded form, the network can be seen as a very deep multi-layer network in which the weights are shared between the hidden layers. The depth allows the system to exhibit all the power of deep network while substantially reducing the number of trainable parameters. From an initially unstructured recurrent network, the hidden units of discriminative recurrent sparse auto-encoders naturally organize into a hierarchy of features. The systems spontaneously learns categorical-units, whose activity build up over time through interactions with part-units, which represent deformations of templates. Even using a small number of hidden units per layer, discriminative recurrent sparse auto-encoders that are pixel-permutation agnostic achieve excellent performance on MNIST.
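A rough sketch of the architecture described above (our notation and loss weighting, not the authors' code; the per-step injection of the input and the unrolling depth T are assumptions for illustration): the recurrent ReLU encoder is unrolled for T steps and feeds two linear decoders, one trained with a reconstruction-plus-sparsity loss and one with a cross-entropy loss.

```python
# Rough sketch of the model described in the abstract (our notation, not the
# authors' code): a recurrent ReLU encoder unrolled for T steps with the input
# injected at every step, a linear reconstruction decoder, and a linear
# classification decoder, trained with an L1 sparsity penalty on the code.
import numpy as np

def encode(x, E, S, b, T=11):
    """z_{t+1} = ReLU(E x + S z_t - b), starting from z_0 = 0."""
    z = np.zeros(S.shape[0])
    for _ in range(T):
        z = np.maximum(0.0, E @ x + S @ z - b)
    return z

def losses(x, y_onehot, z, D, C, lam=0.5):
    """Unsupervised (reconstruction + sparsity) and discriminative (cross-entropy) terms."""
    recon = 0.5 * np.sum((x - D @ z) ** 2) + lam * np.sum(np.abs(z))
    scores = C @ z
    log_softmax = scores - (scores.max() + np.log(np.sum(np.exp(scores - scores.max()))))
    return recon, -float(y_onehot @ log_softmax)
```

In the unfolded view this is a T-layer network with tied weights, which is why the parameter count stays small even as the effective depth grows.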
[ "discriminative recurrent sparse", "network", "hidden layer", "input", "number", "time", "hidden units", "model", "encoder", "recurrent" ]
conferenceOral-iclr2013-conference
https://openreview.net/pdf?id=aJh-lFL2dFJ21
https://openreview.net/forum?id=aJh-lFL2dFJ21
ICLR.cc/2013/conference
2013
{ "note_id": [ "TTDqPocbXWPbU", "10n94yAXr20pD", "NNXtqijEtiN98", "vCQPfwXgPoCu7", "__De_0xQMv_R3", "uc38pbD6RhB1Z", "6FfM6SG2MKt8r", "zzUEFMPkQcqkJ", "Sih8ijosvDuO_", "4V-Ozm5k8mVcn", "SKcvK2UDvgKxL", "-uMO-UhKgU-Z_", "UEx3pAOcLlpPT", "KVmXTReW18TyN", "5Br_BDba_D57X", "PZqMVyiGDoPcE", "yy9FyB6XUYyiJ" ], "note_type": [ "review", "comment", "review", "review", "review", "review", "comment", "review", "comment", "review", "review", "review", "review", "comment", "comment", "review", "review" ], "note_created": [ 1364548920000, 1363534380000, 1363222920000, 1364571960000, 1361907180000, 1363316520000, 1367028540000, 1362400920000, 1363817880000, 1363400280000, 1362177060000, 1368275760000, 1363223340000, 1363664400000, 1363395000000, 1363734420000, 1362604500000 ], "note_signatures": [ [ "Richard Socher" ], [ "anonymous reviewer bc93" ], [ "Jason Rolfe" ], [ "Yann LeCun" ], [ "Yoshua Bengio" ], [ "anonymous reviewer bc93" ], [ "Jason Rolfe" ], [ "anonymous reviewer a32e" ], [ "Jason Tyler Rolfe, Yann LeCun" ], [ "anonymous reviewer dd6a" ], [ "anonymous reviewer 8ddb" ], [ "Richard Socher" ], [ "Jason Rolfe" ], [ "Jason Tyler Rolfe, Yann LeCun" ], [ "Jason Tyler Rolfe, Yann LeCun" ], [ "Andrew Maas" ], [ "Jürgen Schmidhuber" ] ], "structured_content_str": [ "{\"review\": \"Hi,\\n\\nThis looks a whole lot like the semi-supervised recursive autoencoder that we introduced at EMNLP 2011 [1] and the unfolding recursive autoencoder that we introduced at NIPS 2011.\\n\\nThese models also have a reconstruction + cross entropy error at every iteration and hence do not suffer from the vanishing gradient problem.\\n\\nThe main (only?) differences are the usage of a rectified linear unit instead of tanh and restricting yourself to have a chain structure which is just a special case of a tree structure.\\n\\n[1] http://www.socher.org/index.php/Main/Semi-SupervisedRecursiveAutoencodersForPredictingSentimentDistributions\"}", "{\"reply\": \"It's true that any deep NN can be represented by a large recurrent net, but that's not the point I was making. The sentence I commented on gives the impression that a recurrent network has the same representational power as any deep network 'while substantially reducing the number of trainable parameters'. If you construct an RNN the way you described in your answer to my remark, you don't reduce the number of trainable parameters at all.\\n\\nPut differently, the impression that this particular sentence gives, is that you can simply take a recurrent net, iterate it 5 times, and you would have the same representational power as any 5-layer deep NN (with the same number of nodes in each layer as the RNN), but with only one 5-th of the trainable parameters. This is, as I'm sure you'll agree, simply not true. \\n\\nRemember, my remark is only concerned with the precise wording of the message you wish to convey. I do agree that iterating the network gives you more representational power for a fixed number of trainable parameters (that is more or less what you have shown in your paper), just not that it gives you as much representational power as in the case where the recurrent weights can be different each iteration (which is what happens in an equivalent deep NN).\"}", "{\"review\": \"We are very thankful to all the reviewers and commenters for their constructive comments.\\n\\n* Anonymous 8ddb:\\n\\n1. 
Indeed, the architecture of DrSAE is similar to a deep sparse rectifier neural network (Glorot, Bordes, and Bengio, 2011) with tied weights (Bengio, Boulanger-Lewandowski and Pascanu, 2012). In addition to the loss functions used, DrSAE differs from deep sparse rectifier neural networks with tied weights in that the input projects to all layers. We note this connection in the next-to-last paragraph of Section 1, and have added the reference to the citation you suggest.\\n\\nIt is true that part-units are strongly connected to the inputs while categorical-units are more strongly connected to part units than to the inputs. The categorical-units seem to act like units in the top layers of a multilayer network.\\n\\n2(a). The input is indeed fed into all layers. We have added an explicit mention of this in the third paragraph of section 1, and in the first paragraph of section 2.\\n\\n2(b). We removed the statement suggesting that DrSAE is less subject to the vanishing gradient problem in the introduction, because we have little hard evidence for it in the paper. \\n\\nHowever, the intuition behind the statement is somewhat opposite to Yoshua Bengio's argument: the overall 'gain' of the recurrent encoder network (without input provided to each layer) must be around 1, simply because it is trained to reconstruct the input through a linear decoder whose columns have norm equal 1. The unit activities can neither explode nor vanish over the recurrent steps because of that. Since the overall recurrent encoder has gain 1, each of the (identical) layers must have gain 1 too. Because of the reconstruction criterion, each recurrent step must also be approximately invertible (otherwise information would be lost, and reconstruction would be impossible). It is our intuition that in a sequence of invertible layers whose gain in 1, there is little vanishing gradient issues and little gradient 'diffusion' issue (the informal notion of gain can be made precise with in terms of eigenvalues of the Jacobian).\\n\\nWe do observed that as training of a DrSAE progresses, the magnitude of the gradient tends to equalize between all layers. But this will be the subject of future investigations.\\n\\n3. The column-wise bounds on the norms of the matrices are enforced through projection on the unit sphere (i.e., column-wise scaling) after each SGD step. We have added explicit mention of this in footnote 2.\\n\\n5. Units still differentiate into part-units and categorical-units with only two temporal steps, but the prototypes are not as clean. We have added a mention of this to the end of section 4. Further investigation of the effect of the choice of the encoder on the differentiation into categorical-units and part-units will be the subject of future work.\\n\\n\\n* Testing on other datasets than MNIST (Anonymous 8ddb and Anonymous a32e):\\n\\nYes, results other datasets like CIfAR would be ideal, but this will require a convolutional (or locally-connected) version of the method, since almost all architectures that yield good results on natural image datasets are of that type. We are currently working on a convolutional extension to DrSAE, which we are applying to classification of natural image datasets. 
But we believe that the architecture, algorithm, and results are interesting enough to be brought to the attention of the community before results on natural images become available.\\n\\nThat said, in preliminary testing using fully-connected DrSAE, we've obtained results superior to the deep sparse rectifier neural networks of Glorot, Bordes & Bengio (2011) on CIFAR-10; specifically, 48.19% error rate using only 200 hidden units per layer, versus their error rate 49.52% using 1000 hidden units per layer. Since Glorot et al. use a similar architecture (as discussed in point 1), this suggests that the differentiation into part-units and categorical-units improves classification performance on natural images.\\n\\n\\n* Anonymous a32e:\\n\\n1. The architecture of the network is captured by equation 2 and figure 1. The loss function is specified in equations 1 and 4. The review of prior work and discussion of its relation to our network necessarily assumes familiarity with the prior work, since there is only space for a cursory summary of the published ideas upon which we draw. However, we would hope that the main analysis in the paper, in sections 3, 4, and 5, are understandable even without intimate familiarity with LISTA and the like.\\n\\n2. The natural way to avoid manually chosen constants is to do an automatic search of hyperparameter space, maximizing the performance on a validation set. We hope to perform this search in the near future, as it will likely improve classification performance. As it stands, our ad-hoc parameters effectively offer a lower bound on the performance obtainable with a more rigorous search of hyperparameter space.\\n\\n4. There are two kinds of 'fairness' in comparing results: 1. keep the computational complexity constant; 2. keep the number of parameters constant. The comparison between 2 and 11 time steps is intended to keep the number of parameters constant (though it does increase the computational complexity). It is unclear how one could hold both the number of parameters and the computational load constant within the DrSAE framework.\\n\\n5. A more systematic exploration of encoder depth should certainly be undertaken as part of a complete search of the hyperparameter space.\\n\\n\\n* Yoshua Bengio:\\n\\n1. We are presently exploring the cause of the differentiation into part-units and categorical units. In particular, we've now succeeded in inducing the differentiation using an unsupervised criterion derived from the discriminative loss of DrSAE. The interaction between our logistic loss function and the autoencoding framework thus seems to constitute the crucial ingredient beyond what is present in similar networks like your deep sparse rectifier neural networks. This work is ongoing, but we look forward to reporting this result soon. It would be interesting to explore the degree to which the rectified-linear activation function is necessary for the differentiation into part- and categorical-units. Our intuition, based upon experience with this unsupervised regularizer, as well as the fact that units differentiate even in a two-hidden-layer DrSAE, is that this activation function is not essential.\\n\\n2. Please see point 2(b) in response to reviewer Anonymous 8ddb.\\n\\n3 & 4. Thank you for the references. They have been included in the paper. 
We think it is worth noting, though, that dropout, tangent propagation, and iterative pretraining and stacking of networks (as in deep convex networks) are regularizations or augmentations of the training procedure that may be applicable to a wide class of network architectures, including DrSAE.\"}", "{\"review\": \"Minor side comment: IN GENERAL, having a cost term at each iteration (time step of the unfolded network) does not eliminate the vanishing gradient problem!!!\\n\\nThe short-term dependencies can now be learned through the gradient of the cost at the early iterations, but the long-term effects may still be improperly learned. Now it may be that one is lucky (and that could apply in your setting) and that the weights that are appropriate for going from the state at t to a small cost at t+delta with small delta are also appropriate for minimizing the longer term costs for large delta. \\n\\nThere are good examples of that in the literature. A toy example is the recurrent network that learns the parity of a sequence. Because of the recursive nature of the solution, if you do a very good job at predicting the parity for short sequences, there is a good chance that the solution will generalize properly to much longer sequences. Hence a curriculum that starts with short sequences and gradually extends to longer ones is able to solve the problem, whereas only training from long ones without intermediate targets at every time step completely fails.\"}", "{\"review\": \"Thank you for this interesting contribution. The differentiation of hidden units into class units and part units is fascinating and connects with what I consider a central objective for deep learning, i.e., learning representations where the learned features disentangle the underlying factors of variation (as I have written many times in the past, e.g., Bengio, Courville & Vincent 2012). Why do you think this differentiation is happening? What are the crucial ingredients of your setup that are necessary to observe that effect?\", \"i_have_a_remark_regarding_this_sentence_on_the_first_page\": \"'Recurrence opens the possibility of sharing parameters between successive layers of a deep network, potentially mitigating the vanishing gradient problem'. My intuition is that the vanishing/exploding gradient problem is actually *worse* with recurrent nets than with regular (unconstrained) deep nets. One way to visualize this is to think of (a) multiplying the same number with itself k times, vs (b) multiplying k random numbers. Clearly, (a) will explode or vanish faster because in (b) there will be some 'cancellations'. Recurrent nets correspond to (a) because the weights are the same at all time steps (but yes, the derivatives of the non-linearities will be different), whereas unconstrained deep nets correspond to (b) because the weight matrices are different at each layer.\", \"minor_point_about_prior_work\": \"in the very old days I worked on using recurrent nets trained by BPTT to iteratively reconstruct missing inputs and produce discriminatively trained outputs. It worked quite well. 
NIPS'95, Recurrent Neural Networks for Missing or Asynchronous Data.\\n\\nRegarding the results on MNIST, among the networks without convolution and transformations, one should add the Manifold Tangent Classifier (0.81% error), which uses unsupervised pre-training, the Maxout Networks with dropout (0.94%, no unsupervised pre-training), DBMs with dropout (0.79%, with unsupervised pre-training), and the deep convex networks (Yu & Deng, 0.83% also with unsupervised learning).\"}", "{\"title\": \"review of Discriminative Recurrent Sparse Auto-Encoders\", \"review\": \"SUMMARY:\\n\\nThe authors describe a discriminative recurrent sparse auto-encoder, which is essentially a recurrent neural network with a fixed input and rectified linear units. The auto-encoder is initially trained to reproduce digits of MNIST, while enforcing a sparse representation. In a later phase it is trained in a discriminative (supervised) fashion to perform classification. \\nThe authors discuss their observations. Most prominently they describe the occurrence of two types of nodes: part-units and categorical units. The first are units that encode low-level features such as pen-strokes, whereas the second encode specific digits within the MNIST set. It is shown that before the discriminative training, the image reconstruction happens mostly by combining pen-strokes, whereas after the discriminative training, image reproduction happens mainly by the combination of a prototype digit of the corresponding class, which is subsequently transformed by adding pen-stroke-like features. The authors state that this observation is consistent with the underlying hypothesis of auto-encoders that the data lies on low-dimensional manifolds, and the auto-encoder learns to split the representation of a digit into a categoric prototype and a set of transformations.\\n\\nGENERAL OPINION\\n\\nThe paper and the suggested network architecture are interesting and, as far as I know, quite original. It is also compelling to see the unique ways in which the unsupervised and supervised training contribute to the image reconstruction. Overall I believe this paper is a suitable contribution to this conference. I have some questions and remarks that I will list here. \\n\\nQUESTIONS \\n\\n- From figure 5 I get the impression that the state dynamics are convergent; for sufficiently large T, the internal state of the nodes (z) will no longer change. This begs the question: is the ideal situation the one where T goes to infinity? If so, could you consider the following scenario: We somehow compute the fixed, final state $z(infty)$ (maybe this can be performed faster than by simply iterating the system). Once we have it, we can perform backpropagation-through-time on a sequence where, at each step in time, the states are identical (the fixed-point state). This would be an interesting scenario, as you might be able to greatly accelerate the training process (all Jacobians are identical, error backpropagation has an analytical solution), and you explicitly train the system to perform well on this fixed point, so that transient effects are no longer important.\\nPerhaps I'm missing some crucial detail here, but it seems like an interesting scenario to discuss. \\n\\n- On a related note: what happens if - after training - the output (image reconstruction and classification) is constructed using the state from a later/earlier point in time? 
How would performance degrade as a function of time?\\n\\nREMARKS \\n\\n- In both the abstract and the introduction the following sentence appears: 'The depth implicit in the temporally-unrolled form allows the system to exhibit all the power of deep networks, while substantially reducing the number of trainable parameters'. I believe this is a dangerous statement, as tied weights will also impose a severe restriction on representational power (so they will not have 'all the power of deep networks'). I would agree with a rephrasing of this sentence that says something along the lines of: 'The depth implicit in the temporally-unrolled form allows the system to exhibit far more representational power, while keeping the number of trainable parameters fixed'. \\n\\n- I agree with Yoshua's remark on the vanishing gradient problem. Tied weights cause every change in parameter space to be exponentially amplified/dampened (save for nonlinear effects), making convergence harder. The authors should probably rewrite this sentence.\\n\\n- I deduce from the text that the system is only trained to provide output (image reconstruction and classification) at the T-th iteration. As such, the backpropagated error is only 'injected' at this point in time. This is distinctly different from the 'common' BPTT setup, where error is injected at each time step, and the authors should maybe explicitly mention this. Apparently reviewer 'Anonymous 8ddb' has interpreted the model as if it was to provide output at each time step ('the reconstruction cost found at each step which provide additional error signal'), so definitely make this more clear.\\n\\n- The authors mention that they trained the DrSAE with T=11, so 11 iterations. I suspect this number emerges from a balance between computational cost and the need for a sufficient number of iterations? Please explicitly state this in your paper.\\n\\n- As a general remark, the comparison to ISTA and LISTA is interesting, but the authors go to great lengths to find detailed analogies, which might not be that informative. I am not sure whether the other reviewers would agree with me, but maybe the distinction between categorical and part-units can be deduced without this complicated and not easy-to-understand analysis. It took me some time to figure out the content of paragraphs 3.1 and 3.2. \\n\\n- I also agree with other reviewers that it is unfortunate that only MNIST has been considered. Results on more datasets, and especially other kinds of data (audio, symbolic?) might be quite informative\"}", "{\"reply\": \"Thank you very much for your constructive comments.\\n\\nThere are indeed similarities between discriminative recurrent auto-encoders and the semi-supervised recursive autoencoders of Socher, Pennington, Huang, Ng, & Manning (2011a); we will add the appropriate citation to the paper. However, the networks of Socher et al. (2011a) are very similar to RAAMs (Pollack, 1990), but with a dynamic, greedy recombination structure and a discriminative loss function. As a result, they differ from DrSAE as outlined in our response to Jurgen Schmidhuber. Like the work of Socher et al. (2011a), DrSAE is based on a recursive autoencoder that receives input on each iteration, with the top layer subject to a discriminative loss. However, Socher et al. 
(2011a), like Pollack (1990), iteratively adds new information on each iteration, and then reconstructs both the new information and the previous hidden state from the resulting hidden state (Socher, Huang, Pennington, Ng, & Manning, 2011 reconstructs the entire history of inputs). The discriminative loss function is also applied at every iteration. In contrast, the input to DrSAE is the same on each iteration, and only the reconstruction and classification based upon the final state is optimized. The entire recursive LISTA stack constitutes a single encoder, which is decoded in a single (linear) step. Whereas Socher et al. (2011a) performs discriminative compression of a variable-length, structured input using a zero-hidden-layer encoder, our goal is static autoencoding using a deep (recursive) encoder. \\n\\nMoreover, the main contribution of our paper is the demonstration of a novel and interesting hidden representation (based upon prototypes and their deformations along the data manifold), along with a network that naturally learns this representation. The hierarchical refinement of categorical-units from part-units that we observe seems unlikely to evolve in the networks of Socher et al. (2011a), since the activity of the part-units cannot be maintained across iterations by continuous input. The KL-divergence used for discriminative training in Socher et al. (2011a) is only identical to the logistic loss if the target distributions have no uncertainty (i.e., they are one-hot). Our ongoing work suggests that this difference is likely to be important for the differentiation of categorical-units and part-units.\"}", "{\"title\": \"review of Discriminative Recurrent Sparse Auto-Encoders\", \"review\": [\"Authors propose an interesting idea to use deep neural networks with tied weights (recurrent architecture) for image classification. However, I am not familiar enough with the prior work to judge novelty of the idea.\", \"On the critical note, the paper is not easy to read without good knowledge of prior work, and is pretty long. I would recommend authors to consider following to make their paper more accessible:\", \"the description should be shorter, simpler and self-contained\", \"try to avoid the ad-hoc constants everywhere\", \"run experiments on something larger and more difficult than MNIST - current experiments are not convincing to me; together with many hand-tuned constants, I would be afraid that this model might not work at all on more realistic tasks (or that a lot of additional manual work would be needed)\", \"when you claim that accuracy degrades from 1.21% to 1.49% if 2 instead of 11 time steps are used, you are comparing models with much different computational complexity: try to be more fair\", \"also, it would be interesting to show results for the larger model (400 neurons) with less time steps than 11\", \"Still, I consider the main idea interesting, and I believe it would lead to interesting discussions at the conference.\"]}", "{\"reply\": \"Q2: In response to your query, we have just completed a run with the encoder row magnitude bound set to 1/T, rather than 1.25/T. MNIST classification performance was 1.13%, rather than 1.08%. 
Although heuristic, the hyperparameters used in the paper were not the result of extensive hand-tuning.\"}", "{\"title\": \"review of Discriminative Recurrent Sparse Auto-Encoders\", \"review\": \"The paper describes the following variation of an autoencoder: An encoder (with relu nonlinearity) is iterated for 11 steps, with observations providing biases for the hiddens at each step. Afterwards, a decoder reconstructs the data from the last-step hiddens. In addition, a softmax computes class-labels from the last-step hiddens. The model is trained on labeled data using the sum of reconstruction and classification loss. To perform unsupervised pre-training the classification loss can be ignored initially.\\n\\nIt is argued that training the architecture causes hiddens to differentiate into two kinds of unit (or maybe a continuum): part-units, which mainly try to perform reconstruction, and categorical units, which try to perform classification. Various plots are shown to support this claim empirically.\\n\\nThe idea is interesting and original. The work points towards a direction that hasn't been explored much, and that seems relevant in practice and from the point of view of how classification may happen in the brain. Some anecdotal evidence is provided to support the part-categorical separation claim. The evidence seems interesting. Though I'm pondering still whether there may be other explanations for those plots. Training does seem to rely somewhat on finely tuned parameter settings like individual learning rates and weight bounds.\\n\\nIt would be nice to provide some theoretical arguments for why one should expect the separation to happen. A more systematic study would be nice, too, eg. measuring how many recurrent iterations are actually required for the separation to happen. To what degree does that separation happen with only pre-training vs. with the classification loss? And in the presence of classification loss, could it happen with shallow model, too? The writing and organization of the paper seems preliminary and could be improved. For example, it is annoying to jump back-and-forth to refer to plots, and some plots could be made more informative (see also comments below). \\n\\nThe paper seems to suggest that the model gradually transforms an input towards a class-template. I'm not sure if I agree, that this is the right view given that the input is clamped (by providing biases via E) so it is available all the time. Any comments? \\n\\nIt may be good to refer to 'Learning continuous attractors in recurrent networks', Seung, NIPS 1998, which also describes a recurrent autoencoder (though that model is different in that it iterates encoder+decoder not just encoder with clamped data).\\n\\n\\nQuestions/comments:\\n\\n- It would be much better to show the top-10 part units and the top-10 categorical units instead of figure 2, which shows a bunch of filters for which it is not specified to what degree they're which (except for pointing out in the text that 3 of them seem to be more like categorical units).\\n\\n- What happens if the magnitude of the rows of E is bounded simply by 1/T instead of 1.25/(T-1) ? (page 3 sentence above Eq. 
4) Are learning and classification results sensitive to that value?\\n\\n- Last paragraph of section 1: 'through which the prototypes of categorical-units can be reshaped into the current input': Don't you mean the other way around?\\n\\n- Figure 4 seems to suggest that categorical units can have winner-takes-all dynamics that disfavor other categorical units from the same class. Doesn't that seem strange? \\n\\n- Section 3.2 (middle) mentions why S-I is plotted but S-I is shown and referred to before (section 3.1) and the explanation should instead go there.\\n\\n- What about the 2-step model result with 400 hiddens (end of section 4)?\"}", "{\"title\": \"review of Discriminative Recurrent Sparse Auto-Encoders\", \"review\": \"Summary and general overview:\\n----------------------------------------------\\nThe paper introduces Discriminative Recurrent Sparse Auto-Encoders, a new model, but more importantly a careful analysis of the behaviour of this model. It suggests that the hidden layers of the model learn to differentiate into a hierarchical structure, with part units at the bottom and categorical units on top. \\n\\nQuestions and Suggestions\\n----------------------------------------\\n1. Given equation (2) it seems that model is very similar to recurrent neural network with rectifier units as the one used for e.g. in [1]. The main difference would be how the model is being trained (the pre-training stage as well as the additional costs and weight norm constraints). I think this observation could be very useful, and would provide a different way of understanding the proposed model. From this perspective, the differentiation would be that part units have weak recurrent connections and are determined mostly by the input (i.e. behave as mlp units would), while categorical units have strong recurrent connections. I'm not sure if this parallel would work or would be helpful, but I'm wondering if the authors explored this possibility or have any intuitions about it.\\n\\n2. When mentioning that the model is similar to a deep model with tied weights, one should of course make it clear that additionally to tied weights, you feed the input (same input) at each layer. At least this is what equation (2) suggests. Is it the case? Or is the input fed only at the first step ? \\n\\n2. As Yoshua Bengio pointed out in his comment, I think Recurrent Networks, and hence DrSAE, suffer more from the vanishing gradient problem than deep forward models (contrary to the suggestion in the introduction). The reason is the lack of degrees of freedom RNNs have due to the tied weights used at each time step. If W for an RNN is moved such that its largest eigenvalue becomes small enough, the gradients have to vanish. For a feed forward network, all the W_i of different layers need to change such to have this property which seems a less likely event. IMHO, the reason for why DrSAE seem not to suffer too much from the vanishing gradient is due to (a) the norm constraint, and (b) the reconstruction cost found at each step which provide additional error signal. One could also say that 11 steps might not be a high enough number for vanishing gradient to make learning prohibitive. \\n\\n3. Could the authors be more specific when they talk about bounding the column-wise norm of a matrices. Is this done through a soft constraint added to the cost? Is it done, for e.g., by scaling D if the norm exceeds the chosen bound ? Is there a projection done at each SGD step ? It is not clear from the text how this works. \\n\\n4. 
The authors might have expected this from reviewers, but anyway. Could the authors validate this model (in a revision of the paper) on different datasets, besides MNIST? It would be useful to know whether you see the same split of hidden units for more complex datasets (say CIFAR-10).\\n\\n5. The model had been run with only 2 temporal steps. Do you still get some kind of split between categorical and part hidden units ? Did you attempt to see how the number of temporal steps affects this division of units ?\", \"references\": \"[1] Yoshua Bengio, Nicolas Boulanger-Lewandowski, Razvan Pascanu, Advances in Optimizing Recurrent Networks, arXiv:1212.0901\"}", "{\"review\": \"Hi Jason and Yann,\\nThanks for the insightful reply.\\nBest,\\nRichard\"}", "{\"review\": \"* Jurgen Schmidhuber:\\n\\nThank you very much for your constructive comments.\\n\\n1. Like the work of Pollack (1990), DrSAE is based on a recursive autoencoder that receives input on each iteration. However, (sequential) RAAMs iteratively add new information on each iteration, and then iteratively reconstruct the entire history of inputs from the resulting hidden state. In contrast, the input to DrSAE is the same on each iteration, and only the reconstruction based upon the final state is optimized. The entire recursive LISTA stack constitutes a single encoder, which is decoded in a single (linear) step. Whereas RAAMs perform unsupervised history compression, our goal is static autoencoding. Moreover, DrSAEs perform classification in addition to autoencoding; the logistic loss component is essential to the differentiation into categorical-units and part-units (RAAMs have no discriminative component). Finally, DrSAE's encoder is non-negative LISTA (a multi-layer network of rectified linear units, with tied parameters between the layers, and a projection from the input to all layers), its decoder is linear, and it makes use of a loss function including L1 regularization and logistic classification loss (RAAMs use a single-hidden-layer sigmoidal neural network without sparsification). RAAMs and DrSAEs are both recurrent and receive some sort of input on each iteration, but they have different architectures and solve different problems; they resemble each other only in the coarsest possible manner.\\n\\n2. Please see point 2(b) in response to reviewer Anonymous 8ddb; the references to the vanishing gradient problem were tangential, and have been removed.\\n\\n3. As you point out, it is well-known that data set augmentations (such as translations and elastic deformation of the input) and explicit regularization of the parameters to force the corresponding invariances (such as a convolutional network structure) improve the performance of machine learning algorithms of this type. It is similarly possible to improve performance by training many instances of the same network (perhaps on different subsets of the data) and aggregating their outputs. It is standard practice to separately report performance with and without making use of these techniques. Deformations can obviously be added in later to yield improved performance. We have added a note regarding the possibility of these augmentations, along with the appropriate citations.\"}", "{\"reply\": \"*Anonymous dd6a\\n\\nThank you very much for your helpful comments.\", \"p2\": \"Both the categorical-units and the part-units participate in reconstruction. 
Since the categorical-units become more active than the part-units (as per figure 7), they actually make a larger contribution to the reconstruction (evident in figure 9(b,c), where even the first step of the progressive reconstruction is strong).\", \"p4\": \"The differentiation into part-units and categorical-units does occur even with only two ISTA iterations (one pass through the explaining-away matrix), the shallowest architecture in which categorical-units can aggregate over part-units, as noted at the end of section 4. Without the classification loss, the network is an instance of (non-negative) LISTA, and categorical-units do not develop at all. Thus, only one recurrent iteration is required for categorical-units to emerge, and the classification loss is essential for their emergence. We have added plots to figure 3 demonstrating these phenomena.\\n\\nWith regard to the theoretical cause of the differentiation into categorical-units and part-units, please see part 1 of our response to Yoshua Bengio.\\n\\nThe three plots at the end were intended to serve as supplementary materials. However, as you point out, these figures are important for the analysis presented in the text, so they have been moved into the main text.\", \"p5\": \"The network decomposes the input into a prototype and a sparse set of perturbations; we refer to these perturbations, encoded in the part-units, as the signal that 'transforms' the prototype into the input. That is, categorical + part ~ input. The input itself is not (and need not be) modified in the process of constructing this decomposition. The clamping of the input does not affect this interpretation.\", \"p6\": \"Thank you for the reference; we have included it in the paper. Of course, since Seung (1998) does not include a discriminative loss function, there is no reason to believe that categorical-units differentiated from part-units in his model.\", \"q1\": \"We have made the suggested change to figure 2. Filters sorted by categoricalness are also shown in figures 5, 6, 7, and 10.\", \"q2\": \"We have not yet undertaken a rigorous or extensive search of hyperparameter space. We expect the results with the rows of E bounded by 1/T will be similar to those with a bound of 1.25/T. The (T-1) in the denominator of this bound in the paper was a typo, which we have corrected.\", \"q3\": \"The assertion that 'the prototypes of categorical-units are reshaped into the current input' is mathematically equivalent to 'the current input is reshaped into the prototypes of categorical-units.' In one case, categorical + part = input; in the other, input - part = categorical. Both interpretations are actively enforced by the reconstruction component of the loss function L^U in equation 1. Since the inputs are clamped, we find it most intuitive to think of the reconstruction due to the prototypes of the categorical-units being reshaped by the part-units to match the fixed input.\", \"q4\": \"When a chosen categorical-unit suppresses other categorical-units of the same class, it corresponds to the selection of a single prototype, which is both natural and desirable. It is easy to imagine that there may be classes with multiple prototypes, for which arbitrary linear combinations of the prototypes are not members of the class. For example, the sum of a left-leaning 1 and a right-leaning 1 is an X, rather than a 1.\", \"q5\": \"Indeed, the ISTA-mediated relationship between S-I and D^t*D is first discussed in the second paragraph of section 3. 
This is the clearest explanation for the use of S-I. We have removed other potentially-confusing, secondary justifications, and further clarified the intuitive basis of this primary justification.\", \"q6\": \"We have added the requested result on the 2-step model with 400 hiddens at the end of section 4. The trend is the same with 400 units as with 200 units. If the number of recurrent iterations is decreased from eleven to two, MNIST classification error in a network with 400 hidden units increases from 1.08% to 1.32%. With only 200 hidden units, MNIST classification error increases from 1.21% to 1.49%, although the hidden units still differentiate into part-units and categorical-units.\"}", "{\"reply\": \"* Anonymous bc93:\\n\\nWe offer our sincere thanks for your thoughtful comments.\", \"q1\": \"The dynamics are indeed smooth, as shown in figure 5. However, there is no reason to believe that the dynamics will stabilize beyond the trained interval. In fact, simulations past the trained interval show that the most active categorical unit often seems to grow continuously.\", \"q2\": \"The image reconstruction is small for the first iteration or two, but thereafter is stable throughout the trained interval and beyond. Classification is more sensitive to the exact balance between part-units and categorical-units, and is less reliable as one moves away from the trained iteration T.\", \"r1\": \"Any multilayer network (say with L layers of M units) can be seen as a recurrent network with M*L units, unrolled for L time steps, which is sparsely connected (e.g. with a block upper triangular matrix). Admittedly, this would be a computationally inefficient way to run the multilayer network. But the representational power of the two networks is identical. Hence recurrent nets are not intrinsically less powerful than multilayer ones, if one is willing to make them large. DrSAE leaves it up to the learning algorithm to decide which hidden units will act as 'lower-layer' or 'upper-layer' units.\", \"r2\": \"The reference to the vanishing gradients problem was tangential and, given its contentious nature, has been removed from the paper. Nevertheless, please see our comments on the matter to the other reviewers.\", \"r3\": \"The loss functions are indeed only applied to the last iteration of the hidden units. We have added an explicit mention of this in the text to avoid confusion. Future work will explore the use of a reconstruction cost summed over time. This may have the effect of quickening the convergence of the inference and making the classification and reconstruction more stable past the training interval.\", \"r4\": \"The T=11 could more appropriately be called T'=10, since there are 10 applications of the explaining-away matrix S, although T=11 represents the number of applications of the non-linearity. Experiments were conducted for T=2, T=6, and T=11. The paper focuses mostly on T=11. We have added a note to this effect.\", \"r5\": \"While the existence of a dichotomy between part-units and categorical-units is certainly identifiable without recourse to ISTA, as is evident from figures 8 and 10, the understanding of the part-units is best framed in terms of ISTA, which predicts the learned parameters with considerable accuracy. 
Were it not for the fact that our network architecture is derived from ISTA, it would be remarkable that the part-units spontaneously learn parameters that so closely match with ISTA.\\n\\nWhile perhaps unfamiliar to some readers, ISTA is simple and intuitive; we suspect that the difficulty you allude to is primarily an issue of nomenclature. With non-negative units, ISTA is just projected gradient descent on the loss function of equation 1 (the projection is onto the non-negativity constraint). We have added a note to this effect in paragraph 3.1, which we hope will make this analysis easier to follow for readers unfamiliar with ISTA.\", \"r6\": \"Please see our response to the other reviewers.\"}", "{\"review\": \"Interesting work! The use of relU units in an RNN is something I haven't seen before. I'd be interested in some discussion on how relU compares to e.g. tanh units in the recurrent setting. I imagine relU units may suffer less from vanishing/saturation during RNN training.\\n\\nWe have a related model (deep discriminative recurrent auto-encoders) for speech signal denoising, where the task is exactly denoising the input features instead of classification. It would be nice to better understand how the techniques you present can be applied in this type of regression setting as opposed to classification.\\n\\nAndrew L. Maas, Quoc V. Le, Tyler M. O'Neil, Oriol Vinyals, Patrick Nguyen, and Andrew Y. Ng. (2012). Recurrent Neural Networks for Noise Reduction in Robust ASR. Interspeech 2012.\", \"http\": \"//ai.stanford.edu/~amaas/papers/drnn_intrspch2012_final.pdf\"}", "{\"review\": \"Interesting implementation and results.\\n\\nBut how is this approach related to the original, unmentioned work on Recurrent Auto-Encoders (RAAMs) by Pollack (1990) and colleagues? What's the main difference, if any? Similar for previous applications of RAAMs to unsupervised history compression, e.g., (Gisslen et al, AGI 2011). \\n\\nThe vanishing gradient problem was identified and precisely analyzed in 1991 by Hochreiter's thesis http://www.bioinf.jku.at/publications/older/3804.pdf . The present paper, however, instead refers to other authors who published three years later.\", \"authors_write\": \"'MNIST classification error rate (%) for pixel-permutation-agnostic encoders' (best result: 1.08%). What exactly does that mean? Does it mean that one may not shift the input through eye movements, like in the real world? I think one should mention and discuss that without such somewhat artificial restrictions the best MNIST test error is at least 4 times smaller: 0.23% (Ciresan et al, CVPR 2012).\"}" ] }
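The DrSAE exchange above repeatedly references a fairly concrete recipe: a non-negative LISTA-style encoder (rectified linear units, tied weights, the input fed to every iteration), a linear decoder with norm-bounded columns, and row/column norm constraints enforced by rescaling after each SGD step, with ISTA read as projected gradient descent under a non-negativity constraint. The NumPy sketch below is one plausible reading of that recipe; the layer sizes, the 1/T row bound, and the plain reconstruction-plus-L1 loss are illustrative assumptions rather than the authors' exact configuration (the full model adds a logistic classification term on the final code).

```python
# Minimal sketch of a non-negative LISTA-style recurrent encoder with a linear
# decoder, plus the post-SGD norm projection discussed in the responses above.
# Sizes and hyperparameters are assumed for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, T = 784, 200, 11            # input size, hidden units, unrolled iterations
E = rng.normal(scale=0.01, size=(n_hid, n_in))   # encoder (input -> hidden), used at every step
S = rng.normal(scale=0.01, size=(n_hid, n_hid))  # "explaining-away" recurrent matrix
D = rng.normal(scale=0.01, size=(n_in, n_hid))   # linear decoder
lam = 0.1                                        # L1 sparsity weight (assumed)

def encode(x):
    """Unroll the tied-weight encoder; the input projection enters every iteration."""
    z = np.zeros(n_hid)
    b = E @ x
    for _ in range(T):
        z = np.maximum(0.0, b + S @ z)   # rectified linear update (non-negative ISTA-like step)
    return z

def unsupervised_loss(x, z):
    recon = D @ z
    return 0.5 * np.sum((recon - x) ** 2) + lam * np.sum(np.abs(z))

def project_norms(E, D, T):
    """Norm bounds enforced by column/row rescaling after each SGD step."""
    D /= np.maximum(1.0, np.linalg.norm(D, axis=0, keepdims=True))   # decoder columns <= 1
    row_norms = np.linalg.norm(E, axis=1, keepdims=True)
    E /= np.maximum(1.0, row_norms * T)                              # encoder rows <= 1/T (assumed bound)
    return E, D

x = rng.random(n_in)
print(unsupervised_loss(x, encode(x)))
```

A full DrSAE-style trainer would backpropagate through the unrolled iterations, add the discriminative term, and call something like `project_norms` after every parameter update; the sketch only fixes the shapes and the order of operations implied by the discussion.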
0W7-W0EaA4Wak
Joint Training Deep Boltzmann Machines for Classification
[ "Ian Goodfellow", "Aaron Courville", "Yoshua Bengio" ]
We introduce a new method for training deep Boltzmann machines jointly. Prior methods require an initial learning pass that trains the deep Boltzmann machine greedily, one layer at a time, or do not perform well on classification tasks. In our approach, we train all layers of the DBM simultaneously, using a novel inpainting-based objective function that facilitates second order optimization and line searches.
[ "classification", "new", "deep boltzmann machines", "prior methods", "initial learning pass", "deep boltzmann machine", "layer", "time" ]
conferenceOral-iclr2013-workshop
https://openreview.net/pdf?id=0W7-W0EaA4Wak
https://openreview.net/forum?id=0W7-W0EaA4Wak
ICLR.cc/2013/conference
2013
{ "note_id": [ "ua4iaAgtT2WVU", "g6eHAgMz5csdN", "nnKMnn0dlyqCD", "i4E0iizbl6uCv", "_B-UB_2zNqJCO", "uu7m3uY-jKu9P" ], "note_type": [ "review", "review", "review", "review", "review", "review" ], "note_created": [ 1362265800000, 1363214940000, 1362172860000, 1367449740000, 1363360620000, 1363234680000 ], "note_signatures": [ [ "anonymous reviewer b31c" ], [ "Ian J. Goodfellow, Aaron Courville, Yoshua Bengio" ], [ "anonymous reviewer 55e7" ], [ "Ian J. Goodfellow, Aaron Courville, Yoshua Bengio" ], [ "anonymous reviewer 55e7" ], [ "Ian J. Goodfellow, Aaron Courville, Yoshua Bengio" ] ], "structured_content_str": [ "{\"title\": \"review of Joint Training Deep Boltzmann Machines for Classification\", \"review\": \"This breaking-news paper proposes a new method to jointly train the layers of a DBM. DBM are usually 'pre-trained' in a layer-wise manner using RBMs, a conceivably suboptimal procedure. Here the authors propose to use a deterministic criterion that basically turns the DBM into a RNN. This RNN is trained with a loss that resembles that one of denoising auto-encoders (some inputs at random are missing and the task is to predict their values from the observed ones).\\n\\nThe view of a DBM as special kind of RNN is not new and the inpainting criterion is not new either, however their combination is. I am very curious to see whether this will work because it may introduce a new way to train RNNs that can possibly work well for image related tasks. I am not too excited about seeing this as a way to improve DBMs as a probabilistic model, but that's just my personal opinion. \\nOverall this work can be moderately original and of good quality.\\n\\nPros\\n-- clear motivation\\n-- interesting model\\n-- good potential to improve DBM/RNN training\\n-- honest writing about method and its limitation (I really like this and it is so much unlike most of the work presented in the literature). Admitting current limitations of the work and being explicit about what is implemented helps the field making faster progress and becoming less obscure to outsiders. \\n\\nCons\\n-- at this stage this work seems preliminary\\n-- formulation is unclear\", \"more_detailed_comments\": \"\", \"the_notation_is_a_bit_confusing\": \"what's the difference between Q^*_i and Q^*? Is the KL divergence correct? I would expect something like:\\nKL(DBM probability of (v_{S_i} | v_{-S_i}) || empirical probability of ( v_{S_i} | v_{-S_i}) ). I do not understand why P(h | v_{-S_i}) shows up there.\\n\\nIt would be nice to relate this method to denoising autoencoders. In my understanding this is the analogous for RNN-kind of networks.\\n\\nDoesn't CG make the training procedure more prone to overfitting on the minibatch? How many steps are executed?\\n\\nImportant details are missing. Saying that error rate on MNIST is X% does not mean much if the size of the network is not given.\\n\\nOverall, this is a good breaking news paper.\"}", "{\"review\": \"We have updated our paper and are waiting for arXiv to make the update public. We'll add the updated paper to this webpage as soon as arXiv makes the public link available.\", \"to_anonymous_reviewer_55e7\": \"-We'd like to draw to your attention that this paper was submitted to the workshops track. We agree with you that the results are very preliminary, which is why we did not submit it to the conference track. 
We know that the web interface for reviewers doesn't make it clear which track a paper was submitted to.\\n-We don't find the connection to NADE to be particularly meaningful, for the following reasons: \\n1) You can think of *any* model trained with maximum likelihood as learning to predict subsets of the inputs from each other. This is just a consequence of the chain rule of probability, p(x,y,z) = p(x)p(y|x)p(z|y,x). \\n2) For NADE, each variable appears only in one term of the cost function, and is always predicted given the same subset of other variables as input. In our algorithm each variable appears in an exponential number of terms, each with a different input set.\\n3) NADE defines the model such that P(v_i | v_1, ..., v_{i-1}) is just specified to be what you'd get by running one step of mean field in an RBM. NADE thus uses exact inference in the model that it is training. We use approximate inference, and we also run the mean field to convergence, rather than just doing one step.\\n4) A trained JDBM can easily predict any subset of variables given any other subset of variables, but NADE runs into problems with intractable inference for most queries. NADE is based on designing a model so that exact inference can compute P(v) easily, but this does not translate into estimating one half of v given the other half, because so many states need to be summed out. i.e., to estimate P(v_n | v_1, ... v_k) NADE must explicitly sum over all joint assignments to v_{k+1}, ..., v_{n-1}. This is the case even for queries that follow the same structure as the NADE model. \\n5) NADE is based on exact maximum likelihood learning. Our algorithm is based on an approximation to pseudolikelihood learning.\", \"to_anonymous_reviewer_b31c\": \"-Yes, I wrote the wrong expression for the KL divergence. It's fixed now.\\n-Regarding denoising autoencoders, yes, it would be interesting to connect them. Some denoising autoencoders can be understood as doing score matching on RBMs. It's not clear how to extend that view of denoising autoencoders to the setting we explore in this paper (discrete rather than continuous variables, multiple hidden layers rather than one hidden layer).\\n-CG can overfit the minibatch, but you can compensate for this by using big minibatches. The original DBM paper already uses CG for the supervised fine tuning. Our best results were with 5 CG steps per minibatch of 1250 examples. We have updated the workshop paper to specify these details.\\n-The size of the network is the same as in the original DBM paper.\\n We have updated the paper to specify this.\"}", "{\"title\": \"review of Joint Training Deep Boltzmann Machines for Classification\", \"review\": \"The authors aim to introduce a new method for training deep Boltzmann machines. Inspired by the inference procedure, they turn the model into a two-hidden-layer autoencoder with recurrent connections. Instead of reconstructing all pixels from all (perhaps corrupted) pixels they reconstruct one subset of pixels from the other (the complement).\\n\\nOverall this paper is too preliminary and there are too few experiments and most pieces are not new. However with better analysis and experimentation this might turn out to be a very good architecture, but at this point it is hard to tell.\\n\\nThe inpainting objective is similar to denoising - one tries to recover the original information from either a subset of pixels or from a corrupted image. So this is quite similar to denoising autoencoders. 
It is actually exactly the same as the NADE algorithm, which can be equivalently trained by the same criterion (reconstructing one set of pixels from the other - quite obvious) instead of going sequentially through pixels. The architecture is an autoencoder but a more complicated one than a standard single layer - it has two (or more) hidden layers and is recurrent. In addition there is the label prediction cost. The idea of a more complicated encoding function, including recurrence, is interesting but certainly not new, and neither is combining unsupervised and supervised criteria in one objective. However if future exploration shows that this particular architecture is a good way of learning features, or that it specifically trains deep Boltzmann machines well, or that it is good for some other problems, then this work could be very interesting. However as presented, it needs more experiments.\"}", "{\"review\": \"We have posted an update to the arXiv paper, containing new material that we will present at the workshop.\"}", "{\"review\": \"Indeed I didn't notice this was a workshop paper, which then doesn't have to be as complete.\\n\\nThe standard way to train NADE is to go in a fixed order. However you can also choose a random order for each input (it leads to worse likelihood though). This is then equivalent to blanking m random pixels and predicting the remaining n-m, where n is the input size and m is chosen randomly from 0..n-1 with appropriate weighting.\"}", "{\"review\": \"The arXiv link now contains the second revision.\"}" ] }
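The responses in the record above describe the joint-DBM "inpainting" objective only informally: hide a random subset of visible units, run mean field over the whole two-hidden-layer DBM with the remaining units clamped, and score the reconstruction of the hidden subset. A rough sketch of that inference loop is below; the layer sizes, number of mean-field sweeps, masking fraction, and the simple cross-entropy scoring are assumptions for illustration, not the paper's exact objective.

```python
# Sketch of mean-field inpainting in a two-hidden-layer DBM: observed visibles
# stay clamped, masked visibles are re-estimated on each sweep.
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

n_v, n_h1, n_h2 = 784, 500, 1000
W1 = rng.normal(scale=0.01, size=(n_v, n_h1))    # visible <-> first hidden layer
W2 = rng.normal(scale=0.01, size=(n_h1, n_h2))   # first <-> second hidden layer
b_v, b_1, b_2 = np.zeros(n_v), np.zeros(n_h1), np.zeros(n_h2)

def inpaint(v, mask, sweeps=10):
    """Mean-field reconstruction of the masked visible units given the rest."""
    v_hat = v.copy()
    v_hat[mask] = 0.5                        # unknown pixels start uninformative
    h1, h2 = np.full(n_h1, 0.5), np.full(n_h2, 0.5)
    for _ in range(sweeps):
        h1 = sigmoid(v_hat @ W1 + h2 @ W2.T + b_1)   # h1 sees both v and h2
        h2 = sigmoid(h1 @ W2 + b_2)
        v_full = sigmoid(h1 @ W1.T + b_v)
        v_hat[mask] = v_full[mask]           # only the masked subset is re-estimated
    return v_hat

v = (rng.random(n_v) > 0.5).astype(float)    # stand-in binary image
mask = rng.random(n_v) < 0.5                 # hide roughly half the pixels
v_hat = inpaint(v, mask)
# a cross-entropy term on the masked pixels would then drive the parameter update
loss = -np.mean(v[mask] * np.log(v_hat[mask] + 1e-7)
                + (1 - v[mask]) * np.log(1 - v_hat[mask] + 1e-7))
print(loss)
```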
7hPJygSqJehqH
Latent Relation Representations for Universal Schemas
[ "Sebastian Riedel", "Limin Yao", "Andrew McCallum" ]
Traditional relation extraction predicts relations within some fixed and finite target schema. Machine learning approaches to this task require either manual annotation or, in the case of distant supervision, existing structured sources of the same schema. The need for existing datasets can be avoided by using a universal schema: the union of all involved schemas (surface form predicates as in OpenIE, and relations in the schemas of pre-existing databases). This schema has an almost unlimited set of relations (due to surface forms), and supports integration with existing structured data (through the relation types of existing databases). To populate a database of such schema we present a family of matrix factorization models that predict affinity between database tuples and relations. We show that this achieves substantially higher accuracy than the traditional classification approach. More importantly, by operating simultaneously on relations observed in text and in pre-existing structured DBs such as Freebase, we are able to reason about unstructured and structured data in mutually-supporting ways. By doing so our approach outperforms state-of-the-art distant supervision systems.
[ "relations", "schema", "schemas", "databases", "latent relation representations", "fixed", "finite target schema", "machine" ]
conferencePoster-iclr2013-workshop
https://openreview.net/pdf?id=7hPJygSqJehqH
https://openreview.net/forum?id=7hPJygSqJehqH
ICLR.cc/2013/conference
2013
{ "note_id": [ "VVGqfOMv0jV23", "00Bom31A5XszS", "HN_nN48xQYLxO" ], "note_type": [ "review", "review", "review" ], "note_created": [ 1362170580000, 1362259560000, 1363302420000 ], "note_signatures": [ [ "anonymous reviewer 129c" ], [ "anonymous reviewer 2d4e" ], [ "Andrew McCallum" ] ], "structured_content_str": [ "{\"title\": \"review of Latent Relation Representations for Universal Schemas\", \"review\": \"The paper studies techniques for inferring a model of entities and relations capable of performing basic types of semantic inference (e.g., predicting if a specific relation holds for a given pair of entities). The models exploit different types of embeddings of entities and relations.\\n\\nThe topic of the paper is interesting and the contribution seems quite sufficient for a workshop paper. It should motivate an interesting discussion on how these models can be generalized to be applied to more complex datasets and semantic tasks (e.g., inferring these representation from natural language texts), and, in general, on representation induction methods for semantic tasks. \\n\\nThe only concern I have about this paper is that it does not seem to properly cite much of the previous work on related subjects. Though it mentions techniques for clustering semantically similar expressions, it seems to suggest that there has not been much work on inducing, e.g., subsumptions. However, there has been a lot of previous research on learning entailment (aka inference) rules (e.g., Chkolvsky and Pantel 2004; Berant et al, ACL 2011; Nakashole et al, ACL 2012). Even more importantly, some of the very related work on embedding relations is not mentioned, e.g., Bordes et al (AAAI 2011), or, very closely related, Jenatton et al (NIPS 2012). However, these omissions may be understandable given the short format of the paper.\", \"pros\": \"-- Interesting topics\\n-- Fairly convincing experimental results\", \"cons\": \"-- Previous work on embedding relations is not discussed.\"}", "{\"title\": \"review of Latent Relation Representations for Universal Schemas\", \"review\": \"This paper presents a framework for open information extraction. This problem is usually tackled either via distant weak supervision from a knowledge base (providing structure and relational schemas) or in a totally unsupervised fashion (without any pre-defined schemas). The present approach aims at combining both trends with the introduction of universal schemas that can blend pre-defined ones from knowledge bases and uncertain ones extracted from free text.\\n\\nThis paper is very ambitious and interesting. The goal of bridging knowledges bases and text for information extraction is great, and this paper seems to go in the right direction. The experiments seem to show that mixing data sources is beneficial.\\n\\nThe idea of asymmetric implicature among relation is appealing but its implementation in the model remains unclear. How common is it that a tuple shares many relations? One can not tell anything for relations for which corresponding tuples are disjoint from the rest. \\n\\nThe main weakness of the system as it is presented here is that it relies on the fact that entities constituting tuples from the knowledge base (Freebase here) and tuples extracted from the text have been exactly matched beforehand. 
This is a huge limitation before any real application, because this involves solving a complex named entity recognition - word sense disambiguation - coreference resolution problem.\\n\\nIs there any parameter sharing between latent feature vectors of entities and tuples (=pairs of entities)? And between relation vectors and neighbor weights?\", \"minor\": \"the notation for the set of observed facts disappeared.\", \"pros\": [\"great motivation and research direction\", \"model and experiments are sound\"], \"cons\": [\"lack of details\", \"many unanswered questions remain to apply it to real-world data.\"]}", "{\"review\": \"This is a test of a note to self.\"}" ] }
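The abstract and reviews above refer to "a family of matrix factorization models that predict affinity between database tuples and relations" without spelling one out. A minimal sketch of the simplest such component, a logistic bilinear score between a tuple embedding and a relation embedding trained by SGD with sampled negatives, could look like the following; the embedding size, learning rate, and negative-sampling scheme are assumptions, and the actual system combines several such component models (latent-feature, neighborhood, entity).

```python
# Sketch of a latent-feature factorization over a universal schema:
# score(tuple, relation) = dot(tuple_embedding, relation_embedding).
import numpy as np

rng = np.random.default_rng(0)
n_tuples, n_relations, k = 1000, 200, 50
A = rng.normal(scale=0.1, size=(n_tuples, k))      # embeddings for entity-pair tuples
V = rng.normal(scale=0.1, size=(n_relations, k))   # embeddings for relations (surface or DB)
lr, reg = 0.05, 0.01

def prob(t, r):
    """Probability that relation r holds for tuple t."""
    return 1.0 / (1.0 + np.exp(-A[t] @ V[r]))

def sgd_step(t, r, y):
    """One logistic-loss update for a (tuple, relation, label) cell."""
    g = prob(t, r) - y                   # gradient of the logistic loss w.r.t. the score
    grad_a = g * V[r] + reg * A[t]
    grad_v = g * A[t] + reg * V[r]
    A[t] -= lr * grad_a
    V[r] -= lr * grad_v

# observed facts are positives; unobserved cells are sampled as soft negatives
positives = [(0, 3), (0, 7), (5, 3)]
for epoch in range(10):
    for t, r in positives:
        sgd_step(t, r, 1.0)
        sgd_step(t, int(rng.integers(n_relations)), 0.0)   # sampled negative cell
print(prob(0, 3))
```

Because surface-form relations and Freebase relations share the same tuple embeddings in this setup, evidence from text and from the structured KB reinforce each other, which is the mutual-support effect the abstract emphasizes.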
gGivgRWZsLgY0
Clustering Learning for Robotic Vision
[ "Eugenio Culurciello", "Jordan Bates", "Aysegul Dundar", "Jose Carrasco", "Clement Farabet" ]
We present the clustering learning technique applied to multi-layer feedforward deep neural networks. We show that this unsupervised learning technique can compute network filters with only a few minutes and a much reduced set of parameters. The goal of this paper is to promote the technique for general-purpose robotic vision systems. We report its use in static image datasets and object tracking datasets. We show that networks trained with clustering learning can outperform large networks trained for many hours on complex datasets.
[ "robotic vision", "clustering learning technique", "unsupervised learning technique", "network filters", "minutes", "set", "rameters", "goal", "technique" ]
conferencePoster-iclr2013-workshop
https://openreview.net/pdf?id=gGivgRWZsLgY0
https://openreview.net/forum?id=gGivgRWZsLgY0
ICLR.cc/2013/conference
2013
{ "note_id": [ "PiVQP7pKuhiR5", "-YucDnyrcVDfe", "NL-vN6tmpZNMh", "KOmskcVuMBOLt", "DGTnGO8CnrcPN" ], "note_type": [ "review", "review", "review", "review", "review" ], "note_created": [ 1363392540000, 1364401500000, 1362195960000, 1362354720000, 1362366000000 ], "note_signatures": [ [ "Eugenio Culurciello, Jordan Bates, Aysegul Dundar, Jose Carrasco, Clement Farabet" ], [ "Eugenio Culurciello, Jordan Bates, Aysegul Dundar, Jose Carrasco, Clement Farabet" ], [ "anonymous reviewer 5eb5" ], [ "anonymous reviewer d6ae" ], [ "anonymous reviewer d2a7" ] ], "structured_content_str": [ "{\"review\": \"Dear reviewers, we have fixed all issues that you have reported in your kind review of the manuscript and uploaded a revision.\"}", "{\"review\": \"we accept the poster presentation, thank you for organizing this!\"}", "{\"title\": \"review of Clustering Learning for Robotic Vision\", \"review\": \"The paper presents an application of clustering-based feature learning ('CL') to image recognition tasks and tracking tasks for robotics. The basic system uses a clustering algorithm to train filters from small patches and then applies them convolutionally using a sum-abs-difference (instead of inner product) operation. This is followed with a fixed combination of processing stages (pooling, nonlinearity, normalization) and passed to a supervised learning algorithm. The approach is compared with 2-layer CNNs on image recognition benchmarks (StreetView house numbers, CIFAR10) and tracking (TLD dataset); in the last case it is shown that the method outperforms a 2-layer CNN from prior work. The speed of learning and test-time evaluation are compared as a measure of suitability for realtime use in robotics.\", \"the_main_novelty_here_appears_to_be_on_a_couple_of_points\": \"(1) the particular choice of architecture (which is motivated at least in part by the desire to run in programmable hardware such as FPGAs), (2) documenting the speed advantage and positive tracking results in applications, both of which are worthwhile goals. Evaluation and training speed, as the authors note, are not well-documented in deep learning work and this is a problem for real-time applications like robotics.\", \"some_questions_i_had_about_the_content\": \"I did not follow how the 2nd layer of clustered features were trained. It looks like these were trained on single channels of the pooled feature responses?\\n\\nWas the sum-abs-diff operation also used for the CNN?\\n\\nOne advantage of the clustering approach is that it is easier to train larger filter banks than with fine-tuned CNNs. Can the accuracy gap in recognition tasks be reduced by using more features? And at what cost to evaluation time?\", \"pros\": \"(1) Results are presented for a novel network architecture, documenting the speed and simplicity of clustering-based feature learning methods for vision tasks. It is hard to overstate how useful rapid training is for developing applications, so further results are welcome.\\n(2) Some discussion is included about important optimizations for hardware, but I would have liked more detail on this topic.\", \"cons\": \"(1) The architecture is somewhat unusual and it's not clear what motivates each processing stage.\\n(2) Though training is much faster, it's not clear to what degree the speed of training is useful for the robotics applications given. [As opposed to online evaluation time.]\\n(3) The extent of results is modest, and their bearing on actual work in robotics (or broader work in CV) is unclear. 
The single tracking result is interesting, but only versus a baseline method.\\n\\nOverall, I think the 'cons' point to the robotics use-case not being thoroughly justified; but there are some ideas in here that would be interesting on their own with more depth.\"}", "{\"title\": \"review of Clustering Learning for Robotic Vision\", \"review\": \"I am *very* sympathetic to the aims of the authors:\\nFind simple, effective and fast deep networks to understand sensor data. The authors defer some of the more interesting bits to future work however: they note that sum-abs-diff should be much more efficient in silicon implementation than convolution-style operations. That would indeed be interesting, and would make this paper all the more exciting.\\n\\nMethods that make such learning efficient are certainly important.\\n\\nThe paper references [8] to explain seemingly significant details. At least a summary seems in order.\\nIdeally a pseudo-code description. Currently the paper is close to un-reproducible, which is unfortunate as it seems easy to correct that.\\n\\nDetails of the contrast normalization would make the paper more self-contained. This is a rather specialized technique (by that name), and should be discussed when addressing the community at large.\\n\\nI can't parse the second paragraph of Section 2.3.\\n\\nMany of the design details seem a bit unmotivated. The authors repeatedly mention in the beginning (which seem like details too early in the paper) things they *don't* do (Whitening, ZCA?, color space changes), but these seem like details, and the motivation to *not* perform them isn't compelling. Why the difference in connectivity between the CNN and that presented here?\\n\\nThe video data-sets are very interesting and compelling. It would be good for a sense of scale to report the results for [22,30] as well.\", \"the_fact_that_second_layers_that_are_random_still_works_well_is_interesting\": \"'The randomly connected 2nd layer used a fixed CNN layer as described in section 2.2.'\\n\\nWhy was this experiment with a random CNN, and not a random CL (sum-abs-diff) to match the experiments?\\n\\nWhat does one conclude from these results?\\nIt seems the first layer is, to a close approximation, equivalent to a Gabor filter bank. The second layer, even when random, appears to be perfectly acceptable. (Truly random? How does random initialization from the data do?)\\nThat seems rather disappointing from a learning point of view.\\n\\nIn general the paper reads a bit like an early draft of a workshop paper. Interesting experiments, but hard to read, and seemingly incomplete.\", \"a_few_points_on_style_seem_in_order\": \"First the authors graciously acknowledge prior work and say 'we could not have done any of this work without standing on the shoulders of giants.' Oddly, one of the giants acknowledged is one of the authors. 
I assume this is some last minute mistake due to authorship changes, but it reflects the rushed and incomplete nature of the work.\\n\\nSimilarly, the advertisements for the laboratory in the footnotes are odd and out of place in a conference submission.\", \"the_title_seems_slightly_misleading\": \"this seems to be a straightforward ML for vision paper with otherwise no connection to robotics.\", \"pros\": \"Tackling important issues for the field.\\nGood and interesting experiments.\\nFocus on performance is important.\", \"cons\": \"Difficult to understand details of implementation.\\nMany design decisions make it hard to compare/contrast techniques and seem unmotivated.\\nSome of the most interesting work (demonstrating benefits of technique in performance) is deferred to future work.\\nStyle and writing make the paper difficult to read.\"}", "{\"title\": \"review of Clustering Learning for Robotic Vision\", \"review\": \"# Summary\\n\\nThis paper compares two types of filtering operator (linear filtering vs. distance filtering) in convolutional neural networks for image processing. The paper evaluates two fairly arbitrarily-chosen architectures on the CIFAR-10 and SVHN image labeling tasks, and shows that neither of these architectures is very effective, but that the conventional linear operator works better. The paper nevertheless advocates the use of the distance filtering operation on grounds of superior theoretical efficiency on e.g. FPGA hardware, but details of this argument and empirical substantiation are left for future work. The distance-based algorithm was more accurate than the linear-filtering architecture on a tracking task. How good the tracker is relative to other algorithms in the literature on the data set is not clear; I am admittedly not an expert in object tracking, and the authors simply state that it is 'not state-of-the-art.'\\n\\nThe paper's value as a report to roboticists on the merit of either clustering or linear operators is undermined by the lack of discussion or guidance regarding how one might go beyond the precise experiments done in the paper. The paper includes several choices that seem arbitrary: filter sizes, filter counts, numbers of layers, and so on. Moreover these apparently arbitrary choices are made differently for the different data sets. Compared to other papers dealing with these data sets, the authors have made the model much smaller, faster, and less accurate. The authors stress that the interest of their work is in enabling 'real-time' operation on a laptop, but I don't personally see the interest of targeting such CPUs for real-time performance and the paper does not argue the point.\\n\\nThe authors also emphasize the value of fast unsupervised learning based on clustering, but the contribution of this work beyond that of Coates et al. published in 2011 and 2012 is not clear.\\n\\n# Detailed Comments\\n\\nThe statement 'We used the Torch7 software for all our experiments [18], since this software can reduce training and learning of deep networks by 5-10 times compared to similar Matlab and Python tools.' sounds wrong to me. A citation would help defend the statement, but if you meant simply to cite the benchmarking from [18], then the authors should also cite follow-up work, particularly Bastien et al. 2012 ('Theano: new features and speed improvements').\\n\\nThe use of 'bio-inspired' local contrast normalization instead of whitening should include a citation to previous work. (i.e. 
Why/how is the technique inspired by biology?)\\n\\nIs the SpatialSAD model the authors' own innovation? If so, more details should be listed. If not, a citation to a publication with more details should be listed. I have supposed that they are simply computing a squared Euclidean distance between filter and image patch as the filter response.\\n\\nRegarding the two architectures used for performance comparison - I am left wondering why the authors chose not to use spatial contrastive normalization in both architectures. As tested, performance differences could be attributed to *either* the CL or the spatial contrast normalization.\\n\\nI am a little confused by the phrase 'correlate filter responses to inputs' - with the sum-of-squared-differences operator at work, my intuition would be that inputs are less-correlated to filter responses than they would be with a convolution operator.\\n\\nThe use of the phrase 'fully connected' in the second-last paragraph on page 3 is confusing - I am assuming the authors mean all *channels* are connected to all *channels* in filters applied by the convolution operator. Usually in neural networks literature, the phrase 'fully connected' means that all *units* are connected between two layers.\\n\\nThe results included no discussion of measurement error.\"}" ] }
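The reviews in the record above repeatedly contrast the usual linear (convolution-style) filter response with the 'distance filtering' (sum-of-absolute-differences) operator that the paper advocates. The NumPy sketch below only illustrates those two operators on a single patch, written from the reviews' description and with made-up patch and filter values; it is not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
patch = rng.standard_normal((5, 5))   # one image patch
filt = rng.standard_normal((5, 5))    # one learned filter (e.g. a cluster centroid)

# Linear filtering: the usual convolution-style dot product at one location.
linear_response = float(np.sum(patch * filt))

# Distance filtering: sum of absolute differences between patch and filter.
# A small distance means the patch matches the filter, so the negated distance
# can serve as a similarity score in place of the dot product.
sad_response = -float(np.sum(np.abs(patch - filt)))

print(f"linear response: {linear_response:.3f}")
print(f"SAD response:    {sad_response:.3f}")
```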
ACBmCbico7jkg
Gradient Driven Learning for Pooling in Visual Pipeline Feature Extraction Models
[ "Derek Rose", "Itamar Arel" ]
Hyper-parameter selection remains a daunting task when building a pattern recognition architecture which performs well, particularly in recently constructed visual pipeline models for feature extraction. We re-formulate pooling in an existing pipeline as a function of adjustable pooling map weight parameters and propose the use of supervised error signals from gradient descent to tune the established maps within the model. This technique allows us to learn what would otherwise be a design choice within the model and specialize the maps to aggregate areas of invariance for the task presented. Preliminary results show moderate potential gains in classification accuracy and highlight areas of importance within the intermediate feature representation space.
[ "task", "maps", "model", "gradient", "selection", "pattern recognition architecture", "visual pipeline models", "feature extraction", "pipeline" ]
conferencePoster-iclr2013-workshop
https://openreview.net/pdf?id=ACBmCbico7jkg
https://openreview.net/forum?id=ACBmCbico7jkg
ICLR.cc/2013/conference
2013
{ "note_id": [ "RRH1s5U_dcQjB", "cjiVGTKF7OjND", "55Y25pcVULOXK" ], "note_type": [ "review", "review", "review" ], "note_created": [ 1362378120000, 1362402060000, 1362378060000 ], "note_signatures": [ [ "anonymous reviewer 06d9" ], [ "anonymous reviewer f473" ], [ "anonymous reviewer 06d9" ] ], "structured_content_str": [ "{\"review\": \"NA\"}", "{\"title\": \"review of Gradient Driven Learning for Pooling in Visual Pipeline Feature Extraction Models\", \"review\": \"The paper proposes to learn the weights of the pooling region in a neural network for recognition. The idea is a good one, but the paper is a bit terse. Its not really clear what we are looking at in Figure 1b - the different quadrants and so forth - but I would guess the red blobs are the learned pooling regions. Its kind of what you expect, so it also begs the question of whether this teaches us anything new. But still it seems like a sensible approach and worth reporting. I suppose one can view it as a validation of the pooling envelope that is typically assumed.\"}", "{\"title\": \"review of Gradient Driven Learning for Pooling in Visual Pipeline Feature Extraction Models\", \"review\": \"The paper by Rose & Arel entitled 'Gradient Driven Learning for Pooling in Visual Pipeline Feature Extraction Models' describes a new approach for optimizing hyper parameters in spatial pyramid-like architectures.\\n\\nSpecifically, an architecture is presented which corresponds to a spatial pyramid where a two-layer neural net replaces the SVM in the final classification stage. The key contribution is to formulate the pooling operation in the spatial pyramid as a weighted sum over inputs which enables learning of the pooling receptive fields via back-propagation.\", \"pros\": \"The paper addresses an important problem in the field of computer vision. Spatial pyramids are currently quite popular in the computer vision community and optimizing the many free parameters which are normally tuned by hand is a key problem.\", \"cons\": \"The contribution of the present work remains very limited both in terms of the actual problem formulation and the empirical evaluation (no comparison to alternative approaches such as the recent work by Jia et al [ref 5] is shown). The overall 0.5% improvement in accuracy over non-optimized hyper-parameters is quite disappointing. In future work, the authors should compare their approach with alternative approaches in addition to suggesting significant improvement over non-optimized/standard parameters.\", \"additional_comments\": \"Additional references that could be added, discussed and/or used for benchmark. Y Boureau and J Ponce. A theoretical analysis of feature pooling in visual recognition. In ICML, 2010 and Pinto N, Doukhan D, DiCarlo JJ, Cox DD (2009). A High-Throughput Screening Approach to Discovering Good Forms of Biologically-Inspired Visual Representation. PLoS Computational Biology 5(11): e1000579. doi:10.1371/journal.pcbi.1000579.\"}" ] }
tFbuFKWX3MFC8
Training Neural Networks with Stochastic Hessian-Free Optimization
[ "Ryan Kiros" ]
Hessian-free (HF) optimization has been successfully used for training deep autoencoders and recurrent networks. HF uses the conjugate gradient algorithm to construct update directions through curvature-vector products that can be computed on the same order of time as gradients. In this paper we exploit this property and study stochastic HF with small gradient and curvature mini-batches independent of the dataset size for classification. We modify Martens' HF for this setting and integrate dropout, a method for preventing co-adaptation of feature detectors, to guard against overfitting. On classification tasks, stochastic HF achieves accelerated training and competitive results in comparison with dropout SGD without the need to tune learning rates.
[ "optimization", "neural networks", "stochastic", "deep autoencoders", "recurrent networks", "conjugate gradient algorithm", "update directions", "products", "order" ]
conferencePoster-iclr2013-conference
https://openreview.net/pdf?id=tFbuFKWX3MFC8
https://openreview.net/forum?id=tFbuFKWX3MFC8
ICLR.cc/2013/conference
2013
{ "note_id": [ "nYshYtAXG48ze", "mm_3mNH6nD4hc", "TF3miswPCQiau", "gehZgYtw_1v8S", "lcfIcbYPqX3P7", "av7x0igQwD0M-", "3nHzayPmAI5r1", "UJZtu0oLtcJh1", "CUXbqkRcJWqcy" ], "note_type": [ "review", "review", "review", "review", "review", "review", "comment", "review", "review" ], "note_created": [ 1364786880000, 1363601400000, 1362400260000, 1362161760000, 1367022720000, 1362494640000, 1363585560000, 1362391800000, 1360514640000 ], "note_signatures": [ [ "Ryan Kiros" ], [ "Ryan Kiros" ], [ "anonymous reviewer f834" ], [ "anonymous reviewer 0a71" ], [ "Ryan Kiros" ], [ "Ryan Kiros" ], [ "anonymous reviewer 0a71" ], [ "anonymous reviewer 4709" ], [ "Ryan Kiros" ] ], "structured_content_str": [ "{\"review\": \"I want to say thanks again to the conference organizers, reviewers and openreview.net developers for doing a great job.\", \"i_have_updated_the_code_on_my_webpage_to_include_two_additional_features\": \"max norm weight clipping and training deep autoencoders. Autoencoder training uses symmetric encoding / decoding and supports denoising and L2 penalties.\"}", "{\"review\": \"I have submitted an updated version to arxiv and should appear shortly. My apologies for the delay. From the suggestion of reviewer 0a71 I've renamed the paper to 'Training Neural Networks with Dropout Stochastic Hessian-Free Optimization'.\"}", "{\"title\": \"review of Training Neural Networks with Stochastic Hessian-Free Optimization\", \"review\": \"This paper looks at designing an SGD-like version of the 'Hessian-free' (HF) optimization approach which is applied to training shallow to moderately deep neural nets for classification tasks. The approach consists of the usual HF algorithm, but with smaller minibatches and with CG terminated after only 3-5 iterations. As advocated in [20], more careful attention is paid to the 'momentum-constant' gamma.\\n\\nIt is somewhat interesting to see a very data intensive method like HF made 'lighter' and more SGD-like, since this could perhaps provide benefits unique to both HF and SGD, but it's not clear to me from the experiments if there really is an advantage over variants of SGD that would perform some kind of automatic adaptation of learning rates (or even a fixed schedule!). The amount of novelty in the paper isn't particularly high since many of these ideas have been proposed before ([20]), although perhaps in less extreme or less developed forms.\", \"pros\": [\"takes the well-known approach HF in a different (if not entirely novel) direction\", \"seems to achieves performance competitive with versions of SGD used in [3] with dropout\"], \"cons\": \"- experiments don't look at particularly deep models and aren't very thorough\\n- comparisons to other versions of SGD are absent (this is my primary issue with the paper)\\n\\n----\\n\\n\\nThe introduction and related work section should probably clarify that HF is an instance of the more general family of methods sometimes known as 'truncated-Newton methods'.\\n\\nIn the introduction, when you state: 'HF has not been as successful for classification tasks', is this based on your personal experience, particularly negative results in other papers, or lack of positive results in other papers?\\n\\nMissing from your review are papers that look at the performance of pure stochastic gradient descent applied to learning deep networks, such as [15] did, and the paper by Glorot and Bengio from AISTATS 2010. Also, [18] only used L-BFGS to perform 'fine-tuning' after an initial layer-wise pre-training pass. 
\\n\\nWhen discussing the generalized Gauss-Newton matrix you should probably cite [7].\\n\\nIn section 4.1, it seems like a big oversimplification to say that the stopping criterion and overall convergence rate of CG depend mostly on the damping parameter lambda. Surely other things matter too, like the current setting of the parameters (which determine the local geometry of the error surface). A high value of lambda may be a sufficient condition, but surely not a necessary one for CG to quickly converge. Moreover, missing from the story presented in this section is the fact that lambda *must* decrease if the method is to ever behave like a reasonable approximation of a Newton-type method.\\n\\nThe momentum interpretation discussed in the middle of section 4, and overall the algorithm discussed in this paper, sound similar to ideas discussed in [20] (which were perhaps not fully explored there). Also, a maximum number of CG iterations was used in the original HF paper (although it only appeared in the implementation, and was later discussed in [20]). This should be mentioned.\\n\\nCould you provide a more thorough explanation of why lambda seems to shrink, then grow, as optimization proceeds? The explanation in 4.2 seems vague/incomplete.\\n\\nThe networks trained seem pretty shallow (especially Reuters, which didn't use any hidden layers). Is there a particular reason why you didn't make them deeper? e.g. were deeper networks overfitting more, or perhaps underfitting due to optimization problems, or simply not providing any significant advantage for some other reason? SGD is already known to be hard to beat for these kinds of not-very-deep classification nets, and while it seems plausible that the much more SGD-like HF which you are proposing would have some advantage in terms of automatic selection of learning rates, it invites comparison to other methods which do this kind of learning rate tuning more directly (some of which you even discuss in the paper). The lack of these kinds of comparisons seems like a serious weakness of the paper.\\n\\nAnd how important to your results was the use of this 'delta-momentum' with the particular schedule of values for gamma that you used? Since this behaves somewhat like a regular momentum term, did you also try using momentum in your SGD implementation to make the comparison more fair?\\n\\nThe experiments use drop-out, but comparisons to implementations that don't use drop-out, or use some other kind of regularization instead (like L2) are noticeably absent. In order to understand what the effect of drop-out is versus the optimization method in these models it is important to see this.\\n\\nI would have been interested to see how well the proposed method would work when applied to very deep nets or RNNs, where HF is thought to have an advantage that is perhaps more significant/interesting than what could be achieved with well-tuned learning rates.\"}", "{\"title\": \"review of Training Neural Networks with Stochastic Hessian-Free Optimization\", \"review\": \"Summary and general overview:\\n----------------------------------------------\\nThe paper tries to explore an online regime for Hessian Free as well as using drop outs. The new method is called Stochastic Hessian Free and is tested on a few datasets (MNIST, USPS and Reuters). \\nThe approach is interesting and it is a direction one might need to consider in order to scale to very large datasets.\", \"questions\": \"---------------\\n(1) An aesthetic point. 
Stochastic Hessian Free does not seem like a suitable name for the algorithm, as it does not mention the use of drop outs. I think scaling to a stochastic regime is an orthogonal issue to using drop outs, so maybe Drop-out Stochastic Hessian Free would be more suitable, or something similar that makes the reader aware of the use of drop-outs.\\n\\n(2) Page 1, first paragraph. It is not clear to me that SGD scales well for large data. There are indications that SGD could suffer, e.g., from under-fitting issues (see [1]) or early over-fitting (see [2]). I'm not saying you are wrong, you are probably right, just that the sentence you use seems a bit strong and we do not yet have evidence that SGD scales well to very large datasets, especially without the help of things like drop-outs (which might help with early-overfitting or other phenomena). \\n\\n(3) Page 1, second paragraph. It is not clear to me that HF does not do well for classification. Is there some proof for this somewhere? For example, in [3] a Hessian Free like approach seems to do well on classification (note that the results are presented for Natural Gradient, but the paper shows that Hessian Free is Natural Gradient due to the use of Generalized Gauss-Newton matrix).\\n\\n(4) Page 3, paragraph after the formula. The R-operator is only needed to compute the product of the generalized Gauss-Newton approximation of the Hessian with some vector `v`. The product between the Hessian and some vector 'v' can easily be computed as d sum((dC/dW)*v)/dW (i.e. without using the R-op).\\n \\n(5) Page 4, third paragraph. I do not understand what you mean when you talk about the warm initialization of CG (or delta-momentum as you call it). What does it mean that hat{M}_\theta is positive? Why is that bad? I don't understand what this decay you use is supposed to do. Are you trying to have some middle ground between starting CG from 0 and starting CG from the previously found solution? I feel a more detailed discussion is needed in the paper. \\n\\n(6) Page 4, last paragraph. Why does using the same batch size for the gradient and for computing the curvature result in lambda going to 0? It is not obvious to me. Is it some kind of over-fitting effect? If it is just an observation you made through empirical experimentation, just say so, but the wording makes it sound like you expect this behaviour due to some intuitions you have.\\n \\n(7) Page 5, section 4.3. I feel that the claim that drop-outs do not require early stopping is too strong. I feel the evidence is too weak at the moment to support such a statement. For one thing, \\beta_e goes exponentially fast to 0. \\beta_e scales the learning rate, and it might be the reason you do not easily over-fit (when you reach epoch 50 or so you are using an extremely small learning rate). I feel it is better to present this as an observation. Also, could you maybe say something about this decaying learning rate? Is my understanding of \\beta_e correct? \\n \\n(8) I feel an important comparison would be between your version of stochastic HF with drop-outs vs stochastic HF (without the drop outs) vs just HF. From the plots you give, I'm not sure what the gain is from going stochastic, nor is it clear to me that drop outs are important. You seem to have the set-up to run these additional experiments easily.\", \"small_corrections\": \"--------------------------\\nPage 1, paragraph 1, 'salable` -> 'scalable'\\nPage 2, last paragraph. You wrote : 'B is a curvature matrix suc as the Hessian'. 
The curvature of a function `f` at theta is the Hessian (there is no choice) and there is only one curvature for a given function and theta. There are different approximations of the Hessian (and hence you have a choice on B) but not different curvatures. I would write only 'B is an approximation of the curvature matrix` or `B is the Hessian`.\", \"references\": \"[1] Yann N. Dauphin, Yoshua Bengio, Big Neural Networks Waste Capacity, arXiv:1301.3583\\n[2] Dumitru Erhan, Yoshua Bengio, Aaron Courville, Pierre-Antoine Manzagol, Pascal Vincent and Samy Bengio, Why Does Unsupervised Pre-training Help Deep Learning? (2010), in: Journal of Machine Learning Research, 11(625--660)\\n[3] Razvan Pascanu, Yoshua Bengio, Natural Gradient Revisited, arXiv:1301.3584\"}", "{\"review\": \"Dear reviewers,\\n\\nTo better account for the mentioned weaknesses of the paper, I've re-implemented SHF with GPU compatibility and evaluated the algorithm on the CURVES and MNIST deep autoencoder tasks. I'm using the same setup as in Chapter 7 of Ilya Sutskever's PhD thesis, which allows for comparison against SGD, HF, Nesterov's accelerated gradient and momentum methods. I'm going to make one final update to the paper before the conference to include these new results.\"}", "{\"review\": \"Thank you for your comments!\", \"to_anonymous_0a71\": \"---------------------------------\\n\\n(1,8): I agree. Indeed, it is straightforward to add an additional experiment without the use of dropout. At the least, the experimental section can be modified to indicate whether the method is using dropout or not instead of simply referring to 'stochastic HF'.\\n\\n(2): Fair point. It would be interesting to try this method out in a similar experimental setting as [R1]. Perhaps it may give some insight on the paper's hypothesis that the optimization is the culprit behind underfitting.\\n\\n(3): Correct me if I'm wrong but the only classification results of HF I'm aware of are from [R2] in comparison with Krylov subspace descent, not including methods that refer to themselves as natural gradient. Minibatch overfitting in batch HF is problematic and discussed in detail in [R5], pg 50. Given the development of [R3], the introduction could be modified to include additional discussion regarding the relationship with natural gradient and classification settings.\\n\\n(5): Section 4.5 of [R4] discusses the benefits of non-zero CG initializations. In batch HF, it's completely reasonable to fix gamma throughout training (James uses 0.95). This is problematic in stochastic HF due to such a small number of CG iterations. Given a non-zero CG initialization and a near-one gamma, hat{M}_\theta may be more likely to remain positive after CG, which, assuming f_k - f_{k-1} < 0, means that the reduction ratio will be negative and thus lambda will be increased to compensate. This is not necessarily a bad thing, although if it happens too frequently the algorithm will begin to behave more like SGD (and in some cases the linesearch will reject the step). Setting gamma to some smaller initial value and incrementing at each epoch, based on empirical performance, allows for near-one delta values late in training without negating the reduction ratio. I refer the reader to pg.28 and pg.39 in [R5], which give further motivation and discussion on these topics.\\n\\n(6): Using the same batches for gradients and curvature has some theoretical advantages (see section 12.1, pg.48 of [R5] for derivations). 
While lambda -> 0 is indeed an empirical observation, James and Ilya also report similar behaviour for shorter CG runs (although longer than what I use) using the same batches for gradients and curvature (pg.54 of [R5]). Within the proposed stochastic setting, having lambda -> 0 doesn't make too much sense to me (at least for non-convex f). It could allow for much more aggressive steps which may or may not be problematic given how small the curvature minibatches are. One solution is to simply increase the batch sizes, although this was something I was intending to avoid.\\n\\n(7): The motivation behind \\beta_e was to help achieve more stable training over the stochastic networks induced using dropout. You are probably right that 'not requiring early stopping' is way too strong of a statement.\", \"to_anonymous_4709\": \"---------------------------------\\n\\nDue to the additional complexity of HF compared to SGD, I attempted to make my available (Matlab) code as easy as possible to read and follow through in order to understand and reproduce the key features of the method.\\n\\nWhile an immediate advantage of stochastic HF is not requiring tuning learning rate schedules, I think it is also a promising approach in further investigating the effects of overfitting and underfitting with optimization in neural nets, as [R1] motivates. The experimental evaluation does not attack this particular problem, as the goal was to make sure stochastic HF was at least competitive with SGD dropout on standard benchmarks. This to me was necessary to justify further experimentation.\\n\\nThere is no comparison with the results of [R4] since the goal of the paper was to focus on classification (and [R4] only trains on deep autoencoders). Future work includes extending to other architectures, as discussed in the conclusion.\\n\\nI mention on pg. 7 that the per epoch update times were similar to SGD dropout (I realize this is not particularly rigorous). \\n\\nIn regards to evaluating each of the modifications, I had hoped that the discussion was enough to convey the importance of each design choice. I realize now that there might have been too much assumption of information discussed in [R5]. These details will be made clear in the updated version of the paper with appropriate references.\", \"to_anonymous_f834\": [\"--------------------------------\", \"Thanks for the reference clarifications. In regards to classification tasks, see (3) in my response to Anonymous 0a71.\", \"Indeed, much of the motivation of the algorithm, particularly the momentum interpretation, came from studying [R5] which expands on HF concepts in significantly more detail then the first publications allowed for. I will be sure to make this more clear in the relevant sections of the paper.\", \"I agree that not comparing against other adaptive methods is a weakness and discussed this briefly in the conclusion. To accommodate for this, I tried to use an SGD implementation that would at least be as competitive (dropout, max-norm weight clipping with large initial rates, momentum and learning rate schedules). Weight clipping was also shown to improve SGD dropout, at least on MNIST [R6].\", \"Unfortunately, I don't have too much more insight on the behaviour of lambda though it appears to be quite consistent. 
The large initial decrease is likely to come from conservative initialization of lambda which works well as a default.\", \"I did not test on deeper nets largely due to time constraints (it made more sense to me to start on shallower networks then to 'jump the gun' and go straight for very deep nets) . Should I not have done this? As alluded to in the conclusion, I wouldn't be expecting any significant gain on these datasets (perhaps I'm wrong here). It would be cool to try on some speech data where deeper nets have made big improvements but I haven't worked with speech before. Reuters didn't use hidden layers due to the high dimensionality of the inputs (~19000 log word count features). Applying this to RNNs is a work in progress.\", \"----------------------------------------------\", \"To summarize (modifications for the paper update):\", \"include additional references\", \"add results for stochastic HF with no dropout\", \"some additional discussion on the relationship with natural gradient (and classification results)\", \"better detail section 4, including additional references to [R5]\", \"These modifications will be made by the start of next week (March 11).\"], \"one_additional_comment\": \"after looking over [R6], I realized the MNIST dropout SGD results (~110 errors) were due to a combination of dropout and the max-norm weight clipping and not just dropout alone. I have recently been exploring using weight clipping with stochastic HF and it is advantageous to include it. This is because it allows one to start training with smaller lambda values, likely in the same sense as it allows SGD to start with larger learning rates. I will be updating the code shortly to include this option.\\n\\n\\n[R1] Yann N. Dauphin, Yoshua Bengio, Big Neural Networks Waste Capacity, arXiv:1301.3583\\n[R2] O. Vinyals and D. Povey. Krylov subspace descent for deep learning. arXiv:1111.4259, 2011\\n[R3] Razvan Pascanu, Yoshua Bengio, Natural Gradient Revisited, arXiv:1301.3584\\n[R4] J. Martens. Deep learning via hessian-free optimization. In ICML 2010.\\n[R5] J. Martens and I. Sutskever. Training deep and recurrent networks with hessian-free optimization. Neural Networks: Tricks of the Trade, pages 479\\u2013535, 2012.\\n[R6] N. Srivastava. Improving Neural Networks with Dropout. Master's thesis, University of Toronto, 2013.\"}", "{\"reply\": \"Regarding using HF for classification. My point was that lack of results in the literature about classification error with HF might be just due to the fact that this is a new method, arguably hard to implement and hence not many had a chance to play with it. I'm not sure that just using HF (the way James introduced it) would not do well on classification. I feel I didn't made this clear in my original comment. I would just remove that statement. Looking back on [R2] I couldn't find a similar statement, it only says that empirically KSD seems to do better on classification.\\n\\nAlso I see you have not updated the arxiv papers. I would urge you to do so, even if you do not have all the new experiments ready. It would be helpful for us the reviewers to see how you change the paper.\"}", "{\"title\": \"review of Training Neural Networks with Stochastic Hessian-Free Optimization\", \"review\": \"This paper makes an attempt at extending the Hessian-free learning work to a stochastic setting. 
In a nutshell, the changes are:\\n\\n- shorter CG runs\\n- cleverer information sharing across CG runs that has an annealing effect\\n- using differently-sized mini-batches for gradient and curvature estimation (former sizes being larger)\\n- using a slightly modified damping schedule for lambda compared to Martens' LM criterion, which encourages fewer oscillations.\\n\\nAnother contribution of the paper is the integration of dropouts into stochastic HF in a sensible way. The authors also include an exponentially-decaying momentum-style term in the parameter updates.\\n\\nThe authors present but do not discuss results on the Reuters dataset (which seem good). There is also no comparison with the results from [4], which to me would be a natural thing to compare to.\\n\\nAll in all, a series of interesting tricks for making HF work in a stochastic regime, but there are many questions which are unanswered. I would have liked to see more discussion *and* experiments that show which of the individual changes the author makes are responsible for the good performance. There is also no discussion on the time it takes the stochastic HF method to make one step / go through one epoch / reach a certain error. \\n\\nSGD dropout is a very competitive method because it's fantastically simple to implement (compared to HF, which is orders of magnitude more complicated), so I'm not yet convinced by the insights of this paper that stochastic HF is worth implementing (though it seems easy to do if one has an already-running HF system).\"}", "{\"review\": \"Code is now available: http://www.ualberta.ca/~rkiros/\\n\\nIncluded are scripts to reproduce the results in the paper.\"}" ] }
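For readers unfamiliar with the machinery discussed in this record, the sketch below runs a short, truncated conjugate-gradient solve of a damped system (B + lambda*I) x = -g with a warm start scaled by gamma, which is the flavor of update the reviews describe as short CG runs with 'delta-momentum'. It is a toy quadratic in NumPy with invented values, not the paper's implementation.

```python
import numpy as np

def truncated_cg(Bv, grad, x0, lam, max_iters=5, tol=1e-10):
    """Approximately solve (B + lam*I) x = -grad with a handful of CG steps.

    Bv(v) returns the curvature-vector product B @ v; x0 is the warm start,
    e.g. gamma times the previous solution."""
    x = x0.copy()
    r = -grad - (Bv(x) + lam * x)          # residual of the damped system
    d = r.copy()
    rs = float(r @ r)
    for _ in range(max_iters):
        Ad = Bv(d) + lam * d
        alpha = rs / float(d @ Ad)
        x = x + alpha * d
        r = r - alpha * Ad
        rs_new = float(r @ r)
        if rs_new < tol:
            break
        d = r + (rs_new / rs) * d
        rs = rs_new
    return x

# Toy quadratic: a fixed PSD matrix stands in for the Gauss-Newton curvature.
rng = np.random.default_rng(2)
A = rng.standard_normal((10, 10))
B = A @ A.T
grad = rng.standard_normal(10)
prev_solution = np.zeros(10)
gamma, lam = 0.5, 1.0
step = truncated_cg(lambda v: B @ v, grad, gamma * prev_solution, lam, max_iters=5)
```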
2rHk2kZ5knTJ6
A Geometric Descriptor for Cell-Division Detection
[ "Marcelo Cicconet", "Italo Lima", "Davi Geiger", "Kris Gunsalus" ]
We describe a method for cell-division detection based on a geometric-driven descriptor that can be represented as a 5-layer processing network, based mainly on wavelet filtering and a test for mirror symmetry between pairs of pixels. After the centroids of the descriptors are computed for a sequence of frames, the two-step piecewise constant function that best fits the sequence of centroids determines the frame where the division occurs.
[ "detection", "geometric descriptor", "centroids", "sequence", "descriptor", "processing network", "wavelet filtering", "test", "mirror symmetry", "pairs" ]
reject
https://openreview.net/pdf?id=2rHk2kZ5knTJ6
https://openreview.net/forum?id=2rHk2kZ5knTJ6
ICLR.cc/2013/conference
2013
{ "note_id": [ "UvnQU-IxtJfA2", "ddQbtyHpiUz9Z", "uVT9-IDrqY-ci" ], "note_type": [ "review", "review", "review" ], "note_created": [ 1362163620000, 1362034500000, 1362198120000 ], "note_signatures": [ [ "anonymous reviewer ba30" ], [ "David Warde-Farley" ], [ "anonymous reviewer 3bab" ] ], "structured_content_str": [ "{\"title\": \"review of A Geometric Descriptor for Cell-Division Detection\", \"review\": \"Goal: automatically spot the point in a video sequence where a cell-division occurs.\\n\\nInteresting application of deep networks.\"}", "{\"review\": \"The proposed method appears to be an engineered descriptor that doesn't involve any learning. While the application is interesting, ICLR is probably not an appropriate venue.\"}", "{\"title\": \"review of A Geometric Descriptor for Cell-Division Detection\", \"review\": \"This paper aims to annotate the point at which cells divide in a video sequence.\", \"pros\": [\"a useful and interesting application.\"], \"cons\": [\"it does not seem to involve any learning, it clearly does not fit at ICLR.\", \"no comparison to other systems nor description of the dataset, nor cross-validation.\", \"the results are not that impressive considering they are not that far from the results of a simple image difference. I think a learnt model would perform better at this task.\"]}" ] }
4eEO5rd6xSevQ
Jitter-Adaptive Dictionary Learning - Application to Multi-Trial Neuroelectric Signals
[ "Sebastian Hitziger", "Maureen Clerc", "Alexandre Gramfort", "Sandrine Saillet", "Christian Bénar", "Théodore Papadopoulo" ]
Dictionary Learning has proven to be a powerful tool for many image processing tasks, where atoms are typically defined on small image patches. As a drawback, the dictionary only encodes basic structures. In addition, this approach treats patches of different locations in one single set, which means a loss of information when features are well-aligned across signals. This is the case, for instance, in multi-trial magneto- or electroencephalography (M/EEG). Learning the dictionary on the entire signals could make use of the alignment and reveal higher-level features. In this case, however, small misalignments or phase variations of features would not be compensated for. In this paper, we propose an extension to the common dictionary learning framework to overcome these limitations by allowing atoms to adapt their position across signals. The method is validated on simulated and real neuroelectric data.
[ "dictionary", "features", "application", "atoms", "signals", "case", "neuroelectric signals", "powerful tool" ]
conferencePoster-iclr2013-conference
https://openreview.net/pdf?id=4eEO5rd6xSevQ
https://openreview.net/forum?id=4eEO5rd6xSevQ
ICLR.cc/2013/conference
2013
{ "note_id": [ "KktHprTPH5p6q", "Gp6ETkwghDG9l", "3yWm3DNg8o3fu", "zo2FGvCYFkoR4", "HrUgwafkmVrpB", "9CrL9uhDy_qlF", "DJA5lKoL8-lLY", "NjApJLTlfWxlo", "DdhjdI7FMGDFT" ], "note_type": [ "review", "comment", "review", "review", "review", "review", "review", "review", "review" ], "note_created": [ 1363851960000, 1363646700000, 1363126320000, 1362402300000, 1363533480000, 1363533540000, 1362362340000, 1362376680000, 1363533480000 ], "note_signatures": [ [ "anonymous reviewer 8b9c" ], [ "anonymous reviewer 8ed7" ], [ "Sebastian Hitziger, Maureen Clerc, Alexandre Gramfort, Sandrine Saillet, Christian Bénar, Théodore Papadopoulo" ], [ "anonymous reviewer 8b9c" ], [ "Aaron Courville" ], [ "Aaron Courville" ], [ "anonymous reviewer 8ed7" ], [ "anonymous reviewer 5e7a" ], [ "Aaron Courville" ] ], "structured_content_str": [ "{\"review\": \"One additional comment is that the work bears some similarities to Hinton's recent work on 'capsules' and it may be worth citing that paper:\\n\\nHinton, G. E., Krizhevsky, A. and Wang, S. (2011)\\nTransforming Auto-encoders.\", \"icann_11\": \"International Conference on Artificial Neural Networks, Helsinki.\", \"http\": \"//www.cs.toronto.edu/~hinton/absps/transauto6.pdf\"}", "{\"reply\": \"The authors have improved the paper, addressing many of the issues I brought up. I would modify my review to be Neutral; if that is not an acceptable evaluation, then I modify my review to a Weak Accept. I am only posting this response to the poster asking for an updated evaluation, because I am not sure if I am supposed to make this modification public.\", \"i_still_have_a_couple_of_comments\": \"1. The authors include a description of the convolution sparse coding techniques, such as SISC, which better compares their contribution to related work. SISC is not a real competitor to JADL, because it is too computationally intensive; however, in the synthetic experiments, it would be useful to include it in the comparison. If SISC outperformed JADL, it would not invalidate the usefulness of JADL (which is the only one that can be applied to large datasets), but would give a much better understanding of the properties of JADL versus these previous convolution approaches.\\n\\n2. The paper is over length, but I assume that will be fixed.\"}", "{\"review\": \"We thank the reviewers for their constructive comments. We submitted a new version of the paper to arXiv, which should be made available on Wednesday, March 13. As one major change we now point out the similarity to convolutional/shift-invariant sparse coding (SISC)*, but also mention the differences mainly introduced by the l_0 sparsity constraint. A new contribution is an analysis of the algorithm's complexity as well as possibilities for speed ups \\u2013 although the computation time was already low for the conducted experiments, this could become an issue for real-time analysis. The changes in detail:\\n\\n \\n[1] Smith, Evan, and Michael S. Lewicki. 'Efficient coding of time-relative structure using spikes.' Neural Computation 17.1 (2005): 19-45.\\n[2] Blumensath, Thomas, and Davies, Mike. 'Sparse and shift-invariant representations of music.' Audio, Speech, and Language Processing, IEEE Transactions on 14.1 (2006): 50-57.\\n[3] R. Grosse, R. Raina, H. Kwong, and AY Ng, 'Shift-invariant sparse coding for audio classification,' in Proceedings of the Twenty-third Conference on Uncertainty in Artificial Intelligence (UAI'07), 2007\\n[4] Ekanadham, Chaitanya, Daniel Tranchina, and Eero P. Simoncelli. 
'Sparse decomposition of transformation-invariant signals with continuous basis pursuit.' Acoustics, Speech and Signal Processing (ICASSP), 2011 IEEE International Conference on. IEEE, 2011.\\n\\nIntroduction\\nThe second part of the introduction has been rewritten. Shift-invariant sparse coding (SISC) is introduced and its differences to JADL are pointed out. Most significant is the constraint in JADL, that only one shifted version of each atom shall be active per signal. As a consequence, JADL leads to a less complex algorithm (in both, sparse coding and dictionary update step), which in contrast to SISC does not need heuristic preselection of active atoms. In addition, we remarked that JADL is designed to learn only few atoms, in contrast to most dictionary learning applications. Hence, the term \\u201csparsity\\u201d only makes sense with respect to the \\u201cunrolled\\u201d dictionary. However, for the most part this sparsity is achieved by the explicit constraint, not by sparse regularization.\\n\\nSection 3, JADL\\nIn the sparse coding section, the computational advantages of the modified LARS have been pointed out and contrasted with SISC.\\n\\nIn the dictionary update section, we noted that the JADL formulation leads to an update of same complexity as regular DL; this does not apply to SISC. This fact could be used by changing the ratio of sparse coding and dictionary update steps in favor to the latter, i.e. by employing mini-batch or online techniques.\\n\\nSection 4, Experiments\\nFor the real data, the computation time has been investigated for different K (number of atoms) and S (number of allowed shifts). The curve shows linear correlation with S; when increasing K, however, the computation time increases more than linear. This is due to the following: While both S and K affect the size of the unrolled dictionary, an increase in S is handled efficiently in the JADL formulation, as mentioned in the sections sparse coding and dictionary update above. We also mentioned that the computation time for the conducted experiments was very small (4.3 seconds for the real data), hence computational complexity should not become an issue for offline analysis. Employing the proposed speed ups and a further optimization of the code, on the other hand, could even allow for real time analysis (this could be desirable especially for M/EEG-based brain computer interfaces (BCI)) .\\n\\nDetailed responses to the reviewers' comments\\n-------------------------------------------------------------\\n\\nAnonymous 8ed7\\n-------------------------\\n\\nCons\\n1. [Computational requirements] As mentioned above, we investigated computation time empirically for different values of S and K. Even for K=15 and S=200 (i.e. 3000 elements in the unrolled dictionary ) the computation time remained less than 1 minute for 200 iterations. Therefore, the increased complexity should only matter for real-time applications, for which we proposed several speed-ups in the formulation of the algorithm.\\n\\n2. (a) [Examples for shifts] We added a comment (footnote) on possible definitions of the shifts in the problem statement, there are different ways how to define the shifts at the borders. As the JADL framework is general enough to allow even arbitrary linear transforms, we do not specify a certain definition at this point. 
A detailed discussion of the right way to handle boundary effects would be out of the scope of the paper; however, we found that in our experiments the choice of the shifts did not affect the outcome significantly. \\n\\n(b) [Examples for types of data that can be analyzed with JADL] In the introduction, we now mention similar properties to those of neuroelectric data in different bioelectric or biomagnetic signals, such as ECG, EMG. [explain \u201cfeatures well-aligned across signals\u201d] We changed this formulation to \u201ceach waveform of interest occurs approximately at the same time in every trial\u201d. Is this sufficiently clear?\\n(c) [Improve explanation how to enforce constraint (7)] changed as suggested\\n\\n(3) [On the importance of lambda] We agree that if the number of atoms is large, the parameter lambda has an important role to ensure sparsity. However, JADL is designed to learn only a small number K of atoms. This is due to several reasons: (i) the jitter-adaptivity ensures a compact representation, as a waveform that is shifted throughout signals can be encoded in a single atom; (ii) for the applications JADL is aimed at, it is either not desired or not feasible to learn many atoms, since (a) the dictionary should be easily interpretable and reveal the main activity across signals, and (b) the number of training examples M is limited and K<<M must hold to prevent overfitting, which is a particularly critical aspect due to the often high noise level. A similar comment on the different use of \u201csparsity\u201d in JADL has been added to the introduction to make this point clear.\\n\\n(4) [Comparison to common DL with large K and large lambda] We agree that a comparison to DL with a similar K as used in JADL would not be fair. For the simulated data, different values of K have therefore been used, and lambda has been optimized individually for each K to yield the smallest error/highest similarity w.r.t. the ground truth. The table stops at K=12; for larger K performance becomes worse for all three methods. This is now pointed out in the new version.\\n\\nMinor Comments\\n\\n2. We changed the claim from \u201cthe biggest challenge\u201d to \u201can important challenge\u201d and provided references.\\n3. changed as suggested\\n4. changed as suggested\\n\\n\\nAnonymous 5e7a\\n-------------------------\\n\\nWe agree that a comparison to previous work on convolutional/shift-invariant sparse coding is necessary and hope that the changes made as described above make the similarities and differences between SISC and JADL sufficiently clear. We found that most papers on SISC do not address the problem of the dictionary update but only focus on sparse coding. An exception was [3], from which it becomes clear that for SISC the dictionary update is a non-trivial problem with increased computational complexity.\\n\\nAnonymous 8b9c\\n-------------------------\\n\\nSee comment above for comparison to SISC.\\n\\nIn fact, the LFP data does not contain much more structure than the spikes. Hence, the learned dictionaries look quite redundant and their analysis provides only limited insight into the data. 
However, we think that the visualization of the code reveals that although the dictionary looks redundant, important differences have been picked up, leading to contiguous sets of epochs dominated by the same atom.\"}", "{\"title\": \"review of Jitter-Adaptive Dictionary Learning - Application to Multi-Trial Neuroelectric Signals\", \"review\": \"The paper proposes a method for learning shiftable dictionary elements - i.e., each dictionary is allowed to shift to its optimal position to model structure in a signal. Results on test data show a significant improvement over regular sparse coding dictionary learning for recovering structure in data, and results on LFP data provide a more interpretable result.\\n\\nThis seems like a sensible approach and the results are pretty convincing. The choice of data seems a bit odd - all the LFP waveforms look the same, perhaps it would be worthwhile to expand the waveform so we can see more structure than just a spike. \\n\\nThis approach could be easily confused for a convolution model. The difference here is that the coefficients are mutually exclusive over shift. the authors may want to point out the similarities and differences to a convolution sparse coding model for the reader.\"}", "{\"review\": \"Please read the author's responses to your review and the updated version of the paper. Do they change your evaluation of the paper?\"}", "{\"review\": \"Please read the author's responses to your review and the updated version of the paper. Do they change your evaluation of the paper?\"}", "{\"title\": \"review of Jitter-Adaptive Dictionary Learning - Application to Multi-Trial Neuroelectric Signals\", \"review\": \"This paper introduces a dictionary learning technique that incorporates time delays or shifts on the learned dictionary, called JADL, to better account for this structure in multi-trial neuroelectric signals. The algorithm uses the previous dictionary learning framework and non-convex optimization, but adds a selection step over possible shifts for each atom (for each point), framed as an l-0 optimization. This objective is the main contribution of the paper, which enables better performance for time-delayed data as well as potentially useful temporal structure to be extracted from the data.\\n\\nThe paper introduces a novel objective for addressing the time shift problem (e.g in M/EEG data), but frames a typical coordinate descent approach for solving the resulting non-convex problem. The main difference in the optimization is (1) ensuring that the coefficients, a, for the dictionary, D, block-wise satisfy the l-0 constraint by disabling updates to all but one coefficient within a block and (2) modifying the gradient update in block coordinate descent on the dictionary, D, which now has a shift operator around D. Taking this obvious solution route leads to a non-convex optimization and potentially lengthy computation (as the delta set size increases). The quality of writing and experiments is high.\\n\\nPros\\n1. The proposed JADL algorithm facilitates application of dictionary learning techniques to M/EEG data, which is an important application. Moreover, as a secondary benefit, it allows time-delay structure to be learned.\\n\\n2. The writing is mostly clear and the paper is well organized.\\n\\n3. Experimental results are comprehensive and include important details of their experimental procedure.\\n\\nCons\\n1. 
The computational requirements of this algorithm are not explored, though the larger dictionary in JADL (due to the addition of delta shifts) could significantly slow learning.\\n\\n2. For clarity: (a) Include examples of shifts, Delta, in the problem statement (such as the ones used in the experiments). (b) Include examples of the types of data that could benefit from this framework, to better justify the importance of framing the problem with time-shifts and better explain what is meant by 'features [being] well-aligned across signals'. (c ) The explanation of how to enforce constraint (7) should be improved, e.g. 'block all other coefficients a_j^{S,i}' should probably be 'block all other coefficients in segment a_j^{S,i}', but the meaning is actually significantly different and this was quite confusing.\\n\\n3. The comment that the parameter, lambda, is no longer important because sparsity is induced by the constraint in (7) suggests that as the size of the set of delta increases, this problem formulation no longer learns a sparse solution over the chosen dictionary. I would suggest that this is not the case, but rather that the datasets used in this paper had a small dictionary and did not require the coefficients to be sparse. The constraint in (7) simply ensures that only one delta is chosen per atom, but does not guarantee that the final solution over the delta-shifted dictionary will be sparse. Therefore, if the number of atoms is large, the regularizer || a_j ||_1 should still be important. It is true that constraint (7) ensures the solution is sparse over all possible delta-shifted dictionaries; this is however an unfair comparison to other dictionary learning techniques which have a much smaller dictionary space to weight over.\\n\\n4. Con (3) suggests that learning with a very large dictionary (the size of the delta-shifted dictionary set) and setting the lambda parameter large might have more comparable performance to the algorithm suggested in this paper and should be included. Of course, this highly regularized approach on a large dictionary would not explicitly provide the time shift structure in the data as does JADL, but would be an interesting and more fair comparison. However, if the time-shift structure is not actually useful (and is simply used to improve learning), then DL with a large dictionary and large regularization parameter, lambda, could be all that is needed to deal with this problem for EEG data. The authors should clarify this difference and contribution more clearly.\", \"minor_comments\": \"1. For a reference on convex solution to the dictionary learning problem, see 'Convex sparse matrix factorizations', F. Bach, J. Mairal and J. Ponce. 2008; and 'Convex Sparse Coding, Subspace Learning and Semi-Supervised Extensions', X. Zhang, Y. Yu, M. White, R. Huang and D. Schuurmans. 2011.\\n2. There should be citations for the claim: 'This issue is currently the biggest challenge\\nin M/EEG multi-trial analysis.'\\n3. Bottom of page 3: 'which allows to solve it' -> 'which allows us to solve it'\\n4. Page 5: 'property allows to' -> 'property allows us to'\"}", "{\"title\": \"review of Jitter-Adaptive Dictionary Learning - Application to Multi-Trial Neuroelectric Signals\", \"review\": \"This paper introduces a sparse coding variant called 'jitter-adaptive' sparse coding, aimed at improving the efficiency of sparse coding by augmenting a dictionary with temporally shifted elements. 
The motivating use case is EEG data, where neural activity can arise at any time, in atoms that span multiple recording channels. Ideally these motifs would be recognized as the dictionary components by a dictionary-learning algorithm.\\n\\nEEG data has been analyzed with sparse coding before, as noted by the authors, and the focus of this paper is the use of jitter-adaptive dictionary learning to achieve a more useful signal decomposition. The use of jitter-adaptive dictionary learning is indeed an intuitive and effective strategy for recovering the atoms of synthetic and actual data.\\n\\nOne weakness of this paper is that the technique of augmenting a dictionary by a time-shifting operator is not entirely novel, and the authors should compare and contrast their approach with e.g.:\\n\\n- Continuous Basis Pursuit\\n- Deconvolutional Networks\\n- Charles Cadieu's PhD work\\n- The Statistical Inefficiency of Sparse Coding for Images (http://arxiv.org/abs/1109.6638)\\n \\n\\nPro(s)\\n- jitter-adaptive learning is an effective strategy for applying sparse coding to temporal data, particularly EEG\\n \\nCon(s)\\n- paper would benefit from clarification of contribution relative to previous work\"}", "{\"review\": \"Please read the author's responses to your review and the updated version of the paper. Do they change your evaluation of the paper?\"}" ] }
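As a rough illustration of the one-active-shift-per-atom idea discussed throughout this record, the sketch below picks, for each atom, the single shift whose shifted copy best matches a signal. It uses circular shifts and a plain inner-product score for brevity, which only approximates the JADL formulation (no sparse coding, no dictionary update), and the waveform and jitter are invented.

```python
import numpy as np

def select_shifts(signal, atoms, shifts):
    """For each atom, return the shift whose circularly shifted copy has the
    largest absolute inner product with the signal."""
    chosen = []
    for atom in atoms:
        scores = [abs(float(np.dot(np.roll(atom, s), signal))) for s in shifts]
        chosen.append(shifts[int(np.argmax(scores))])
    return chosen

rng = np.random.default_rng(4)
n = 64
atom = np.exp(-0.5 * ((np.arange(n) - n // 2) / 3.0) ** 2)  # a Gaussian bump waveform
signal = np.roll(atom, 7) + 0.05 * rng.standard_normal(n)   # same waveform, jittered by 7 samples
print(select_shifts(signal, [atom], shifts=list(range(-10, 11))))  # expected output: [7]
```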
MQm0HKx20L7iN
Kernelized Locality-Sensitive Hashing for Semi-Supervised Agglomerative Clustering
[ "Boyi Xie", "Shuheng Zheng" ]
Large-scale agglomerative clustering is hindered by computational burdens. We propose a novel scheme where exact inter-instance distance calculation is replaced by the Hamming distance between Kernelized Locality-Sensitive Hashing (KLSH) hashed values. This results in a method that drastically decreases computation time. Additionally, we take advantage of certain labeled data points via distance metric learning to achieve competitive precision and recall compared to K-Means, but in much less computation time.
[ "hashing", "agglomerative", "computational burdens", "novel scheme", "exact", "distance calculation", "distance", "klsh", "values" ]
reject
https://openreview.net/pdf?id=MQm0HKx20L7iN
https://openreview.net/forum?id=MQm0HKx20L7iN
ICLR.cc/2013/conference
2013
{ "note_id": [ "vpc3vyRo-2AFM", "Z9bz9yXn_F9nA" ], "note_type": [ "review", "review" ], "note_created": [ 1362080280000, 1362172860000 ], "note_signatures": [ [ "anonymous reviewer c8d7" ], [ "anonymous reviewer cce9" ] ], "structured_content_str": [ "{\"title\": \"review of Kernelized Locality-Sensitive Hashing for Semi-Supervised Agglomerative Clustering\", \"review\": \"This paper proposes to use kernelized locality-sensitive hashing (KLSH), based on a similarity metric learned from labeled data, to accelerate agglomerative (hierarchical) clustering. Agglomerative clustering requires, at each iteration, to find the pair of closest clusters. The idea behind this paper is that KLSH can be used to accelerate the search for these pairs of clusters. Also, the use of a learned, supervised metric should encourage the extraction of a clustering that reflects the class structure of the data distribution, even if computed from a relatively small subset of labeled data. Comparisons with k-means, k-means with distance learning, agglomerative clustering with KLSH and agglomerative clustering with KLSH and distance learning are reported.\\n\\nUnfortunately, I find this paper to be quite incremental. It essentially corresponds to a straightforward combination of 3 ideas 1) agglomerative clustering, 2) kernelized LSH [7] and 3) supervised metric learning [5].\\n\\nI find that details about the approach are also missing, about how to combine KLSH with agglomerative clustering. First, the authors do not explain how KLSH is leveraged to find the pair of closest clusters C_i and C_j. Are they iterating over each cluster C_i, finding its closest neighbour C_j using KLSH? This would correspond to a complexity linear in the number of clusters, and thus initially linear in the number of data points N. In a large scale setting, isn't this still too expensive? Are the authors doing something more clever? Algorithm 1 also mentions a proximity matrix P. Isn't its size N^2 initially? Again, in a large scale setting, it would be impossible to store such a matrix. The authors also do not specify how to compute distances between two clusters consisting in more than a single data point. I believe this is sometimes referred to as the linkage distance or criteria between clusters, which can be the min distance over all pairs or max distance over all pairs. What did the authors use, and how does KLSH allow for an efficient computation?\\n\\nMoreover, I'm not convinced of the benefit of the algorithm, based on the experiments reported in table 1. Indeed, agglomerative clustering with KLSH and distance learning does not dominate all other algorithm for both the precision and recall. In fact, it's doing terribly in terms of recall, compared to k-means. Also, it is not exactly clear to me what precision and recall correspond to in the context of this clustering experiment. I would suggest the author explicitly define what they mean here. I'm more familiar with adjusted rand index, as an evaluation metric for clustering...\\n\\nFinally, the writing of the paper is quite poor. I already mentioned that many details are lacking. 
Moreover, the paper is filled with typos and oddly phrased sentences.\"}", "{\"title\": \"review of Kernelized Locality-Sensitive Hashing for Semi-Supervised Agglomerative Clustering\", \"review\": \"This workshop submission proposes a method for clustering data which applies a semi-supervised distance metric to the data prior to applying kernelized locality-sensitive hashing for agglomerative clustering. The intuition is that distance learning on a subset of data pairs will improve overall performance, and that the LSH-based clustering will be a better match for high-dimensional data than k-means. The method is evaluated on MNIST data.\\n\\nThere is little to no innovation in this paper, and, considering that there is no learned representation to speak of, it is of little interest for ICLR. The authors do not adequately explain the approach, and the experimental evaluation is unclear. The semi-supervised distance metric learning is not discussed fully, and the number and distribution of labeled data is not given.\\n\\nMoreover, the results are not promising. Although it is difficult to compare raw precision/recall numbers (F-measure or other metrics would be preferable), it is clear that the proposed method has much lower recall than the k-means baseline, with only moderate improvement in precision. The submission would also be improved by a visualization of the clustering obtained with the different methods.\"}" ] }
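To make the hashing idea in this record concrete, the sketch below encodes points with plain random hyperplanes and compares them by Hamming distance in place of exact distances. This is ordinary sign-random-projection LSH rather than the kernelized variant, and it includes no metric learning or clustering loop; the data are invented.

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.standard_normal((200, 32))      # toy points standing in for image features
H = rng.standard_normal((32, 16))       # 16 random hyperplanes

codes = (X @ H > 0).astype(np.uint8)    # one 16-bit binary code per point

def hamming(a, b):
    """Hamming distance between two binary codes; it approximates the angular
    distance between the original points, so it can stand in for exact distance
    computations when searching for the closest pair of clusters."""
    return int(np.count_nonzero(a != b))

print("Hamming distance between points 0 and 1:", hamming(codes[0], codes[1]))
```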
iKeAKFLmxoim3
Heteroscedastic Conditional Ordinal Random Fields for Pain Intensity Estimation from Facial Images
[ "Ognjen Rudovic", "Maja Pantic", "Vladimir Pavlovic" ]
We propose a novel method for automatic pain intensity estimation from facial images based on the framework of kernel Conditional Ordinal Random Fields (KCORF). We extend this framework to account for heteroscedasticity on the output labels(i.e., pain intensity scores) and introduce a novel dynamic features, dynamic ranks, that impose temporal ordinal constraints on the static ranks (i.e., intensity scores). Our experimental results show that the proposed approach outperforms state-of-the art methods for sequence classification with ordinal data and other ordinal regression models. The approach performs significantly better than other models in terms of Intra-Class Correlation measure, which is the most accepted evaluation measure in the tasks of facial behaviour intensity estimation.
[ "pain intensity estimation", "facial images", "framework", "intensity scores", "novel", "kcorf" ]
reject
https://openreview.net/pdf?id=iKeAKFLmxoim3
https://openreview.net/forum?id=iKeAKFLmxoim3
ICLR.cc/2013/conference
2013
{ "note_id": [ "VTEO8hp3ad83Q", "lBM7_cfUaYlP1" ], "note_type": [ "review", "review" ], "note_created": [ 1362297780000, 1362186300000 ], "note_signatures": [ [ "anonymous reviewer 9402" ], [ "anonymous reviewer 0342" ] ], "structured_content_str": [ "{\"title\": \"review of Heteroscedastic Conditional Ordinal Random Fields for Pain Intensity\\n Estimation from Facial Images\", \"review\": \"This extended abstract discusses a modification to an existing ordinal conditional random field model (CORF) so as to treat non-stationary data. This is done by making the variance in a probit model depend on the observations (x) and appealing to results on kernels methods for CRFs by Lafferty et al. The authors also introduce what they call dynamic ranks, but it is impossible to understand, from this write-up, how these relate to the model. No intuition is provide either. What is the equal sign doing in the definition of dynamic ranks?\\n\\nRegarding section 2, it is too short and impossible to follow. The authors should rewrite it making sure the mathematical models are specified properly and in full mathematical detail. All the variables and symbols should be defined. I know there are space constraints, but I also believe a better presentation is possible.\\n\\nThe experiments claim great improvements over techniques that do not exploit structure or techniques that exploit structure but which are not suitable for ordinal regression. As it is, it would be impossible to reproduce the results in this abstract. However, it seems that great effort was put into the empirical part of the work.\", \"some_typos\": \"\", \"abstract\": \"Add space after labels. Also, a novel should be simply novel.\", \"introduction\": \"in the recent should be in recent. Also, drop the a in a novel dynamic features.\", \"section_2\": \"Add space after McCullagh. What is standard CRF form? Please be precise as there are many ways of parameterizing and structuring CRFs.\", \"references\": \"laplacian should be Laplacian\"}", "{\"title\": \"review of Heteroscedastic Conditional Ordinal Random Fields for Pain Intensity\\n Estimation from Facial Images\", \"review\": \"This paper seeks to estimate ordinal labels of pain intensity from\\nvideos of faces. The paper discusses a new variation of a conditional\\nrandom field in which the produced labels are ordinal values. The\\npaper's main claim to novelty is the idea of 'dynamic ranks', but it\\nis unclear what these are.\\n\\nThis paper does not convey its ideas clearly. It is not immediately\\nobvious why an ordinal regression problem demands a CRF, much less a\\nkernelized heteroscedastic CRF. Since I assume that each frame has a\\nsingle label, is the function of the CRF simply to impose temporal\\nsmoothness constraints? I don't understand the motivation for the\\nadditional aspects of this. The idea of 'dynamic ranks' is not\\nexplained, beyond Equation (2), which is itself confusing. For\\nexample, what does the equal sign inside the parentheses mean on the\\nleft side of Equation 2? It took me quite a while of looking at the\\nright-hand side of this equation to realize that it was defining a\\nset, but I don't understand how this relates to ranking or dynamics.\\nSection 3 seems to imply that the kernel is between the features of\\n6x6 patches, but this doesn't make sense to me if the objective is to\\nhave temporal smoothing.\\n\\nI found this paper very confusing. It does not provide many details\\nor intuition. 
In trying to resolve this confusion, I examined the\\nauthors' previous work, cited as [14] and [15]. These other papers\\nappear to contain most of the crucial details and assumptions that sit\\nbehind the present paper. I appreciate that this is a very short\\npaper, but for it to be a useful contribution it must be at least\\nsomewhat self contained. As it stands, I do not feel this is\\nachieved.\"}" ] }
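Both reviews above ask what the model actually is. Purely as background on the model family the abstract names, the snippet below sketches a generic heteroscedastic ordinal probit likelihood: fixed cut points, a latent mean f(x), and an input-dependent scale sigma(x), so that noisier inputs spread probability mass across neighbouring intensity levels. It is an assumption-laden illustration, not the authors' kernel CORF, and it omits the temporal CRF terms and the 'dynamic ranks' entirely.

```python
# Generic heteroscedastic ordinal probit: P(y = k | x) is the mass between
# consecutive cut points after scaling by an input-dependent sigma(x).
# Reference point for the model family only, not the paper's KCORF.
import numpy as np
from scipy.stats import norm

def ordinal_probit_probs(f_x, sigma_x, cuts):
    """f_x: latent mean, sigma_x: input-dependent std, cuts: increasing cut points."""
    b = np.concatenate(([-np.inf], cuts, [np.inf]))
    upper = norm.cdf((b[1:] - f_x) / sigma_x)
    lower = norm.cdf((b[:-1] - f_x) / sigma_x)
    return upper - lower                      # one probability per ordinal level

cuts = np.array([-1.0, 0.0, 1.0, 2.0])        # 5 hypothetical pain-intensity levels
print(ordinal_probit_probs(1.2, 0.5, cuts))   # low-noise input: mass concentrates
print(ordinal_probit_probs(1.2, 2.0, cuts))   # high-noise input: mass spreads out
```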
zzKhQhsTYlzAZ
Regularized Discriminant Embedding for Visual Descriptor Learning
[ "Kye-Hyeon Kim", "Rui Cai", "Lei Zhang", "Seungjin Choi" ]
Images can vary according to changes in viewpoint, resolution, noise, and illumination. In this paper, we aim to learn representations for an image, which are robust to wide changes in such environmental conditions, using training pairs of matching and non-matching local image patches that are collected under various environmental conditions. We present a regularized discriminant analysis that emphasizes two challenging categories among the given training pairs: (1) matching, but far apart pairs and (2) non-matching, but close pairs in the original feature space (e.g., SIFT feature space). Compared to existing work on metric learning and discriminant analysis, our method can better distinguish relevant images from irrelevant, but look-alike images.
[ "pairs", "discriminant", "visual descriptor", "images", "changes", "viewpoint", "resolution", "noise", "illumination", "representations" ]
conferencePoster-iclr2013-workshop
https://openreview.net/pdf?id=zzKhQhsTYlzAZ
https://openreview.net/forum?id=zzKhQhsTYlzAZ
ICLR.cc/2013/conference
2013
{ "note_id": [ "FBx7CpGZiEA32", "-7pc74mqcO-Mr", "Xf5Pf5SWhtEYT" ], "note_type": [ "review", "review", "review" ], "note_created": [ 1362287940000, 1362186780000, 1363779180000 ], "note_signatures": [ [ "anonymous reviewer 1e7c" ], [ "anonymous reviewer 39f1" ], [ "Kye-Hyeon Kim, Rui Cai, Lei Zhang, Seungjin Choi" ] ], "structured_content_str": [ "{\"title\": \"review of Regularized Discriminant Embedding for Visual Descriptor Learning\", \"review\": \"The paper aims to present a method for discriminant analysis for image\\ndescriptors. The formulation splits a given dataset of labeled images\\ninto 4 categories, Relevant/Irrelevant and Near/Far pairs\\n(RN,RF,IN,IF). The final form of the objective aims to maximize the\\nratio of sum of distances of irrelevant pairs divided by relevant pairs. The distance metric is calculated at the lower dimensional projected space. The\\nmain contribution of this work as suggested in the paper is selecting\\nthe weighting of 4 splits differently from previous work.\\n\\nThe main intuition or reasoning behind this choice is not given,\\nneither any conclusive emprical evidence. In the only experiment that\\ncontains real images in the paper, data is said to be taken from\\nFlickr. However, it is not clear if this is a publicly available\\ndataset or some random images that authors collected. Moreover, for\\nthis experiment, one of the only two relevant methods are not included\\nfor comparison. Neither, any details of the training procedure nor the actual hyper parameters (\\beta) are explained in the paper.\"}", "{\"title\": \"review of Regularized Discriminant Embedding for Visual Descriptor Learning\", \"review\": \"This paper describes a method for learning visual feature descriptors that are invariant to changes in illumination, viewpoint, and image quality. The method can be used for multi-view matching and alignment, or for robust image retrieval. The method computes a regularized linear projection of SIFT feature descriptors to optimize a weighted similarity measure. The method is applied to matching and non-matching patches from Flickr images. The primary contribution of this workshop submission is to demonstrate that a coarse weighting of the data samples according to the disparity between their semantic distance and their Euclidean distance in SIFT descriptor space.\\n\\nThe novelty of the paper is minimal, and most details of the method and the validation are not given. The authors focus on the weighting of the sample pairs to emphasize both the furthest similar pairs and the closest dissimilar pairs, but it is not clear that this is provides a substantial gain.\"}", "{\"review\": \"We sincerely appreciate all the reviewers for their time and comments to this manuscript.\\nWe fully agree that it is really hard to find maningful contributions from this short paper, while we tried our best to emphasize them. As we have noted, the full version of this manuscript is currently under review in an international journal. 
In order to avoid violating the dual-submission policy of the journal, we could not include most of the details and empirical results - only the main idea and some simple examples could remain in this workshop track submission.\\n\\nWe promise that all the details omitted in this version will be presented clearly in the workshop, e.g., the choice of the weighting of each split, the training dataset used in our experiments, and conclusive empirical comparisons.\\nFor example, we compared the image retrieval performance for landmark buildings in Oxford (http://www.robots.ox.ac.uk/~vgg/data/oxbuildings/) and Paris (http://www.robots.ox.ac.uk/~vgg/data/parisbuildings/). A nonlinear variant of LFDA implemented using deep belief networks (DBN) and a kernelized version of LDE (KDE) were compared to our method. In terms of the mean average precision (mAP) score, we observed significant improvements using our method (mAP: 0.678 on Oxford / 0.700 on Paris) over raw SIFT (0.611 / 0.649), KDE (0.656 / 0.673), DBN (0.662 / 0.678), under the same number of the learned features and the same size of visual vocabulary.\\n\\nThanks to all the reviewers again.\"}" ] }
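For context on the objective the first review summarizes (maximize weighted distances between non-matching pairs relative to matching pairs in the projected space), the sketch below shows one generic way such a weighted pair-scatter criterion reduces to a generalized eigenproblem. The pair weights, the ridge term beta, and all names are placeholders; the short abstract does not specify the paper's exact weighting of the RN/RF/IN/IF splits or its regularizer.

```python
# Generic weighted pair-scatter embedding: maximize non-matching-pair spread
# over matching-pair spread in the projection via a generalized eigenproblem.
# Weights and beta are placeholders, not the paper's values.
import numpy as np
from scipy.linalg import eigh

def pair_scatter(X, pairs, weights):
    diffs = X[[i for i, _ in pairs]] - X[[j for _, j in pairs]]
    return (diffs * np.asarray(weights)[:, None]).T @ diffs

def learn_projection(X, matching, non_matching, w_match, w_non, dim, beta=1e-2):
    S_m = pair_scatter(X, matching, w_match) + beta * np.eye(X.shape[1])
    S_n = pair_scatter(X, non_matching, w_non)
    vals, vecs = eigh(S_n, S_m)               # solves S_n v = lambda S_m v
    return vecs[:, np.argsort(vals)[::-1][:dim]]

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 128))               # stand-in for SIFT descriptors
W = learn_projection(X, [(0, 1), (2, 3)], [(0, 2), (1, 3)],
                     w_match=[2.0, 1.0], w_non=[1.0, 2.0], dim=16)
Z = X @ W                                     # projected descriptors
```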
3JiGJa1ZBn9W0
The Expressive Power of Word Embeddings
[ "Yanqing Chen", "Bryan P", "Rami Al-Rfou", "Steven Skiena" ]
We seek to better understand the difference in quality of the several publicly released embeddings. We propose several tasks that help to distinguish the characteristics of different embeddings. Our evaluation shows that embeddings are able to capture deep semantics even in the absence of sentence structure. Moreover, benchmarking the embeddings shows great variance in quality and characteristics of the semantics captured by the tested embeddings. Finally, we show the impact of varying the number of dimensions and the resolution of each dimension on the effective useful features captured by the embedding space. Our contributions highlight the importance of embeddings for NLP tasks and the effect of their quality on the final results.
[ "embeddings", "quality", "expressive power", "word", "characteristics", "difference", "several", "several tasks", "different embeddings", "evaluation" ]
reject
https://openreview.net/pdf?id=3JiGJa1ZBn9W0
https://openreview.net/forum?id=3JiGJa1ZBn9W0
ICLR.cc/2013/conference
2013
{ "note_id": [ "0sZLsSijYosjR", "KcIrcVwbnRc0P", "-82Lr-SgHKmgJ", "224E22nDWH2Ia", "QrngQQuNMcQNZ", "ggT4SGBq4iS57", "7AoBA7CD4T7Fu" ], "note_type": [ "review", "review", "review", "review", "review", "review", "review" ], "note_created": [ 1360886580000, 1362189720000, 1360855200000, 1362170040000, 1362416940000, 1362457800000, 1363573560000 ], "note_signatures": [ [ "Yanqing Chen" ], [ "Andrew Maas" ], [ "anonymous reviewer 406c" ], [ "anonymous reviewer af94" ], [ "anonymous reviewer 24e2" ], [ "Yanqing Chen, Bryan Perozzi, Rami Al-Rfou, Steven Skiena" ], [ "Yanqing Chen, Bryan Perozzi, Rami Al-Rfou, Steven Skiena" ] ], "structured_content_str": [ "{\"review\": \"Hello dear reviewer,\\n \\n Thank you for your well thought out review. We hope to have a draft which addresses some of your comments shortly.\\n\\nRegards\"}", "{\"review\": \"On the topic of comparing word representations, quality as a function of word frequency is something I've often found to be a problem. For example, rare words are often important for sentiment analysis, but many word representation learners produce poor representations for all but the top 1000 or so most frequent words. As this paper is focused on comparing representations, I think adding an experiment to assess quality of less common words would tremendously help the community understand the tradeoffs between word representation methods.\"}", "{\"title\": \"review of The Expressive Power of Word Embeddings\", \"review\": \"The paper proposes a method for evaluating real-valued vector embeddings of words based on several word and word-pair classification tasks. Though evaluation of such embeddings is an interesting and important problem, the experimental setup used it virtually impossible to draw any interesting conclusions.\\n\\nSome of the proposed evaluation tasks are a considerably less interesting than others. The Sentiment Polarity, for example, is certainly interesting and practically relevant, while the Regional Spellings task seems artificial. Moreover, performance on the latter is likely to be very sensitive to the regional distribution in the corpus to learn the embeddings. While identifying synonyms and and antonyms is an interesting problem, the formulation of the Synonyms and Antonyms task is too artificial. Instead of classifying a word pair as synonyms or antonyms or it would be far more interesting to perform three-way classification of such pairs into synonyms, antonyms, and neither. Note that there is little reason to think that embeddings learned by neural language models will capture the difference between antonyms and synonyms well because replacing a word with its antonym or synonym often has little effect on the probability of the sentence.\\n\\nThe experimental evaluation of the embeddings is unfortunately almost completely uninformative due to several confounding factors. The models used to produce the embeddings were trained on different datasets, with different vocabularies, context sizes, and number of passes through the data. Without controlling for these it is impossible to know the real reasons behind the differences in performance of the embeddings. All that can be concluded from the results in the paper is that some of the publicly available embeddings perform better than others on the proposed tasks. However, without controlling for the above factors, claims like 'Our work illustrates that significant differences in the information captured by each technique exist.' 
are unjustified.\\n\\nThe results obtained by reducing the amount of information in the embeddings are more informative. The fact that quantizing the real values in the embeddings does not drastically affect the classification performance is quite interesting. However, to make this result more convincing the authors need to control for the differences in the variance of the embeddings resulting from quantization. These differences are problematic because, as Turian at al. [14] showed, scaling embeddings by a constant can have a significant effect on classifier performance.\\n\\nThe results obtained using PCA to reduce the representation dimensionality are hard to interpret because the paper does not report the numbers for the linear and non-linear classifiers separately. This is a problem because reducing the input dimensionality has a much more drastic effect on the capacity of linear classifiers. Thus it is entirely possible that though the relevant information is stilled contained in the projected embeddings, the linear classifiers simply cannot take advantage of it.\\n\\nWhile the authors mention 4-fold cross validation and a development set, it is unclear whether the set was one of the folds. Does it mean that two folds were used for training, one for validation, and one for testing?\\n\\nIt is also unclear which method was used to produce Figure 3(b).\\n\\nThe probabilities in Table 2 are given in percent but the caption does not\\nstate that.\"}", "{\"title\": \"review of The Expressive Power of Word Embeddings\", \"review\": \"The submission considers 3 types of publicly-available distributed representations of words: produced by SENNA (Collobert and Weston, 11), the hierarchical bilinear language model (Mnih and Hinton, 2007) and Turian et al's (2010) implementation of the SENNA method. They compare performance of classifiers using the embeddings on 5 different tasks (e.g., sentiment polarity, noun gender).\\n\\nIn a way, this submission is similar to the work of Turian et al (2010) where different types of word representations are compared, however, here the authors just use available representations rather than induce their own. Consequently, they cannot shed the light on which factors affect the resulting performance the most (e.g., data size, loss used, regularization regime). Consequently, the paper may be useful when deciding which representations to download, but it does not provide sufficiently interesting insights on which methods are preferable. \\n\\nThe discussion of pair classification seems somewhat misleading. The authors attribute improved results on classifying pairs (e.g., where the first word or the second name in a given pair is masculine) w.r.t. classifying words (e.g., whether a name is masculine or feminine) to the lack of linear separability, whereas it is obvious that pairwise classification is always easier. Basically, as we know that there is a single word of each class in a pair, it is an ensemble prediction (one classifier using 1st word, another - the second one). So, I am not really sure what this result is actually telling us. \\n\\nI am also not sure how interesting the truncation experiments are. I would be much more interesting to see which dimensionality of the representation is needed, but mostly which initial representations, as it affects training performance (at least linearly). 
However, again, this is not really possible without retraining the models.\", \"pros\": [\"I believe that a high-quality comparison of existing methods for inducing word representations would be an important contribution.\"], \"cons\": [\"The paper compares downloaded representations rather than the methods. It does not answer the question which method is better (for each task)\", \"Some details of the experimental set-up are a little unclear. E.g., the paper mentions that logistic regression, an SVM with the linear kernel, and an SVM with the RBF kernel are used. However, it does not clarify which classifier was used and where. Were classifiers also chosen with cross validation?\", \"Some of the tasks (e.g., choosing British vs. American spelling) could benefit from using methods exploiting wider document context (rather than ngrams). There have been some methods for incorporating this information (see, e.g., Huang et al, 2012). It would be interesting to have such methods in the list.\"], \"minor\": [\"abstract: 'our evaluation shows that embeddings \\u2026 capture deep semantics', I am not sure what the authors mean by 'deep' semantics. However, I doubt that any of the considered tasks could be considered as such.\"]}", "{\"title\": \"review of The Expressive Power of Word Embeddings\", \"review\": \"This paper compares three available word vector embeddings on several tasks.\\n\\nThe paper lacks somewhat in novelty since the vectors are simply downloaded. This also makes their comparison somewhat harder since the final result is largely dependent on the training corpora.\\n\\nA comparison to the vectors of Huang et al 2012 would be interesting since they are very related.\\n\\nIt would be somewhat more interesting if the methods had been trained on the same dataset and on harder or real tasks such as NER (as done by Turian).\\n\\nFor a more semantic evaluation the datasets of WordSim353 or Huang et al could be used to compare to human judgments.\\n\\nIt would be very interesting to find if some of the dimensions are well correlated with some of the labels of the supervised tasks that are considered.\"}", "{\"review\": \"We thank the anonymous reviewers for their thoughtful comments. We have taken them into consideration, and have uploaded a revised manuscript to arXiv over the weekend. (it should be available in a few hours)\", \"specific_changes_include\": \"1. We have evaluated 3-class versions of our classifiers on the sentiment and synonym/antonym tasks.\\n2. We have reworked to focus more explicitly on term vs. pair tasks, and believe that this is a more clear presentation of our ideas\\n3. We have illustrated the convergence of linear vs. nonlinear classifiers as dimensions are reduced by PCA.\\n4. We have tried to modify specific language and tone that the reviewers found objectionable.\", \"we_have_some_specific_comments_to_each_of_our_reviewers\": \"Anonymous 406c, (1st reviewer, 2/14/2013)\\n\\n- Scaling of embeddings:\\nWe investigated scaling the embeddings to control the variance after PCA as recommended by Turian (2010). Results did not significantly change, and so we left the original in there. We have posted the corresponding plots with scaling:\\n(by embedding) http://goo.gl/wpXmD\\n(by task) http://goo.gl/hWYkX\\n\\nYou might also be interested in the fact that we ran all of our experiments on scaled and unscaled versions of the embeddings, but did not notice significant differences between them. 
We attribute both these results to the fact that we only used embeddings as features - Turian (2010) comments that his scaling approach is for mixing embedding features with words represented by binary features. \\n\\nAnonymous af94 (2nd reviewer, 3/1/2013)\\n\\n- Similar to Turian (2010)\\nWe\\u2019re quite different actually - Turian (2010) studied enhancing existing NLP tools with a variety of embeddings. This means that he combined the embeddings with existing features (from words, n-grams, or characters).\\nInstead, we use the embeddings as the sole features to understand their quality on their own. Moreover, we propose term/pair classification tasks to isolate the effect of context that influence the results of sequence tagging tasks.\\n\\n- Pairwise classification is easier because its an ensemble\\nYou raise an interesting point here. To explain our views: in our initial experiments we used the element-wise subtraction between two embeddings as features and they outperformed the single word version of the experiment. This seems to indicate that the embeddings encode information in the direction of the vector between two points in the space. We later modified the experiment to the one we report (it seemed more general at the time).\\n\\n- Classifiers? Which? Where?\\nWe use Linear SVM, Logistic Regression, RBF Kernel SVM on all our tasks and report the geometric mean of their results for each task. Each classifier result is obtained with 4-fold cross validation setup. \\n\\nWe also have some general comments on the nature of our tasks, and the decision to evaluate existing embeddings instead of training new ones\", \"on_the_nature_of_our_tasks\": \"Most of the previous evaluation on word embeddings has been done in the context of sequence tagging problems. While practical, we believe that this approach complicates the actual analysis of the features learned by neural language models. \\n\\nFor example, in a typical part of speech tagging setup the performance of the tagger over out-of-vocabulary words (without using character features) is much higher than random and might reach 70-80% accuracy. The decision, here, is clearly induced by the context. The influence of the context on the performance of classification, makes it harder to estimate the intrinsic quality of the word embeddings.\\n\\nWe agree that not all of our tasks map directly to traditional NLP tasks, but this is intended - each task illustrates one type of interesting behavior that can be found in the embedding space. Some behaviors are ones known to exist (e.g. plurality is a sub-component of Part-of-Speech tagging), one seems practical (sentiment), and one is just cool (e.g. synonym / antonym). The list is certainly not comprehensive, and we would appreciate additional suggestions.\", \"on_not_training_our_own_embeddings\": \"We are interested in NLP applications of feature learning, and we believe our results are valuable to consumers of such technology. We strove to evaluate the quality of what is available other researchers would use in their work.\\n\\nWe do agree it is hard to compare features produced under different conditions. It is a matter of fact that some of the embeddings will be better than others. The differences could be attributed to training specific factors (e.g. training time and datasets), or to the technique itself. In light of this difficulty we have tried to highlight that we are comparing the embeddings themselves, and not the techniques. 
We have toned down language contrary to this message.\\n\\nOn a final note, we would like to again thank our anonymous reviewers, and the greater ICLR community.\"}", "{\"review\": \"We thank the anonymous reviewers for the reference to Huang et al (2012). We have added the embeddings generated by Huang to our comparison, and we believe that they are an interesting addition.\\n\\nThe latest version of our submission can be found on arxiv.\"}" ] }
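The authors' reply spells out their evaluation protocol: embedding vectors are the sole features, three off-the-shelf classifiers (logistic regression, linear SVM, RBF SVM) are run with 4-fold cross-validation, the geometric mean of their accuracies is the task score, and pair tasks use the element-wise difference of two word vectors as features. The sketch below is a plausible rendering of that protocol with scikit-learn defaults standing in for unspecified hyperparameters; it is not the authors' code.

```python
# Sketch of the stated protocol: three classifiers on raw embedding features,
# 4-fold cross-validation, geometric mean of accuracies as the task score.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC, LinearSVC
from sklearn.model_selection import cross_val_score

def task_score(X, y):
    """X: embedding features (one row per word or pair), y: task labels."""
    clfs = [LogisticRegression(max_iter=1000), LinearSVC(), SVC(kernel="rbf")]
    accs = [cross_val_score(c, X, y, cv=4).mean() for c in clfs]
    return float(np.exp(np.mean(np.log(accs))))    # geometric mean

def pair_features(E, pairs):
    """Pair tasks: element-wise difference of the two word vectors."""
    return np.array([E[a] - E[b] for a, b in pairs])

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 64))                 # stand-in for 64-D embeddings
y = (X[:, 0] > 0).astype(int)                  # stand-in binary task labels
print(task_score(X, y))
```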
7hXs7GzQHo-QK
The Neural Representation Benchmark and its Evaluation on Brain and Machine
[ "Charles Cadieu", "Ha Hong", "Dan Yamins", "Nicolas Pinto", "Najib J. Majaj", "James J. DiCarlo" ]
A key requirement for the development of effective learning representations is their evaluation and comparison to representations we know to be effective. In natural sensory domains, the community has viewed the brain as a source of inspiration and as an implicit benchmark for success. However, it has not been possible to directly test representational learning algorithms against the representations contained in neural systems. Here, we propose a new benchmark for visual representations on which we have directly tested the neural representation in multiple visual cortical areas in macaque (utilizing data from [Majaj et al., 2012]), and on which any computer vision algorithm that produces a feature space can be tested. The benchmark measures the effectiveness of the neural or machine representation by computing the classification loss on the ordered eigendecomposition of a kernel matrix [Montavon et al., 2011]. In our analysis we find that the neural representation in visual area IT is superior to visual area V4, indicating an increase in representational performance in higher levels of the cortical visual hierarchy. In our analysis of representational learning algorithms, we find that a number of current algorithms approach the representational performance of V4. Impressively, we find that a recent supervised algorithm [Krizhevsky et al., 2012] achieves performance equal to that of IT for an intermediate level of image variation difficulty, and performs between V4 and IT at a higher difficulty level. We believe this result represents a major milestone: it is the first learning algorithm we have found that produces a representation on par with IT on this task of intermediate difficulty. We hope that this benchmark will serve as an initial rallying point for further correspondence between representations derived in brains and machines.
[ "evaluation", "brain", "neural representation benchmark", "machine", "representations", "neural representation", "benchmark", "analysis", "representational performance" ]
conferenceOral-iclr2013-conference
https://openreview.net/pdf?id=7hXs7GzQHo-QK
https://openreview.net/forum?id=7hXs7GzQHo-QK
ICLR.cc/2013/conference
2013
{ "note_id": [ "bbQXGy3KgUrcP", "zzS1zF0bHj6V7", "RRN_zPMIpEzTn", "fD8BKQYEClkvP", "E6HmsiyOvphK_", "g05Ygn6IJZ0iX" ], "note_type": [ "comment", "review", "comment", "review", "review", "comment" ], "note_created": [ 1363649700000, 1362225300000, 1363649460000, 1362156600000, 1362226860000, 1363649760000 ], "note_signatures": [ [ "Charles Cadieu" ], [ "anonymous reviewer 4738" ], [ "Charles Cadieu" ], [ "anonymous reviewer b28a" ], [ "anonymous reviewer d59c" ], [ "Charles Cadieu" ] ], "structured_content_str": [ "{\"reply\": \"Thank you for your review and feedback. Here are some specific replies. (>-mark indicates quote from review)\\n\\n> * The two macaque subjects in the study by Majaj et al (2012) are unlikely to have been exposed to images of 3 object categories in the dataset: cars, planes or other animals such as cows and elephants. They may have been exposed to images from the 4 remaining object classes: faces, chairs, tables and fruits. By consequence, their V4 or IT cortical areas might not be trained to recognize, even after prolonged exposure, that the image of a car at an angle is still a car with a variation, and not another type of objects. The authors do raise the question whether the neural representation could be enhanced with increased exposure.\\n\\nSome additional information, not included in the paper:\\n\\t* Our data suggest that during the passive viewing paradigm there is no change in classifier performance trained on the early part of the recording vs. a later part of the recording. So we see no exposure dependent classifier improvement through time. \\n\\t* When examining per-category classifier performance, there is no obvious pattern between the two sets of categories you point out (cars/planes/animals vs. faces/chairs/tables/fruits).\\n\\t* The absolute performance of classifiers trained for Cars, or Planes or Animals does not seem to be significantly different from classifiers trained on the other categories.\\nIt remains an interesting question how the neural representational performance would change through training the animal to make the desired categorizations.\\n\\n> * The paper does mention that only about a hundred sites, on the cortex surface, are selected for the image categorization task, compared to all the tens of thousands of hidden units in the deep architecture. Some further discussion on the fairness of such a comparison would be welcome.\\n\\nOne important point is that the measure we have chosen, by measuring accuracy against complexity, allows us to compare representations of different dimensionality. How a representation is affected by subsampling depends on the properties of that representation, and it appears that the neural representation is quite robust to such subsampling. For example, we have attempted to estimate the convergence of our measure as we increase the number of recording sites, from within our sample. It has been somewhat surprising to us that this curve appears to asymptote so quickly, but of course this may be due to a sampling bias in the procedure.\\n\\nThere are a number of factors that may bias the neural results related to sampling such a small number of sites from the cortex. Here is a short discussion of some of these factors:\\n\\t* Neurons that are close together in cortical space are typically correlated. This indicates that the number of relevant dimensions is far less than the total number of neurons in cortex. 
This is in-line with the fast convergence we observe of our measurement with increasing the number of sites.\\n\\t* The placement of the grids, and the spacing between electrodes in the grid may affect our measurement.\\n\\t* We examine multi-unit activity, instead of individual neurons. At the least this indicates that we are recording from more neurons than the number of sites. We estimate that the number of total neurons we are recording from is about 5x times the number of multi-units (estimated using the spike count ratio between multi-units and single-units collected in V4 and IT in our lab). It is not clear how single units would change our result, if at all.\\n\\t* A point that may not be obvious is that an inherit property of electrophysiology is that we are \\u201cblind\\u201d to the neurons that do not fire during our experimental procedure. Therefore, we may be introducing a bias by recording from only active neurons and \\u201cdiscarding\\u201d neurons that are not active. This would also introduce an underestimate to the number of potential neurons we are effectively recording. Note that including such silent neurons would not affect our kernel analysis measure, just the estimate of the total number of neurons we recorded.\\n\\t* We have a hardware limitation that limits us to recording 128 sites at a time. For a given animal, we chose the top-128 best visually driven sites. \\u201cVisual drivenness\\u201d was measured with a separate pilot image set (see Rust and DiCarlo 2012 and Chou et al.). Roughly this measure is the mean across the top 10% of absolute per-image d-primes between an image and blank. The top 10% is cross-validated and the absolute value is necessary to account for inhibitory sites. This sampling bias may affect our measure by discarding neural activity not relevant for the task, thus increasing or KA-AUC estimate of the neural representation.\\n\\nOne final point, at this technological point in time, we are only able to record from 128 multi-unit sites simultaneously. We achieve the total number of sites through multiple recording sessions and multiple animals. Given these limitations, this dataset is cutting-edge in terms of the number of sites, the number of images presented, and the number of repetitions of each image, especially for IT cortex recordings.\\n\\n> The Gaussian kernel uses a single coefficient sigma for all the features (i.e., all the neurons / hidden units). On one hand, the neural data are taken on the visual cortex areas V4 and IT, where all the electrode sites are expected to measure information that is relevant for image recognition tasks in general, and the deep learning architectures were all trained on image classification tasks. On the other hand, not all the features (hidden units or electrode sites) are equally relevant, all the time, to all these tasks, but their values are all scaled nevertheless. Would it make sense to tune the individual per-feature sigma coefficients in the Gaussian kernel, as in Chapelle et al (2002) 'Choosing multiple parameters for support vector machines'?\\n\\nUnder our proposed methodology, modifying the representation, even by rescaling dimensions, during test time is not allowed. It would be reasonable to take a representation, apply the method in Chapelle et al (2002) on the training set, and thus create a new representation to be used during testing. This sounds like a good idea, and we are interested to see what else the community comes up with!\\n\\n> Are all the 5 references by Pinto et al. 
necessary for this paper?\\n\\nMost are, but we will remove the Cosyne 2010 abstract and the FG 2011 paper in the next revision, as these points are covered by the remaining references.\\n\\n> The authors do not indicate how the images from the dataset were split among the two monkeys (were they shown the same images, or two, different, random sets of images?) and how the neural observations from the different electrode sites (58 IT and 70 V4 sites on one monkey, 110 IT and 58 V4 sites on the other monkey) were grouped. My guess is that the same sets of images were shown to the two monkeys and that their responses were concatenated into IT or V4 matrices of site vs image. \\n\\nYou are correct. The same sets of images (all of them) were shown to each of the monkeys. The sites from each monkey IT cortex were concatenated, as were the sites from each monkey V4 cortex. We will update the text to clarify.\\n\\n> The authors do not need to mention the low computational complexity of the LSE loss (section 2.2). It is not more complex than the logistic loss and the real point is what they say about intra-class variance and inter-class variance. \\n\\nThanks for the feedback, we will update the text.\\n\\n> I do not fully understand the protocol in section 2.3, namely: 'we evaluate 10 pre-defined subsets of images, each taking 80% of the data from each variation level'. \\n\\nWe have updated the text to clarify.:\\nFor each variation level, we compute the kernel analysis curve and KA-AUC ten times, each time sampling 80% of the images with replacement. The ten samples for each variation level are fixed for all representations.\\n\\n> Is total dimensionality D equal to the number of samples n?\\n\\nYes. We indicate this now in the text.\\n\\nWill update arXiv posting shortly.\"}", "{\"title\": \"review of The Neural Representation Benchmark and its Evaluation on Brain and\\n Machine\", \"review\": \"This paper applies the methodology for 'kernel analysis of deep networks' (Montavon et al, 2011) to the neural code measured on two areas (V4 and IT) on the visual cortex of the macaque. It compares, on the same test set, the biological responses of V4 or IT (spike counts measured at about 100 electrode sites) to the hidden unit activations on the penultimate layer of several state-of-the-art deep learning architectures trained on large image datasets: the 10 million YouTube images and deep sparse auto-encoder paper by Le et al (2012), a convolutional network by Krizhevsky et al (2012), two papers by Pinto et al, one on the V1 model, another on the high throughput L3 model class and the unsupervised learning paper by Coates et al (2012).\\n\\nThe authors show that the IT area of the visual cortex seems to have a neural code that is more discriminative than the neural code of the V4 area for a 7-class image categorization task under variations of pose, position and scale. The authors also show that one supervised deep learning algorithm (Krizhevsky et al, 2012) even produces hidden layer representation that seems to outperform IT on that task.\\n\\n\\nPros, novelty and quality:\\n\\nThis paper is the first to apply the same method for evaluating feature representations of both the biological neural code (measured on the visual cortex of a primate) and of hidden unit activations in state-of-the-art methods for image classification. It provides an extensive comparison of the penultimate hidden layer of several deep learning algorithms, vs. the V4 and IT areas of the visual cortex of two macaques. 
As such, it provides insight into which algorithms make a good hidden representation of images.\\n\\nThe method for evaluating the feature representations is essentially non-parametric and provides a robust way to assess the complexity of the decision boundary. The kernel analysis method measures what percentage of the information coming from the sample images is required to successfully train a nonlinear Gaussian SVM-like classifier on the features (neural code or hidden unit activations), or a linear classifier in the dual space, for a simple image categorization task. The kernel PCA approach of keeping the top d eigenvectors of the kernel matrix in the dual solution is more robust than the cross-validation performance or than the number of support vectors, when the number of samples is small.\\n\\nThe paper is well written, the claims are well supported by the experiments. The metric used in this study is robust and the main results (IT vs V4, Krizhevsky et al 2012 vs IT on high variations) are statistically significant.\", \"cons\": [\"There are no cons per se in this paper, only limitations in the methodology (linked to the choice of the dataset) that could be improved upon by using a more extensive dataset. Most of these limitations have been preemptively mentioned and discussed by the authors in section 4.\", \"The two macaque subjects in the study by Majaj et al (2012) are unlikely to have been exposed to images of 3 object categories in the dataset: cars, planes or other animals such as cows and elephants. They may have been exposed to images from the 4 remaining object classes: faces, chairs, tables and fruits. By consequence, their V4 or IT cortical areas might not be trained to recognize, even after prolonged exposure, that the image of a car at an angle is still a car with a variation, and not another type of objects. The authors do raise the question whether the neural representation could be enhanced with increased exposure.\", \"The paper does mention that only about a hundred sites, on the cortex surface, are selected for the image categorization task, compared to all the tens of thousands of hidden units in the deep architecture. Some further discussion on the fairness of such a comparison would be welcome.\"], \"other_comments\": [\"The Gaussian kernel uses a single coefficient sigma for all the features (i.e., all the neurons / hidden units). On one hand, the neural data are taken on the visual cortex areas V4 and IT, where all the electrode sites are expected to measure information that is relevant for image recognition tasks in general, and the deep learning architectures were all trained on image classification tasks. On the other hand, not all the features (hidden units or electrode sites) are equally relevant, all the time, to all these tasks, but their values are all scaled nevertheless. Would it make sense to tune the individual per-feature sigma coefficients in the Gaussian kernel, as in Chapelle et al (2002) 'Choosing multiple parameters for support vector machines'?\", \"Are all the 5 references by Pinto et al. necessary for this paper?\"], \"minor_comments\": [\"The authors do not indicate how the images from the dataset were split among the two monkeys (were they shown the same images, or two, different, random sets of images?) and how the neural observations from the different electrode sites (58 IT and 70 V4 sites on one monkey, 110 IT and 58 V4 sites on the other monkey) were grouped. 
My guess is that the same sets of images were shown to the two monkeys and that their responses were concatenated into IT or V4 matrices of site vs image.\", \"The authors do not need to mention the low computational complexity of the LSE loss (section 2.2). It is not more complex than the logistic loss and the real point is what they say about intra-class variance and inter-class variance.\", \"I do not fully understand the protocol in section 2.3, namely: 'we evaluate 10 pre-defined subsets of images, each taking 80% of the data from each variation level'.\", \"Is total dimensionality D equal to the number of samples n?\"]}", "{\"reply\": \"Thank you for your review and feedback. Here are some comments on your suggestions:\\n\\n> The dataset used in the paper is composed of objects that are superposed to an independent background. While authors motivate their choice by controlling the factors of variations in the representation, it would be interesting to know whether machine learning or brain representations benefit most from this particular setting.\\n\\nAs you point out, we inevitably have to make trade-offs when designing our experiments. Your feedback on removing this controlled variation as an interesting question helps us to design future datasets for experiments.\\n\\n> This paper also raises the important question of what is the best way of comparing representations. One can wonder, for example, whether the reduced set of kernels considered here (Gaussian kernels with multiple scales) introduces some bias in favor of 'Gaussian-friendly' representations.\\n\\nWe agree that exploring the effect of the kernel choice is an interesting direction. We hope to include this in future work (possibly a longer journal version).\"}", "{\"title\": \"review of The Neural Representation Benchmark and its Evaluation on Brain and\\n Machine\", \"review\": \"The paper presents a benchmark for comparing representations of image data in brains and machines. The benchmark consists of looking at how the image categorization task is encoded in the leading kernel principal components of the representation, thus leading to an analysis of complexity and noise. The paper contains extensive experiments based on a representive set of state-of-the-art learning algorithms on the machine learning side, and real recordings of macaques brain activity on the neural side.\\n\\nThe research presented in this paper is well-conducted, timely and highly innovative. It is to my knowledge the first time, that representations obtained with state-of-the-art machine learning techniques for vision are systematically compared with real neural representations. The authors motivate the use of kernel analysis, by the inbuilt robustness to sample size being desirable in this heterogeneous setting.\\n\\nThe dataset used in the paper is composed of objects that are superposed to an independent background. While authors motivate their choice by controlling the factors of variations in the representation, it would be interesting to know whether machine learning or brain representations benefit most from this particular setting.\\n\\nThis paper also raises the important question of what is the best way of comparing representations. One can wonder, for example, whether the reduced set of kernels considered here (Gaussian kernels with multiple scales) introduces some bias in favor of 'Gaussian-friendly' representations. 
Also, as suggested by the authors, it could be that the way neural recordings are represented leads to underestimating their discriminative ability.\"}", "{\"title\": \"review of The Neural Representation Benchmark and its Evaluation on Brain and\\n Machine\", \"review\": \"This paper assesses feature learning algorithms by comparing their performance on an object classification task to that of Macaque IT and V4 neurons. The work provides a new dataset of images, an analysis method for comparing feature representations based on kernel analysis, and neural feature vectors recorded from V4 and IT neurons in response to these images. The authors evaluate a number of recent representational learning algorithms, and identify a recent approach based on deep convolutional networks outperforms V4 and IT neurons.\\n\\nThe paper is the first of its kind in providing easy tools to evaluate new representations against high level neural visual representations. It's comparison method differs from prior work by investigating representational learning with respect to a task, and hence is less influenced by potentially task-irrelevant idiosyncrasies of the neural response. The final conclusion reached, that recent models are beginning to surpass V4 and IT models, is very interesting. The authors have clearly explained their rationale behind the many design choices required, and their choices seem very reasonable.\\n\\nBecause of the many design choices to be made in reducing neural data to a feature representation (the use of multi units rather than singular units, time averaging, short presentation times--many of which are discussed by the authors in the text), the resulting V4/IT performance is likely a lower bound on the true performance. To surpass a lower bound is good news, but to be a useful metric for future research efforts, this lower bound would should lie above current models' performance. The fact that the Krizhevsky model already outperforms V4/IT means there is less reason to compare future representation algorithms using the proposed metric in its current form.\\n\\nThe kernel analysis metric asks whether neural and artificial data can achieve similar classification performance for a given model complexity, but this is a separate question from asking whether the neural representation is similar to the artificial representation; e.g., for a classification task, one could imagine many different pairwise similarity structures that would remain linearly separable (or said with the standard metaphor, both a bird and a plane can fly, but rely on different mechanisms). While some aspects of the neural response may be task irrelevant, it may be complementary to augment the KA-AUC approach with a similarity-based approach. This could also be computed from the collected data and would help map levels within a computational model to visual brain areas. In general a more extensive discussion of and contrast with the Kriegeskorte approach would be helpful.\"}", "{\"reply\": \"Thank you for your review and feedback. (>-mark indicates quote from review)\\n\\n> Because of the many design choices to be made in reducing neural data to a feature representation (the use of multi units rather than singular units, time averaging, short presentation times--many of which are discussed by the authors in the text), the resulting V4/IT performance is likely a lower bound on the true performance. 
To surpass a lower bound is good news, but to be a useful metric for future research efforts, this lower bound would should lie above current models' performance. The fact that the Krizhevsky model already outperforms V4/IT means there is less reason to compare future representation algorithms using the proposed metric in its current form.\\n\\nThese are good points. We did not know what to expect before we began measuring models and have been quite surprised by the performance of the Krizhevsky et al. model. Even given that this model surpasses IT, we still believe it is a relevant benchmark for algorithmic research. There are many interesting factors that go into the performance that will be worthwhile exploring, especially those related to efficiency (our opinion).\\n\\nFurthermore, given the assumed \\u201clower-bound\\u201d nature of the neural representation, we hope that this effort will encourage experimentalists to collect higher lower-bounds of the neural representation. Ideally, over time, we imagine a scenario similar to the progression in computer vision of increasingly challenging benchmarks of neural representation.\\n\\n> The kernel analysis metric asks whether neural and artificial data can achieve similar classification performance for a given model complexity, but this is a separate question from asking whether the neural representation is similar to the artificial representation; e.g., for a classification task, one could imagine many different pairwise similarity structures that would remain linearly separable (or said with the standard metaphor, both a bird and a plane can fly, but rely on different mechanisms). While some aspects of the neural response may be task irrelevant, it may be complementary to augment the KA-AUC approach with a similarity-based approach. This could also be computed from the collected data and would help map levels within a computational model to visual brain areas. In general a more extensive discussion of and contrast with the Kriegeskorte approach would be helpful.\\n\\nThis is a very good point. We think matching neural and model representations at ever increasing levels of detail is an important pursuit. Generally, we consider a sort of \\u201chierarchy of measures\\u201d of increasing specificity between neural responses and model responses. The one we have proposed here is relatively abstract, and task dependent by intention. The methods and approach of Kriegeskorte measures a, relatively, more constraining mapping between neural and model representations. As the current manuscript is longer than the conference organizers had hoped, we will reserve a more extensive discussion of the Kriegeskorte approach for a longer journal version of the manuscript. In ultimately choosing a measure, which level of abstraction one chooses to be satisfied with is largely dependent on one\\u2019s goals.\"}" ] }
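For readers who want a concrete picture of the benchmark's measure, the sketch below renders the gist of the kernel analysis described in the abstract and reviews: build a Gaussian kernel over the representation, keep its leading eigenvectors, fit a simple least-squares classifier on those components, and trace accuracy as the number of components grows; the area under that accuracy-versus-complexity curve plays the role of the KA-AUC. The kernel width, loss, normalization, and the deliberate in-sample fit are simplifications; this is not the benchmark's reference implementation.

```python
# Simplified kernel analysis: accuracy of a least-squares classifier trained on
# the top-d kernel-PCA components of a Gaussian kernel, traced over d.
import numpy as np

def kernel_analysis_curve(F, labels, sigma, dims):
    """F: (n_samples, n_features) representation; labels: integer class ids."""
    sq = ((F[:, None, :] - F[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / (2 * sigma ** 2))
    vals, vecs = np.linalg.eigh(K)                     # ascending eigenvalues
    vals, vecs = vals[::-1], vecs[:, ::-1]
    Y = np.eye(labels.max() + 1)[labels]               # one-hot targets
    accs = []
    for d in dims:
        Phi = vecs[:, :d] * np.sqrt(np.clip(vals[:d], 0, None))
        W, *_ = np.linalg.lstsq(Phi, Y, rcond=None)    # least-squares classifier
        accs.append(float(((Phi @ W).argmax(1) == labels).mean()))
    return np.array(accs)                              # area under this curve ~ KA-AUC

rng = np.random.default_rng(0)
F, labels = rng.normal(size=(200, 50)), rng.integers(0, 7, size=200)
print(kernel_analysis_curve(F, labels, sigma=10.0, dims=[1, 5, 20, 100]))
```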
PRuOK_LY_WPIq
Matrix Approximation under Local Low-Rank Assumption
[ "Joonseok Lee", "Seungyeon Kim", "Guy Lebanon", "Yoram Singer" ]
Matrix approximation is a common tool in machine learning for building accurate prediction models for recommendation systems, text mining, and computer vision. A prevalent assumption in constructing matrix approximations is that the partially observed matrix is of low-rank. We propose a new matrix approximation model where we assume instead that the matrix is only locally of low-rank, leading to a representation of the observed matrix as a weighted sum of low-rank matrices. We analyze the accuracy of the proposed local low-rank modeling. Our experiments show improvements in prediction accuracy in recommendation tasks.
[ "local", "assumption matrix approximation", "observed matrix", "matrix approximation", "common tool", "machine learning", "accurate prediction models", "recommendation systems", "text mining", "computer vision" ]
conferencePoster-iclr2013-workshop
https://openreview.net/pdf?id=PRuOK_LY_WPIq
https://openreview.net/forum?id=PRuOK_LY_WPIq
ICLR.cc/2013/conference
2013
{ "note_id": [ "JNpPfPeAkDJqK", "CkupCgw-sY1o7", "4eqD-9JEKn4Ea", "9QsSQSzMpW9Ac" ], "note_type": [ "review", "review", "review", "review" ], "note_created": [ 1363672020000, 1363319940000, 1362123600000, 1362191520000 ], "note_signatures": [ [ "simon bolivar" ], [ "Joonseok Lee, Seungyeon Kim, Guy Lebanon, Yoram Singer" ], [ "anonymous reviewer 4b7c" ], [ "anonymous reviewer 76ef" ] ], "structured_content_str": [ "{\"review\": \"It has already been mentioned above, but I checked the longer version of the document posted at http://www.cc.gatech.edu/~lebanon/papers/lee_icml_2013.pdf\\nand there really is not enough discussion of the huge previous literature on locally low rank representations, going back at least as far back as\", \"http\": \"//ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1671801\\n\\nand continuing with *many* recent works which represent data precisely as a weighted combination of local low rank matrices, for example, any of the papers on subspace clustering (especially if the model is restarted several times), or anchor graph embeddings of W. Liu et al, or the Locally Linear Coding of Yang et. al. \\n\\nThis does not even begin to touch the many manifold learning papers which explicitly model data via locally linear structures (which are necessarily locally low rank) and glue these together to get a parameterization of the data.\"}", "{\"review\": [\"We appreciate for both of your reviews and questions.\", \"Kernel width: we validated the kernel width experimentally. Specifically, we examined the following kernel types: Gaussian, triangular, and Epanchnikov kernels. We also experimented with the kernel width (0.6, 0.7, 0.8). We found that sufficiently large width (>= 0.8) performs well, probably due to the fact that the similarity between users or items is typically small.\", \"Distance measure: as the second reviewer mentioned, we first factorize M using standard incomplete SVD. We then compute the distance d based on the arc-cosine similarity between the rows of factor matrices U and V. We found that this approach performs better than defining the distance based on the rows and columns of the original rating matrix. This choice deems better probably due to the fact that many pairs of users (or item pairs) have almost no shared items because of the high sparsity of the rating matrix.\", \"Variance observed due to the sampling of anchor points: We omitted the discussion of this issue due to space constraints (3 pages).\", \"Effect of Nadaraya-Watson smoothing: we observed that the prediction quality is almost the same for anchor points and non-anchor points. In other words, the approximation due to the Nadaraya-Watson procedure does not seem to be the limiting factor.\", \"Hoelder continuity: this assumption is actually essential in our model since we smooth locally with respect to the distance d. We performed large deviation analysis which is also omitted due to the lack of space.\", \"We omitted references due to the strict page limit. We will add references to the most relevant work. Moreover, the long version of the submission includes substantial discussion and references of related work.\", \"Please feel free to reply us if you have further questions.\", \"Thank you.\"]}", "{\"title\": \"review of Matrix Approximation under Local Low-Rank Assumption\", \"review\": \"Matrix Approximation under Local Low-Rank Assumption\\n\\nPaper summary\\n\\nThis paper deals with low-rank matrix approximation/completion. 
To reconstruct a matrix element M_{i,j}, the proposed method performs a weighted low rank matrix approximation which considers a similarity metric between matrix coordinates. More precisely, the weighting scheme emphasizes reconstruction errors close to the element {i,j} to reconstruct. As a computational speedup, the authors perform the low rank approximation only on a small set of coordinates and approximate the reconstruction for any coordinate through kernel estimation.\\n\\nReview Summary\\n\\nThe core idea of the paper is interesting and could be helpful in many practical applications of low rank decomposition. The paper reads well and is technically correct. On the negative side, I feel that the availability of a meaningful similarity metric between coordinates should be discussed. The experimental section could be greatly improved. There is no reference to related work at all.\\n\\nReview \\n\\nIn many application of matrix completion, the low rank decomposition algorithm is here to circumvent the fact that no meaningful similarity metric between coordinate pairs is available. For instance, if such a metric is available in a collaborative filtering scenario, one would simply take the input (customer, item) fetch its neighbors and average its ratings. Your algorithm presuppose the availability of such a metric, could you discuss this core aspect of your proposal in the paper?\\n\\nFollowing on this idea, would you consider as baseline performing the Nadaraya-Watson kernel regression on the matrix itself and reports the result in your experimental section. This would be meaningful to quantify how much comes from the low rank smoothing and how much comes simply from the quality of the similarity metric.\\n\\nStill in the experimental section, \\n- would you consider validating the kernel width?\\n- discuss the influence of the L2 regularizer which is not even introduced in the previous sections\\n- define clearly the d you use. To me d(s,s') compare two coordinate pairs and I do not know how to relate it to the arccos you are using, i.e. what are x,y?\\n- could you measure the variance observed due to the sampling of anchor points and could you report whether the reconstruction error is greater further from anchor points?\\n- how does Nadaraya-Watson smoothing compare with respect to solving the low rank problem for each point?\", \"references\": [\"you should at least refer to weighted low rank matrix approximation (Srebro & Jaakkola, ICML-03). It would be good to refer to prior work on expanding low rank matric approximation, given how fertile this field has been in the Netflix prize days (Maximum Margin Matrix Factorizations, RBM for matrix completion...).\", \"Details along the text\", \"In Eq. 1, to unify notation, you could use the projection Pi here as well\", \"Hoelder continuity: I do not understand how it relate to the smoothing kernel approach defined below. I believe this sentence could be removed.\"]}", "{\"title\": \"review of Matrix Approximation under Local Low-Rank Assumption\", \"review\": \"Approximation and completion of sparse matrices is a common task. As popularized by the Netflix prize, there are many possible approaches, and combinations of different styles of approach can lead to better predictions than individual methods. In this work, local prediction and low-rank factorization are combined as one coherent method.\\n\\nThis is a short paper, with an interesting idea, and some compelling results. 
It has the appealing property that one can almost guess what is going to come from the abstract. My key question while reading was how locality was going to be defined: one of the goals of low-rank learning is finding a space in which to represent entities. The paper uses a simple distance measure to local support points. I'm not sure if a value is missing from one row and not another whether it is ignored, or counted as zero. I wonder if an approach that finds a low-rank or factor model fit and uses that to define distances for local modelling might work better. Potentially one could iterate, after fitting the model, get improved distances and refit.\\n\\nI find the large improvement over the Netflix prize winners surprising given the large effort invested over three years to get that result. Is one relatively simple method really sufficient to blow that away? I think open code and scrutiny would be required to be sure. (Honest mistakes are not unprecedented: http://arxiv.org/abs/1301.6659v2 ) It will be a great result if correct.\\n\\nI found the complete lack of references distracting. There is clearly related work in this area. Some of it is even mentioned, just with no formal citations. This is a workshop submission for light touch review, but citations seem like a basic requirement for any scientific document.\", \"pros\": \"neat idea, quick, to-the-point presentation.\", \"cons\": \"I'm suspicious of the results, and would like to see a reference section.\"}" ] }
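The exchange above turns on two concrete ingredients: a kernel-weighted low-rank fit around each anchor point, and a Nadaraya-Watson blend of the anchor-specific reconstructions. The sketch below is a minimal, hypothetical rendering of that recipe on synthetic data; the distance measure, the Gaussian kernel, the solver and all names are our assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the local low-rank recipe discussed above (not the
# authors' code): per-anchor kernel-weighted low-rank fits blended with
# Nadaraya-Watson smoothing.  Hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, rank, n_anchors = 60, 50, 3, 5

# Synthetic partially observed rating matrix.
true = rng.normal(size=(n_users, rank)) @ rng.normal(size=(rank, n_items))
mask = rng.random((n_users, n_items)) < 0.3
R = np.where(mask, true, 0.0)

# Distances between users/items taken from a global SVD embedding, echoing the
# authors' reply about using factor matrices rather than raw rows/columns.
U, s, Vt = np.linalg.svd(R, full_matrices=False)
u_emb, v_emb = U[:, :rank], Vt[:rank, :].T

def kernel(a_u, a_i, h=0.8):
    """Gaussian product kernel between every (user, item) pair and an anchor."""
    du = np.linalg.norm(u_emb - u_emb[a_u], axis=1)
    di = np.linalg.norm(v_emb - v_emb[a_i], axis=1)
    return np.exp(-(du / h) ** 2)[:, None] * np.exp(-(di / h) ** 2)[None, :]

def weighted_lowrank(R, mask, W, rank, iters=400, lr=0.01, reg=0.1):
    """Gradient descent on the kernel-weighted squared error over observed entries."""
    A = 0.1 * rng.normal(size=(R.shape[0], rank))
    B = 0.1 * rng.normal(size=(R.shape[1], rank))
    for _ in range(iters):
        E = (A @ B.T - R) * mask * W
        A, B = A - lr * (E @ B + reg * A), B - lr * (E.T @ A + reg * B)
    return A @ B.T

anchors = list(zip(rng.integers(0, n_users, n_anchors), rng.integers(0, n_items, n_anchors)))
weights = [kernel(u, i) for u, i in anchors]
fits = [weighted_lowrank(R, mask, W, rank) for W in weights]

# Nadaraya-Watson blend of the anchor-specific reconstructions.
R_hat = sum(W * F for W, F in zip(weights, fits)) / (np.sum(weights, axis=0) + 1e-12)
print("RMSE on observed entries:", np.sqrt(((R_hat - true)[mask] ** 2).mean()))
```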
BmOABAaTQDmt2
A Semantic Matching Energy Function for Learning with Multi-relational Data
[ "Xavier Glorot", "Antoine Bordes", "Jason Weston", "Yoshua Bengio" ]
Large-scale relational learning is becoming crucial for handling the huge amounts of structured data generated daily in many application domains, ranging from computational biology and information retrieval to natural language processing. In this paper, we present a new neural network architecture designed to embed multi-relational graphs into a flexible continuous vector space in which the original data is preserved and enhanced. The network is trained to encode the semantics of these graphs in order to assign high probabilities to plausible components. We empirically show that it reaches competitive performance in link prediction on standard datasets from the literature.
[ "data", "graphs", "relational learning", "crucial", "huge amounts", "many application domains", "computational biology", "information retrieval", "natural language processing" ]
conferencePoster-iclr2013-workshop
https://openreview.net/pdf?id=BmOABAaTQDmt2
https://openreview.net/forum?id=BmOABAaTQDmt2
ICLR.cc/2013/conference
2013
{ "note_id": [ "gL2tL3lwAfLw1", "ibXkikDckabeu", "fjenfiFhEZfLM" ], "note_type": [ "review", "review", "review" ], "note_created": [ 1363968300000, 1362123900000, 1362379680000 ], "note_signatures": [ [ "Xavier Glorot, Antoine Bordes, Jason Weston, Yoshua Bengio" ], [ "anonymous reviewer 428a" ], [ "anonymous reviewer cae2" ] ], "structured_content_str": [ "{\"review\": \"We thank the reviewers for their comments.\\n\\nIt is true that our model should be compared with (Jenatton et al., NIPS12). This model has been developed simultaneously as ours, that's why it has not been included in the first version. We added this reference and their results (LFM model) in a revised version of our abstract (http://arxiv.org/abs/1301.3485v2). \\n\\nUnfortunately, SME is slightly outperformed by LFM on Kinships and Nations and equivalent on UMLS. Still, we believe that this work would make an interesting presentation at ICLR. First, together with LFM, SME is the only current method that can scale to large number of relation types (and both have been developed at the same time). The LFM paper actually displays an experiment on data with 5k relation types on which LFM and SME perform similarly. Second, contrary to all previous methods, SME models relation types as vectors, lying in the space as entities. From a conceptual viewpoint, this is powerful, since it models any relation types as a standard entity (and vice-versa). Hence, SME is the only method that could be applied on data for which any entity can also create relationships between other entities.\\n\\nWe now also compare our model with CANDECOMP-PARAFAC (CP), a standard tensor factorization method.\"}", "{\"title\": \"review of A Semantic Matching Energy Function for Learning with Multi-relational\\n Data\", \"review\": \"Semantic Matching Energy Function for Learning with Multi-Relational Data\\n\\nPaper Summary\\n\\nThis paper deals with learning an energy model over 3-way relationships. Each entity in the relation is associated a low dimensional representation and a neural network associate a real value to each representation triplet. The learning algorithm relies on an online ranking loss. Two models are proposed a linear model and a bilinear model.\\n\\nReview Summary\\n\\nThe paper is clear and reads well. Its use of the ranking loss function to this problem is an interesting proposition. It could give more details on the ranking loss and the training procedure. The experiments could also be more thorough. My main concern however is that references to a co-author own work have been omitted. This omission means that the authors pretend not to know that a model with better reported performance exists. This should be discouraged and I will recommend the rejection of the paper.\\n\\nReview\\n\\nThis paper is part of the recent effort of using distributed representation and various loss function for learning relational models. Papers focusing on this line of research include work from A. Bordes, J. Weston and Y. Bengio:\\n- A Latent Factor Model for Highly Multi-relational Data (NIPS 2012).\\n Rodolphe Jenatton, Nicolas Le Roux, Antoine Bordes and Guillaume Obozinski.\\n- Learning Structured Embeddings of Knowledge Bases (AAAI 2011). \\n Antoine Bordes, Jason Weston, Ronan Collobert and Yoshua Bengio.\\n\\nThe variations among this paper mainly involves \\n- model regularization (low rank, parameter tying...)\\n- loss function\\n\\nRegarding regularization, your proposition, Jenatton et al, and RESCAL are highly related. 
Basically your bilinear model seems to introduce a rank constraint on the 3D tensor representing all the relations {R_k, forall k}, as in RESCAL notation. Your bilinear model decomposes R_k = (E_{rel,k} W_l) (W_r E_{rel,k})^T, while the NIPS2012 model decomposes R_k as a linear combination of rank-one matrices shared across relations. Like Jenatton et al, you break the symmetry of the left and right relations.\n\nRegarding the loss, RESCAL uses MSE, Jenatton et al uses a logistic loss and you use a ranking loss.\n\nThese differences result in different AUCs. Jenatton et al is always better; RESCAL and your model are close. Given that Jenatton et al and RESCAL precede your submission, I feel it is necessary to check one thing at a time, i.e. training a model parameterized like RESCAL / Jenatton et al / yours with all three losses. This would give the best combination. This could give an empirical advantage to your ideas (either the parameterization or the ranking loss) over Jenatton et al.\n\nGiven that your model is worse in terms of AUC compared to Jenatton et al, I feel that you should at least explain why and maybe highlight some other advantages of your approach. I am disappointed that you do not refer to Jenatton et al: you know about this paper (shared co-author), the results on the same data are better and you do not even mention it.\n\nTypos/Details\nintro: unlike in previous work -> put citation here.\n2.2 (2) even if it remains low dimensional, nothing forces the dimension of ... -> barely understandable; rephrase this sentence.\"}", "{\"title\": \"review of A Semantic Matching Energy Function for Learning with Multi-relational\n Data\", \"review\": \"The paper proposes two functions for assigning energies to triples of\nentities, represented as vectors. One energy function essentially\nadds the vectors of the relations and the entities, while another\nenergy function computes a tensor product of the relation and both\nentities. The new energy functions appear to beat other methods.\n\nThe main weakness is in the relative lack of novelty. The paper\nproposes a slightly different neural network architecture for\ncomputing energies of object triplets from the ones that existed\nbefore, but its advantage over these architectures hasn't been\ndemonstrated conclusively. How does it compare to a simple tensor\nfactorization? (or even a factorization that computes an energy with a\n3-way inner product sum_i a_i R_i b_i? such a factorization embeds\nentities and relations in the same space) Without this comparison, the\nnew energy function is merely a 'new neural network architecture' that\nis not shown to outperform other architectures. And indeed, the\nperformance of a simple tensor factorization method matches the\nresults of the more sophisticated factorization method that is\nproposed here, on the datasets from [6] that overlap with the datasets\nhere (namely, UML and kinship).\n\nIn general, new energy functions or architectures are worthwhile only\nwhen they reliably improve performance (like the recently introduced\nmaxout networks) or when they have other desirable properties, such as\ninterpretability or simplicity.
The energy function proposed here is\\nmore complex than a simple tensor factorization method which appears\\nto work just as well.\\n\\nPros \\n - New energy function, method appears to work well\\nCons\\n - The architecture is not compared against simpler architectures,\\n and there is evidence that the simpler architectures achieve\\n identical performance.\"}" ] }
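The reviews above contrast several ways of scoring a (left entity, relation, right entity) triple. As a rough illustration, the snippet below computes two such scores on random embeddings: the plain 3-way inner product sum_i a_i R_i b_i mentioned by the second reviewer, and an SME(bilinear)-style score in which each side is first combined with the relation vector and the two results are matched with a dot product. The tensor parameterization, the scales and the omitted bias terms are simplifications of ours, not the released model.

```python
# Minimal sketch (our own illustration, not the SME code): two triple-scoring
# functions built from entity and relation embeddings.
import numpy as np

rng = np.random.default_rng(0)
d = 20                                 # embedding dimension (assumed)
e_l, e_r, rel = rng.normal(size=(3, d))

# (a) Plain 3-way inner product: sum_i e_l[i] * rel[i] * e_r[i]
score_3way = np.sum(e_l * rel * e_r)

# (b) SME(bilinear)-style: build a relation-dependent view of each side with a
#     3-way weight tensor, then match the two views with a dot product.
Wl = rng.normal(size=(d, d, d)) * 0.1  # hypothetical weight tensors
Wr = rng.normal(size=(d, d, d)) * 0.1
g_left = np.einsum('ijk,j,k->i', Wl, e_l, rel)
g_right = np.einsum('ijk,j,k->i', Wr, e_r, rel)
score_sme = g_left @ g_right

print(score_3way, score_sme)
```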
rOvg47Txgprkn
Learnable Pooling Regions for Image Classification
[ "Mateusz Malinowski", "Mario Fritz" ]
From the early HMAX model to Spatial Pyramid Matching, pooling has played an important role in visual recognition pipelines. Spatial pooling, by grouping local codes, equips these methods with a certain degree of robustness to translation and deformation while preserving important spatial information. Despite the predominance of this approach in current recognition systems, we have seen little progress toward fully adapting the pooling strategy to the task at hand. This paper proposes a model for learning a task-dependent pooling scheme -- one that includes previously proposed hand-crafted pooling schemes as a particular instantiation. In our work, we investigate the role of different regularization terms used in the proposed model, together with an efficient method to train it. Our experiments show improved performance over hand-crafted pooling schemes on the CIFAR-10 and CIFAR-100 datasets -- in particular improving the state of the art to 56.29% on the latter.
[ "model", "schemes", "learnable pooling regions", "image classification", "early hmax model", "spatial pyramid matching", "important role", "visual recognition pipelines", "spatial pooling" ]
conferencePoster-iclr2013-workshop
https://openreview.net/pdf?id=rOvg47Txgprkn
https://openreview.net/forum?id=rOvg47Txgprkn
ICLR.cc/2013/conference
2013
{ "note_id": [ "DtAvRX423kRIf", "4w1kwHXszr4D8", "uEhruhQZrGeZw", "6tLOt5yk_I6cd", "ttaRtzuy2NtjF", "ddaBUNcnvHrLK", "0IOVI1hnXH0m-", "L9s74sx8Ka9cP", "bYfTY-ABwrbB2", "xEdmrekMJsvCj", "mdD47o8J4hmr1" ], "note_type": [ "review", "review", "review", "review", "review", "review", "review", "comment", "review", "review", "review" ], "note_created": [ 1361903280000, 1362138060000, 1361927280000, 1363741140000, 1360139640000, 1361922660000, 1362196620000, 1363751520000, 1363737660000, 1361914920000, 1360973580000 ], "note_signatures": [ [ "Yoshua Bengio" ], [ "anonymous reviewer 2426" ], [ "anonymous reviewer 45d8" ], [ "anonymous reviewer 45d8" ], [ "Yann LeCun" ], [ "Ian Goodfellow" ], [ "anonymous reviewer c1a0" ], [ "Mateusz Malinowski" ], [ "Mateusz Malinowski, Mario Fritz" ], [ "anonymous reviewer 45d8" ], [ "Mateusz Malinowski" ] ], "structured_content_str": [ "{\"review\": \"This is an interesting investigation and I only have remarks to make regarding the CIFAR-10 and CIFAR-100 results and the rapidly moving state-of-the-art (SOTA). In particular, on CIFAR-100, the 56.29% accuracy is not state-of-the-art anymore (thankfully, our field is moving fast!). There was first the result by Zeiler & Fergus using stochastic pooling, bringing the SOTA to 57.49% accuracy. Then, using another form of pooling innovation (max-linear pooling units with dropout, which we call maxout units), we brought the SOTA on CIFAR-100 to 61.43% accuracy. On CIFAR-10, maxout networks also beat the SOTA, bringing it to 87.07% accuracy. All these are of course without using any deformations.\\n\\nYou can find these in this arxiv paper (which appeared after your submission): http://arxiv.org/abs/1302.4389\\n\\nMaxout units also use linear filters pooled with a max, but without the positivity constraint. We found that using dropout on the max output makes a huge difference in performance, so you may want to try that as well.\"}", "{\"title\": \"review of Learnable Pooling Regions for Image Classification\", \"review\": \"This paper proposes a method to jointly train a pooling layer and a classifier in a supervised way.\\nThe idea is to first extract some features and then train a 2 layer neural net by backpropagation (although in practice they use l-bfgs). The first layer is linear and the parameters are box constrained and regularized to be spatially smooth. The authors propose also several little tricks to speed up training (divide the space into smaller pools, partition the features, etc.).\\n\\nMost relevant work related to this method is cited but some references are missing.\\nFor instance, learning pooling (and unappealing) regions was also proposed by Zeiler et al. in an unsupervised setting:\\nDifferentiable Pooling for Hierarchical Feature Learning\\nMatthew D. Zeiler and Rob Fergus\", \"arxiv\": \"1207.0151v1 (July 3, 2012)\\nSee below for other missing references.\\n\\nThe overall novelty is limited but sufficient. In my opinion the most novel piece in this work is the choice of the regularizer that enforces smoothness in the weights of the pooling. This regularization term is not new per se, but its application to learning filters certainly is.\\nThe overall quality is fair. 
The paper lacks clarity in some parts and the empirical validation is ok but not great.\\nI wish the authors stressed more the importance of the weight regularization and analyzed that part a bit more in depth instead of focussing on other aspects of their method which seem less exciting actually.\\n\\nPROS\\n+ nice idea to regularize weights promoting spatial smoothness\\n+ nice visualization of the learned parameters\\n\\nCONS\\n- novelty is limited and the overall method relies on heuristics to improve its scalability\\n- empirical validation is ok but not state of the art as claimed\\n- some parts of the paper are not clear\\n- some references are missing\", \"detailed_comments\": [\"The notation in sec. 2.2 could be improved. In particular, it seems to me that pooling is just a linear projection subject to constraints in the parameterization. The authors mentions that constraints are used just for interpretability but I think they are actually important to make the system 'less unidentifiable' (since it is the composition of two linear stages).\", \"Regarding the box constraints, I really do not understand how the authors modified l-bfgs to account for these box constraints since this is an unconstrained optimization method. A detailed explanation is required for making this method reproducible. Besides, why not making the weights non-negative and sum to one instead?\", \"The pre-pooling step is unsatisfying because it seems to defeat the whole purpose of the method. Effectively, there seem to be too many other little tricks that need to be in place to make this method competitive.\", \"Other people have reported better accuracy on these datasets. For instance,\", \"Practical Bayesian Optimization of Machine Learning Algorithms\", \"Jasper Snoek, Hugo Larochelle and Ryan Prescott Adams\", \"Neural Information Processing Systems, 2012\", \"There are lots of imprecise claims:\", \"convolutional nets before HMAX and SPM used pooling and they actually learned weights in the average pooling/subsampling step\", \"'logistic function' in pag. 3 should be 'softmax function'\", \"the contrast with the work by Le et al. on pag.4 is weak since although pooling regions can be trained in parallel but the classifier trained on the top of them has to be done afterwards. This sequential step makes the whole procedure less parallelizable.\", \"second paragraph of sec. 3.2 about 'transfer pooling regions' is not clear.\"]}", "{\"review\": \"PS. After reading some of the other comments, I see that I was wrong about the weights in the linear layer being possibly negative. I actually wasn't able to find the part of the paper that specifies this. I think in general the paper could be improved by being a little bit more straightforward. The method is very simple but it's difficult to tell exactly what the method is from reading the paper.\\n\\nI definitely agree with Yann LeCun that the smoothness prior is interesting and should be explored in more detail.\"}", "{\"review\": \"I'm not sure why the authors are claiming state of the art on CIFAR-10 in their response, because the paper doesn't make this claim and I don't see any update to the paper. The method does not actually have state of the art on CIFAR-10 even under the constraint that it follow the architecture considered in the paper. It's nearly as good as Jia and Huang's method but not quite as good.\\n\\nBack-propagation over the max operator may be possible, but how would you parameterize the max to include or exclude different input features? 
Each max pooling unit needs to take the max over some subset of the detector layer features. Since including or excluding a feature in the max is a hard 0/1 decision it's not obvious how to learn those subsets using your gradient based method.\", \"regarding_the_competitiveness_of_cifar_100\": \"This is not a very important point because CIFAR-100 being competitive or not doesn't enter much into my evaluation of the paper. It's still true that the proposed method beats Jia and Huang on that dataset. However, I do think that my opinion of CIFAR-100 as being less competitive than CIFAR-10 is justified. I'm aware that CIFAR-100 has fewer examples per class and that this explains why the error rates published on that dataset are higher. My reason for considering it less competitive is that the top two papers on CIFAR-100 right now both say that they didn't even bother optimizing their hyperparameters for that dataset. Presumably, anyone could easily get a better result on that dataset just by downloading the code for one of those papers and playing with the hyperparameters for a day or two.\"}", "{\"review\": \"As far as I can tell, the algorithm in section 2.2 (pooling + linear classifier) is essentially a 2-layer neural net trained with backprop, except that the hidden layer is linear with positive weights.\\nThe only innovation seems to be the weight spatial smoothness regularizer of section 2.3. I think this should be emphasized.\", \"question\": \"why use LBFGS when a simple stochastic gradient would have been simpler and probably faster?\\n\\nThe introduction seems to suggest that pooling appeared with [Riesenhuber and Poggio 2009] and [Koenderink and van Doorn 1999], but models of vision with pooling (even multiple levesl of pooling) can be found in the neo-cognitron model [Fukushima 1980] and in convolutional networks [LeCun et al. 1990, and pretty much every subsequent paper on convolutional nets].\\nThe origin of the idea can be traced to the 'complex cell' model from Hubel and Wiesel's classic work on the cat's primary visual cortex [Hubel and Wiesel 1962].\\n\\nYou might also be interested in [Boureau et al. ICML 2010] 'A theoretical analysis of feature pooling in vision algorithms'.\"}", "{\"review\": \"This is a follow-up to Yoshua Bengio's comment. I'm lead author on the paper that he linked to.\\n\\nOne reason that Zeiler & Fergus got good results on CIFAR-100 with stochastic max pooling and my co-authors and I got good results on CIFAR-100 with maxout is that we were both using deep architectures. I think there's room to ask the scientific question 'how well can we do with one layer, just by being more clever about how to do the pooling?' even if this doesn't immediately lead to better answers to the engineering question, 'how can we get the best possible numbers on CIFAR-100?' So it's important to evaluate Malinowski and Fritz's method in the context of it being constrained to using a single-layer architecture.\\n\\nOn the other hand, it's not obvious to me that Malinowski and Fritz's training procedure would generalize to deeper achitectures, since the current implementation assumes that the output of the pooling layer is connected directly to the classification layer. 
It would be interesting to investigate whether this strategy (and Jia and Huang's strategy) works for deeper architectures.\"}", "{\"title\": \"review of Learnable Pooling Regions for Image Classification\", \"review\": \"The paper presents a method for training pooling regions in image classification pipelines (similar to those that employ bag-of-words or spatial pyramid models). The system uses a linear pooling matrix to parametrize the pooling units and follows them with a linear classifier. The pooling units are then trained jointly with the classifier. Several strategies for regularizing the training of the pooling parameters are proposed in addition to several tricks to increase scalability. Results are presented on the CIFAR10 and CIFAR100 datasets.\\n\\nThe main idea here appears to be to replace the 'hard coded' average pooling stage + linear classifier with a trainable linear pooling stage + linear classifier. Though I see why this is natural, it is not clear to me why using two linear stages is advantageous here since the combined system is no more powerful than connecting the linear classifier directly to all the features. The two main advantages of competing approaches are that they can dramatically reduce dimensionality or identify features to combine with nonlinear pooling operations. It could be that the performance advantage of this approach (without regularization) comes from directly learning the linear classifier from all the feature values (and thus the classifier has lower bias).\\n\\nThe proposed regularization schemes applied to the pooling units potentially change the picture. Indeed the authors found that a 'smoothness' penalty (which enforces some spatial coherence on the pooling weights) was useful to regularize the system, which is quite similar to what is achieved using hand-coded pooling areas. The advantage is that the classifier is given the flexibility to choose other weights for all of the feature values while retaining regularization that is similar to hand-coded pooling. How useful this effect is in general seems worth exploring in more detail.\", \"pros\": \"(1) Potentially interesting analysis of regularization schemes to learn weighted pooling units.\\n(2) Tricks for pre-training the pooling units in batches and transferring the results to other datasets.\", \"cons\": \"(1) The method does not appear to add much power beyond the ability to specify prior knowledge about the smoothness of the weights along the spatial dimensions.\\n(2) The results show some improvement on CIFAR-100, but it is not clear that this could not be achieved simply due to the greater number of classifier parameters (as opposed to the pooling methods proposed in the paper.)\"}", "{\"reply\": \"As Table 1 shows our method gives similar results to Jia's method (79.6% and 80.17% accuracy). 
If we allow transfer between datasets, our method gives slightly better results (Table 5 reports 80.35% test accuracy for our method).\\n\\nWe could weight features with real-valued weights constrained to unit cube, and next use max-operator.\"}", "{\"review\": \"We thank all the reviewers for their comments.\\nWe will include suggested papers on related work and origins of pooling architectures as well as improvement on the state of the art that occurred in the meanwhile.\\nThe reviewers acknowledge our analysis of regularization schemes to learn weighted pooling units together with a regularizer that promotes spatial smoothness.\\n\\nOur work aims at replacing hand-crafted pooling stage in computer vision architectures ([1], [2], [3] and [4]) where the pooling is a way to reduce dimensionality of the features while preserving spatial information. Handcrafted spatial pooling schemes that operate on an image-level are still part of many state of the art architectures. In particular, recent approaches that aim at higher-level semantic representations (e.g. [3], [6]) follow this paradigm and are within the scope of our method. We therefore believe that our method will find wide applicability in those scenarios.\", \"anonymous_45d8\": \"We don't agree that CIFAR-100 is less-competitive as the state-of-the-art results are lower than CIFAR-10, moreover CIFAR-100 contains fewer examples per class for training and 10x more classes.\\nWe are not restricted to sum pooling as back-propagation over the max operator is possible. \\nWe use non-negativity constraint for the weights as Formula 5 shows.\\nSparsity constraint on the weights has no computational benefits at test time as the weighted sum ranges over the whole image. \\nConcerning the remarks about increased computation time, we would like to point out that computational costs are dominated by the coding procedure. The pooling stage - hand-crafted or learnt - is on the order of milliseconds per image.\\nThe connection between the matrix factorization of the weights of the softmax classifier and pooling stage is an interesting additional observation, however, the paper analyzes the regularization terms of the pooling operator and therefore regularization of the factorized weight matrix.\\nIn our work we want to make our architecture consistent with other computer vision architectures that use image-level pooling stage ([1], [2], [3], [4] and [5]) exploiting the shared representation among classes and computational benefits of this method.\", \"anonymous_2426\": \"The method produces state-of-the-art results, at the time of submission, on CIFAR-100 and state-of-the-art on both CIFAR-10 and CIFAR-100 given SPM architecture ([1], [2], [4]).\\nAs our results show, the smoothness constrain/regularization is the most crucial (Table 3), non-negativity constraint though increases the interpretability of the results. We use lbfgs with projection onto a unit box after every weights update. \\nAlthough some of our speed-ups to make the system more scalable are heuristic, they are appreciated e.g. by 'Anonymous c1a0' and share similarities with recently proposed approaches for scalable learning as we reference in the paper.\", \"anonymous_c1a0\": \"Increasing number of classification parameters in the SPM architecture ([1], [2], [4]) requires a bigger codebooks which increases the complexity of encoding step as every image patch has to be assigned to a cluster via triangle coding [4]. This would lead to a significant increase at test time. 
On the other hand, our architecture adds little overhead compared to SPM architectures at the test time.\\n\\nAnonymous 45d8 & Anonymous 2426:\\nThe pre-pooling step is pooling over a small neighborhood (over a 3x3 neighboring pixels), and therefore can be seen as form of weight sharing. This is a technical detail in order to reduce memory consumption and training time. This doesn't defy the main argument given in the paper as pooling is learnt over larger areas.\"}", "{\"title\": \"review of Learnable Pooling Regions for Image Classification\", \"review\": \"Summary:\\nThe paper proposes to replace the final stages of Coates and Ng's CIFAR-10 classification pipeline. In place of the hand-designed 3x3 mean pooling layer, the paper proposes to learn a pooling layer. In place of the SVM, the paper proposes to use softmax regression jointly trained with the pooling layer.\\n\\nThe most similar prior work is Jia and Huang's learned pooling system. Jia and Huang use a different means of learning the pooling layer, and train a separate logistic regression classifier for each class instead of using one softmax model.\\n\\nThe specific method proposed here for learning the pooling layer is to make the pooling layer a densely connected linear layer in an MLP and train it jointly with the softmax layer.\\n\\nThe proposed method doesn't work quite as well as Jia and Huang's on the CIFAR-10 dataset, but does beat them on the less-competitive CIFAR-100 benchmark.\", \"pros\": \"-The method is fairly simple and straightforward\\n-The method improves on the state of the art of CIFAR-100 (at the time of submission, there are now two better methods known to this reviewer)\", \"cons\": \"-I think it's somewhat misleading to call this operation pooling, for the following reasons:\\n1) It doesn't allow learning how to max-pool, as Jia and Huang's method does. It's sort of like mean pooling, but since the weights can be negative it's not even really a weighted average.\\n2) Since the weights aren't necessarily sparse, this loses most of the computational benefit of pooling, where each output is computed as a function of just a few inputs. The only real computational benefit is that you can set the hyperparameters to make the output smaller than the input, but that's true of convolutional layers too.\\n-A densely connected linear layer followed by a softmax layer is representationally equivalent to a softmax layer with a factorized weight matrix. Any improvements in performance from using this method are therefore due to regularizing a softmax model better. The paper doesn't explore this connection at all.\\n-The paper doesn't do proper controls. For example, their smoothness prior might explain their entire success. Just applying the smoothness prior to the softmax model directly might work just as well as factoring the softmax weights and applying the smoothness prior to one factor.\\n-While the paper says repeatedly that their method makes few assumptions about the geometry of the pools, their 'pre-pooling' step seems to make most of the same assumptions as Jia and Huang, and as far as I can tell includes Coates and Ng's method as a special case.\"}", "{\"review\": \"Our paper addresses the shortcomings of fixed and data-independent pooling regions in architectures such as Spatial Pyramid Matching [Lazebnik et. al., 'Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories', CVPR 2006], where dictionary-based features are pooled over large neighborhood. 
In our work we propose an alternative data-driven approach for the pooling stage, and there are three main novelties of our work.\\n \\nFirst of all, we base our work on the popular Spatial Pyramid Matching architectures and generalize the pooling operator allowing for joint and discriminative training of both the classifier together with the pooling operator. The realization of the idea necessary for training is essentially an Artificial Neural Network with dense connections between pooling units with the classifier, and the pooling units connected with the high-dimensional dictionary-based features. Therefore, back-propagation and the Neural Network interpretation should rather be considered here as a tool to achieve joint and data-dependent training of the parameters of the pooling operator and the classifier. Moreover, our parameterization allows for the interpretation in terms of spatial regions. The proposed architecture is an alternative to another discriminatively trained architecture presented by Jia et. al. ['Beyond spatial pyramids: Receptive field learning for pooled image features' CVPR 2012 and NIPS workshop 2011] outperforming the latter on the CIFAR-100 dataset.\\n\\nSecondly, as opposed to the previous Spatial Pyramid Matching schemes, we don't constrain the pooling regions to be the identical for all coordinates of the code.\\n \\nLastly, as you've said, we investigate regularization terms. The popular spatial pyramid matching architectures which we generalize in this paper are typically used to pool over large spatial regions. In combination with our code-specific pooling scheme this leads to a large number of parameters that call for regularization. In our investigations of different regularizers it turns out that a smoothness regularizer is key to strong performance for this type of architecture on CIFAR-10 and CIFAR-100 datasets.\", \"concerning_lbfgs_vs_sgd\": \"We have chosen LBFGS out of convenience, as it tends to have fewer parameters.\\n\\nThanks for pointing out missing references.\"}" ] }
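Two mechanics recur throughout this discussion: pooling as a box-constrained linear map from spatially arranged codes to pooled responses, and a smoothness regularizer on the pooling weights. The fragment below illustrates both in isolation; the grid size, the penalty form and the clipping step are our assumptions rather than the paper's exact formulation.

```python
# Illustrative sketch only (not the authors' code): weighted pooling over a
# spatial grid of codes, plus a spatial-smoothness penalty on the weights.
import numpy as np

rng = np.random.default_rng(0)
grid = 6             # spatial grid of the encoded image (assumed)
K = 4                # dictionary size per location
P = 10               # number of learned pooling units

# Pooling weights: one spatial map per (pooling unit, code channel), kept in
# the unit box; np.clip stands in for the projection applied after updates.
w = np.clip(rng.normal(0.5, 0.3, size=(P, K, grid, grid)), 0.0, 1.0)

def smoothness_penalty(w):
    """Sum of squared differences between spatially neighbouring pooling weights."""
    return (np.diff(w, axis=2) ** 2).sum() + (np.diff(w, axis=3) ** 2).sum()

def pool(codes, w):
    """codes: (K, grid, grid) encoded image -> vector of P pooled responses."""
    return np.einsum('pkhw,khw->p', w, codes)

codes = rng.random((K, grid, grid))    # stand-in for triangle-coded features
print(pool(codes, w).shape, smoothness_penalty(w))
```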
5Qbn4E0Njz4Si
Hierarchical Data Representation Model - Multi-layer NMF
[ "Hyun-Ah Song", "Soo-Young Lee" ]
Understanding and representing the underlying structure of feature hierarchies present in complex data in an intuitively understandable manner is an important issue. In this paper, we propose a data representation model that demonstrates hierarchical feature learning using NMF with a sparsity constraint. We stack a simple unit algorithm into several layers to take a step-by-step approach to learning. By utilizing NMF as the unit algorithm, our proposed network provides an intuitive understanding of the learning process. It is able to demonstrate the hierarchical feature development process and also to discover and represent feature hierarchies present in complex data in an intuitively understandable manner. We apply hierarchical multi-layer NMF to image data and document data to demonstrate the feature hierarchies present in these complex data sets. Furthermore, we analyze the reconstruction and classification abilities of our proposed network and show that the hierarchical feature learning approach outperforms a standard shallow network. By providing the underlying feature hierarchies in complex real-world data sets, our proposed network is expected to help machines develop intelligence based on the learned relationships between concepts and, at the same time, perform better with the small number of features provided for data representation.
[ "complex data", "network", "feature hierarchies", "understandable manner", "hierarchical feature", "nmf", "nmf understanding", "underlying structure" ]
conferencePoster-iclr2013-workshop
https://openreview.net/pdf?id=5Qbn4E0Njz4Si
https://openreview.net/forum?id=5Qbn4E0Njz4Si
ICLR.cc/2013/conference
2013
{ "note_id": [ "Oel6vaaN-neNQ", "ZIE1IP5KlJTK-", "-B7o-Yy0XjB0_", "APRX62OnXa6nY", "CC-TCptvxlrvi" ], "note_type": [ "review", "review", "comment", "comment", "comment" ], "note_created": [ 1362279120000, 1362127980000, 1363334160000, 1363255140000, 1363255380000 ], "note_signatures": [ [ "anonymous reviewer 7984" ], [ "anonymous reviewer d1c1" ], [ "Hyun-Ah Song" ], [ "Hyun-Ah Song" ], [ "Hyun-Ah Song" ] ], "structured_content_str": [ "{\"title\": \"review of Hierarchical Data Representation Model - Multi-layer NMF\", \"review\": \"The paper proposes to stack NMF models on top of each other. At each level, a non-linear function of normalized decomposition coefficients is used and decomposed using another NMF.\\n\\nThis is essentially an instance of a deep belief network, where the unsupervised learning part is done using NMF, which, to the best of my knowledge had not been done before.\\n\\nThe new method is then applied to document data where a hierarchy of topics seems to be discovered. Applications are also shown on reconstructing digits.\\n\\nThe extended abstract however does not give many details on all the specifics of the method.\", \"comments\": \"-It would have been nice (a) to relate the hierachy to existing topic models [A,B], and (b) to see more topics.\\n-On Figure 2, why are reconstruction errors decreasing with the number of features? \\n-On the digits, the differences between shallow and deep networks are not clear.\\n\\n[A] D. Blei, T. Griffiths, and M. Jordan. The nested Chinese restaurant process and Bayesian nonparametric inference of topic hierarchies. Journal of the ACM, 57:2 1\\u201330, 2010. \\n\\n[B] R. Jenatton, J. Mairal, G. Obozinski, F. Bach. Proximal Methods for Hierarchical Sparse Coding. Journal of Machine Learning Research, 12, 2297-2334, 2011.\", \"pros\": \"-Interesting idea of stacking NMFs.\", \"cons\": \"-Experimental results are interesting but not great. What is exactly achieved is not clear.\"}", "{\"title\": \"review of Hierarchical Data Representation Model - Multi-layer NMF\", \"review\": \"This paper proposes a multilayer architecture based upon stacking non-negative matrix factorization modules and fine-tuning the entire architecture with reconstruction error. Experiments on text classification and MNIST reconstruction demonstrate the approach.\\n\\nDuring layer-wise initialization of the multilayer architecture NMF is performed to obtain a low-rank approximation to the input. The output of an NMF linear transform passes through a nonlinearity to form the input to the subsequent layer. These nonlinear outputs of a layer are K = f(H) where f(.) is a nonlinear function and H are linear responses of the input. During joint network training, a squared reconstruction error objective is used. Decoding the final hidden layer representation back into the input space is performed with explicit inversions of the nonlinear function f(.). Overall, the notation and description of the multi-layer architecture (section 3) is quite unclear. It would be difficult to implement the proposed architecture based only upon this description\\n\\nExperiments on Reuters text classification and MNIST primarily focus on reconstruction error and visualizing similarities discovered by the model. The text similarities are interesting, but showing a single learned concept does not sufficiently demonstrate the model's ability to learn interesting structure. 
MNIST visualizations are again interesting, but the lack of MNIST classification results is strange given the popularity of the dataset. Finally, no experiments compare to other models e.g. simple sparse auto-encoders to serve as a baseline for the proposed algorithm.\", \"notes\": [\"-The abstract should be included as part of the paper\", \"Matlab notation is paragraph 2 of section 2 is a bit strange. Standard linear algebra notation (e.g. I instead of eye) is more clear in this case\", \"'smoothen' -> smooth or apply smoothing to\"], \"summary\": [\"A stacking architecture based upon NMF is interesting\", \"The proposed architecture is not described well. Others would have difficulty replicating the model.\", \"Experiments do not compare to sufficient baselines or other layer-wise feature learners.\", \"Experiments and visualizations do not sufficiently demonstrate the claim that NMF-based feature hierarchies are easier to interpret\"]}", "{\"reply\": \"- Points on the con that experimental results are not great:\\nWhen we refer to Figure 2 in the paper, the proposed hierarchical feature extraction method results in much better classification and reconstruction performance, especially for small number of features.\\nIt can be interpreted that, by taking hierarchical stages in reducing dimensions, our proposed method successfully finds more meaningful and helpful features in aspect of representing the data, compared to reducing dimensions at one step.\"}", "{\"reply\": \"- Description of proposed architecture:\\nSorry about insufficient description of the proposed architecture! We had to fit all of the content into 3 pages.. We added more details on the architecture of our network, which includes actual computations involved in implementing the network in Appendix.\\n\\n- Comparison with other baselines:\\nIn this paper, we concentrated on proving our hypothesis that extending NMF into several layers will discover feature hierarchies present in the data, and provide better and meaningful features, compared to the standard shallow one (self-comparison). Since we wanted to observe the behavioral change when extended into several layers, we solely compared the result before and after stacking layers. We regarded comparing the performance with other baselines or layer-wise feature learners as not so important because other baselines do not provide us intuitive demonstration of the feature hierarchies. However, like your comment, we think that it may be meaningful to compare with the other feature learners and see if our proposed network can function as a simple feature extraction algorithm (without considering discovery of feature hierarchies).\\n\\n- Experiments and visualizations do not sufficiently demonstrate the claim that NMF-based feature hierarchies are easier to interpret:\\nThrough our proposed research, we wanted to prove that hierarchical learning with NMFs can present intuitive feature hierarchies by learning feature relationships across the layers.\\nWe think our proposed network provides meaningful feature hierarchies compared to other networks (not necessarily easier interpretation) because:\\na) compared to other feature learning networks that does not restrict the sign of the data, our proposed network intuitively represents learned features with the non-negativity property.\\nb) while shallow feature learning networks that is able to demonstrate features intuitively (ex. 
by restricting non-negativity constraint, NMF or other topic models), it learns relationships of features which develop into hierarchies.\\nc) although some recent topic models provides topic hierarchies in intuitive manner, the application is restricted to only document data. However, our proposed network can be applied to any types of data with non-negative signs, not just documents (it can be used to learn underlying feature hierarchies present in image, as well.)\\nWith the experimental results in this paper, we are aware of the fact that interpretation of sub-class topics may not seem clear; we showed how words in the first layer features differ slightly from each other in terms of content, but develop into the same broad topic class. However, we believe that this has shown a good signal of potential for development. In order to reinforce our claim and strongly support the function of the proposed network, we would like to look for a text document set that provides ground-truth label of sub-categories as well.\\n\\n- We included abstracts in the revised version, and corrected the notes you made above! Thanks!\\n\\n- Remainder: The revised version will be available at Fri, 15 Mar 2013 00:00:00 GMT.\"}", "{\"reply\": [\"Details on the specifics of the method:\", \"Sorry for the insufficient explanations on the method. We had to fit into 3 page limit.. We added detailed explanation of the method and computation in Appendix.\", \"Hierarchies by topic models [A,B]:\", \"Thanks for the recommendation! In this paper, we focused on the general property of the hierarchical learning of the proposed network, regardless of types of data set (whether it is document set or image set, etc). This is the reason why we did not compare our result with any of other topic model result. However, we think it is meaningful to carefully observe how it works in different types of data sets in more detail. As furtherwork, we would like to look into more details on the function of our proposed network in terms of document dataset application by comparing the result with [A,B].\", \"On Figure 2, the x-axis represents the number of features, and also, the dimensions provided for data representation in H. If we increase the number of features provided for learning, the network learns features separately so that it can come up with more exact reconstruction of the original data. (For example, if we restrict the number of feature to one, the network has to cram the essential parts necessary for data representation into one, and it is hard to represent exactly what we want using just one feature or building block. However, if we provide sufficient number of features, network learns several essential parts separately, which means more number of more accurate building blocks and it will be easier to represent data by making use of the necessary features or building blocks, which may lead to more accurate reconstruction of the data) This is the reason why reconstruction error decreases with respect to increasing number of features.\", \"MNIST dataset difference between shallow and deep network:\", \"Sorry for the small image! Instead of showing whole 0-9 MNIST digit reconstruction, we enlarged and focused on the image to a few example of digits that show clear difference between the shallow and deep network.\", \"What is achieved by the research?\"], \"there_are_mainly_two_contributions_of_our_work\": \"By taking step-by-step approach in learning of features using NMFs, 1. 
we discovered the relationships between low level features and high level features, and intuitively demonstrated class hierarchies present in the data, and additionally, 2. we learned more meaningful features which lead to better distributed data representation, which results in better classification and reconstruction performance (provided insufficient dimensions for data representation).\\nBy extending NMF into several layers, we proposed a way to discover intuitive concept hierarchies by learning relationships between features, regardless of types of data set (while topic models are focused on revealing concept hierarchies of document set only, our proposed network can handle any types non-negative data sets.). With the experiments of comparison with the shallow network, we also proved that taking step-by-step approach in learning benefits feature learning as well. (this will be supporting evidence of further application of the proposed network)\\n \\n- Remainder: The revised version will be available at Fri, 15 Mar 2013 00:00:00 GMT.\"}" ] }
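As a concrete reading of the layer-wise scheme debated in this thread, the sketch below stacks two plain multiplicative-update NMF factorizations and squashes the coefficients between layers. The tanh squashing, the ranks and the update rule are illustrative assumptions; the paper's actual architecture (including its sparsity constraint and joint fine-tuning) is the one described in its appendix.

```python
# Minimal sketch (our reconstruction, not the authors' code) of layer-wise NMF
# stacking: factor the data, squash the coefficients, and factor again.
import numpy as np

rng = np.random.default_rng(0)

def nmf(V, r, iters=300, eps=1e-9):
    """Plain multiplicative-update NMF: V (m x n) ~ W (m x r) @ H (r x n)."""
    m, n = V.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

X = rng.random((100, 400))             # non-negative data (e.g. tf-idf or pixels)
layers, V = [], X
for r in (64, 16):                     # two stacked layers of decreasing rank
    W, H = nmf(V, r)
    layers.append((W, H))
    V = np.tanh(H)                     # assumed squashing before the next layer

recon1 = layers[0][0] @ layers[0][1]
print("layer-1 relative reconstruction error:",
      np.linalg.norm(X - recon1) / np.linalg.norm(X))
```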
4UGuUZWZmi4Ze
Feature grouping from spatially constrained multiplicative interaction
[ "Felix Bauer", "Roland Memisevic" ]
We present a feature learning model that learns to encode relationships between images. The model is defined as a Gated Boltzmann Machine, which is constrained such that hidden units that are nearby in space can gate each other's connections. We show how frequency/orientation 'columns' as well as topographic filter maps follow naturally from training the model on image pairs. The model also helps explain why square-pooling models yield feature groups with similar grouping properties. Experimental results on synthetic image transformations show that spatially constrained gating is an effective way to reduce the number of parameters and thereby to regularize a transformation-learning model.
[ "model", "feature", "multiplicative interaction feature", "multiplicative interaction", "relationships", "images", "gated boltzmann machine", "units", "space", "connections" ]
conferenceOral-iclr2013-conference
https://openreview.net/pdf?id=4UGuUZWZmi4Ze
https://openreview.net/forum?id=4UGuUZWZmi4Ze
ICLR.cc/2013/conference
2013
{ "note_id": [ "D3uj2h4TUE2ce", "VlvAlDIDt_Sa0", "yTWI4b3EnB4CU", "ah5kV2s_ULa20" ], "note_type": [ "review", "review", "review", "review" ], "note_created": [ 1363179420000, 1362171300000, 1361968140000, 1362214680000 ], "note_signatures": [ [ "Felix Bauer" ], [ "anonymous reviewer ea89" ], [ "anonymous reviewer 43a2" ], [ "anonymous reviewer cce5" ] ], "structured_content_str": [ "{\"review\": \"Points raised by reviewers:\", \"reviewer_43a2\": \"(1) Good classification of rotations and scale are reported in Table 1, unfortunately these appear to be on toy, not natural, images. Impressive grouping of complex transformations such as translations and rotations are shown in Figure 3.\\n(2) While the gabors learned on natural image patches are interesting it is hard to judge them without knowing how large they are. These details seem to be omitted from the paper.\\n(3) It is not immediately obvious what applications would benefit from this type of model. It also seems like it could be relatively expensive computationally, and there was no mention of timing versus the standard gated Boltzmann machine model.\", \"reviewer_ea89\": \"(4) Please write the formulas for the full model that you train, not just the encoding. Even though they exist in other papers, they are not so complicated to write them down here.\\n(5) You say that the pinwheel patterns don't appear in rodents because they don't have a binocular vision. However you haven't actually obtained the pinwheels from binocularity but from video.\\n(6) The formula (9) is unclear and should be fixed. For example, how come the f index appears only once?\\n(6a) Figure 3: What is index of parameter set? In text you talk about different datasizes - where are the results for these?\", \"reviewer_cce5\": \"(7a) * Targets somewhat of a 'niche audience'; may be less accessible to the general representation learning community\\n(7b) * Presents a lot of qualitative but not quantitative results\\n(8) Fig 2: It's difficult to read/understand the frequency scale (left, bottom); it seems that frequency has been discretized; what do these bins represent, and how are they constructed?\\n(9) In section 2.1, could you be more explicit about what you mean by 'matching' the input filters (and the output filters). I assume the matching is referring to the connectivity by connecting filters to mapping units? Matching comes up again in Section 3, so it would help to clarify this early on.\\n(10) Check equation 8: what happened to C - should it not show up there?\", \"our_response\": \"We thank the reviewers for their comments and suggestions. We submitted an updated version of the paper in which we address these points:\\n\\n(1, 7b) We agree. While the toy results do suggest that the reduction in the number of parameters caused by grouping helps generalize, real world applications like activity recognition work much better with local receptive fields and pooling, which we feel is much too complicated for a first paper in this direction.\\n(2) We updated the paper to include a more detailed description of the datasets and experiments.\\nX (3) We included a discussion of computational complexity. The complexity scales with the square of the group size (which is typically small, like 5). 
This is not a huge increase in complexity because without grouping the model has to account for the equivalent number of products by replicating filters.\\n(4) We included the equations as suggested.\\n(5) This is a good point, and we now clarify this in the updated version of the paper. Binocular stimuli, like video, are dominated by local translation, and the exact same biological mechanisms have traditionally been assumed to model both (multiview complex cells). A simple GBM has in fact been applied to binocular 3D inference tasks in the past (eg. 'Stereopsis via Deep Learning', Memisevic, Conrad, 2011).\\n(6) We fixed this in the updated version.\\n(6a): We now describe the figure in more detail in the updated version. (Along the x-axis we show models with varying numbers of factors and mapping units)\\n(6a): We now describe the figure in more detail in the updated version. (Along the x-axis we show models with varying numbers of factors and mapping units)\\n(7a): While the model and experiments we discuss in the paper are very specific and technical, we see the main contribution of the paper in explaining concisely why square-pooling, group sparse coding and topographic feature models learn to group frequency, orientation and position and not phase. While it is well-known that they show this behaviour we show that thinking of squares as representing transformations can explain why. We rewrote the text to make this point clearer.\\n(8) They are DFT bins: We generate the plots by performing a 2d FFT on the learned Fourier filters (note that, since we train on translations in this experiment, we get Fourier components rather than Gabor features). We clarified this in the updated version.\\n(9) We use 'matching' synonymous with 'multiplying'. Indeed, this means that 'matched' filters are those whose product gets fed into a mapping unit (along with other products). We clarified this in the updated version.\\n(10) Yes. We fixed this in the updated version.\"}", "{\"title\": \"review of Feature grouping from spatially constrained multiplicative interaction\", \"review\": [\"The model presented in this paper is an extension of a previous model that extracts features from images, and these featuers are multiplied together to extract motion information (or other relation between two images). The novelty is to connect each feature of one image to several features of other image. This reuses the features. Futher, these connections are made in groups, and the features in the group will learn to have related properties. With overlaping groups one obtains pinwheel patterns observed in visual cortex. This is a different mechanism then previous ones.\", \"Please write the formulas for the full model that you train, not just the encoding. Even though they exist in other papers, they are not so complicated to write them down here.\", \"You say that the pinwheel patterns don't appear in rodents because they don't have a binocular vision. However you haven't actually obtained the pinwheels from binocularity but from video.\", \"The formula (9) is unclear and should be fixed. For example, how come the f index appears only once?\", \"Figure 3: What is index of parameter set? 
In text you talk about different datasizes - where are the results for these?\"]}", "{\"title\": \"review of Feature grouping from spatially constrained multiplicative interaction\", \"review\": \"This paper introduces a group-gated Boltzmann machine for learning the transformations between a pair of images more efficiently than with a standard gated Boltzmann machine. Experiments show the model learns phase invariant complex cells-like units grouped by frequency and orientation. These groups can also be manipulated to include overlapping neighbors in which case the model learns topographic pinwheel layouts of orientation, frequency and phase. The paper also mentions how the model is related to squared-pooling used in other learning methods.\\n\\nPros\\nInteresting idea to add an additional connectivity matrix to the factors to enforce grouping behavior in a gated RBM. This is shown to be beneficial for learning translation invariant groups which are stable for frequency and orientation.\\n\\nGood classification of rotations and scale are reported in Table 1, unfortunately these appear to be on toy, not natural, images. Impressive grouping of complex transformations such as translations and rotations are shown in Figure 3.\\n\\nFigure 2 is a great figure. Clearly shows how a GRBM can represent all forms of frequency and orientation and combine these to represent translations. In general the paper was well written and has good explanatory figures.\\n\\nCons\\nWhile the gabors learned on natural image patches are interesting it is hard to judge them without knowing how large they are. These details seem to be omitted from the paper.\\n\\nIt is not immediately obvious what applications would benefit from this type of model. It also seems like it could be relatively expensive computationally, and there was no mention of timing versus the standard gated Boltzmann machine model.\", \"novelty_and_quality\": \"This extension to gated Boltzmann machines is novel in that it allows grouping of features and increases the modelling power because the model no longer needs multiple feature to do simple translations. The paper was well written overall.\"}", "{\"title\": \"review of Feature grouping from spatially constrained multiplicative interaction\", \"review\": \"This paper proposes a novel generalization of the Gated Boltzmann Machine. Unlike a traditional GBM, this model is constrained in a way that hidden units that are grouped together (groupings defined a priori) can gate each other's connections. The model is shown to produce group structure in the learned representations (topographic feature maps) as well as frequency and orientation consistency of the filters within each group.\\n\\nThis paper is well written, presents a novel learning paradigm and is of interest to the representation learning community, especially those researchers interested in higher-order RBMs and transformation learning.\", \"positive_points_of_the_paper\": [\"Novelty\", \"Readability\", \"Treatment of an area (transformation learning) that is, in my opinion, worthy of more attention in the representation learning community\", \"Makes connections to the 'group sparse coding' literature (where other papers have proposed encouraging the squared responses of grouped filters to be similar)\", \"Makes a good effort to explain the observed phenomena (e.g. 
in discussing the filter responses)\"], \"negative_points_of_the_paper\": [\"Targets somewhat of a 'niche audience'; may be less accessible to the general representation learning community\", \"Presents a lot of qualitative but not quantitative results\", \"Overall, it's a nice paper.\"], \"some_specific_comments\": \"\", \"fig_2\": \"It's difficult to read/understand the frequency scale (left, bottom); it seems that frequency has been discretized; what do these bins represent, and how are they constructed?\\n\\nIn section 2.1, could you be more explicit about what you mean by 'matching' the input filters (and the output filters). I assume the matching is referring to the connectivity by connecting filters to mapping units? Matching comes up again in Section 3, so it would help to clarify this early on.\", \"check_equation_8\": \"what happened to C - should it not show up there?\"}" ] }
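The reviews closing the record above describe the proposed mechanism only in prose: a factored gated Boltzmann machine in which an extra connectivity matrix groups several filter products before they reach a mapping unit, which is what links the model to square pooling. The snippet below is a minimal, hypothetical sketch of such a group-gated encoder, not the authors' code; the factor matrices `Wx` and `Wy`, the nonnegative grouping matrix `P`, and all sizes are made-up stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

n_pix, n_factors, n_maps = 64, 32, 8   # toy sizes, chosen arbitrarily

# Factor loadings for the two images, and a fixed nonnegative grouping
# (connectivity) matrix that pools several filter products per mapping unit.
Wx = 0.1 * rng.standard_normal((n_pix, n_factors))
Wy = 0.1 * rng.standard_normal((n_pix, n_factors))
P = np.abs(rng.standard_normal((n_factors, n_maps)))

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def mapping_unit_activations(x, y, b=0.0):
    """Encoder of a factored, group-gated model: filter responses of the two
    images are multiplied factor-wise, the products are pooled through P,
    and a sigmoid gives the mapping-unit probabilities."""
    products = (Wx.T @ x) * (Wy.T @ y)    # one product per factor
    return sigmoid(P.T @ products + b)    # grouped ("pooled") evidence

# Example: activations for a random image pair.
x = rng.standard_normal(n_pix)
y = rng.standard_normal(n_pix)
print(mapping_unit_activations(x, y).round(3))
```

With `P` equal to the identity this reduces to an ordinary factored gated model; making `P` block-structured, or letting neighbouring groups overlap, is the grouping behaviour the reviews discuss.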
TT0bFo9VZpFWg
Big Neural Networks Waste Capacity
[ "Yann Dauphin", "Yoshua Bengio" ]
This article exposes the failure of some big neural networks to leverage added capacity to reduce underfitting. Past research suggests diminishing returns when increasing the size of neural networks. Our experiments on ImageNet LSVRC-2010 show that this may be due to the fact that bigger networks underfit the training objective, sometimes performing worse on the training set than smaller networks. This suggests that the optimization method - first order gradient descent - fails in this regime. Directly attacking this problem, either through the optimization method or the choice of parametrization, may make it possible to improve the generalization error on large datasets, for which a large capacity is required.
[ "big neural networks", "optimization", "capacity", "article", "failure", "added capacity", "past research suggest", "returns", "size" ]
conferenceOral-iclr2013-workshop
https://openreview.net/pdf?id=TT0bFo9VZpFWg
https://openreview.net/forum?id=TT0bFo9VZpFWg
ICLR.cc/2013/conference
2013
{ "note_id": [ "ChpzCSZ9zqCTR", "MvRrJo2NhwMOE", "PPZdA2YqSgAq6", "5w24FePB4ywro", "CqF6fhZ9QLCrY", "JqnQqLEIc6q5e", "IyZiWpNTixIVv", "wjvpl_b23glfA", "URyDlbBNoEUIn" ], "note_type": [ "review", "review", "review", "review", "comment", "comment", "comment", "comment", "comment" ], "note_created": [ 1361967300000, 1362019740000, 1362402480000, 1362373200000, 1363311720000, 1363644660000, 1363311660000, 1363381980000, 1363311600000 ], "note_signatures": [ [ "anonymous reviewer 9741" ], [ "anonymous reviewer b2da" ], [ "George Dahl" ], [ "Andrew Maas" ], [ "Yann Dauphin" ], [ "Yann Dauphin" ], [ "Yann Dauphin" ], [ "Marc Shivers" ], [ "Yann Dauphin" ] ], "structured_content_str": [ "{\"title\": \"review of Big Neural Networks Waste Capacity\", \"review\": \"This papers show the effects of under-fitting in a neural network as the size of a single neural network layer increases. The overall model is composed of SIFT extraction, k-mean, and this single hidden layer neural network. The paper suggest that this under-fitting problem is due to optimization problems with stochastic gradient descent.\\n\\nPros\\nFor a certain configurations of network architecture the paper shows under-fitting remains as the number of hidden units increases.\\n\\nCons\", \"this_paper_makes_many_big_assumptions\": \"1) that the training set of millions of images is labelled correctly.\\n2) training on sift features followed by kmeans retains enough information from the images in the training set to allow for proper learning to proceed.\\n3) a single hidden layer network is capable of completely fitting (or over-fitting) Imagenet.\\n\\nWhile the idea seems novel, it does appear to be a little rushed. Perhaps more experimentation with larger models and directly on the input image would reveal more.\"}", "{\"title\": \"review of Big Neural Networks Waste Capacity\", \"review\": \"The net gets bigger, yet keeps underfitting the training set. Authors suspect that gradient descent is the culprit. An interesting study!\"}", "{\"review\": \"The authors speculate that the inability of additional units to reduce\\nthe training error beyond a certain point in their experiments might\\nbe because 'networks with more capacity have more local minima.' How\\ncan this claim about local minima be reconciled with theoretical\\nasymptotic results that show that, for certain types of neural\\nnetworks, in the limit of infinite hidden units, the training problem\\nbecomes convex?\\n\\nAs far as I can tell from the description of the experiments, they\\nused constant learning rates and no momentum. If getting the best\\ntraining error is the goal, in my experience I have found it crucial\\nto use momentum, especially if I am not shrinking the learning\\nrate. The experimental results would be far more convincing to me if\\nthey used momentum or at least tried changing the learning rate during\\ntraining.\\n\\nThe learning curves in figure 3 show that larger nets reach a given\\ntraining error with drastically fewer updates than smaller nets. In\\nwhat sense is this an optimization failure? Without a more precise\\nnotion of the capacity of a net and how it changes as hidden units are\\nadded, the results are very hard to interpret. If, for some notion of\\ncapacity, the increase in capacity from adding a hidden unit decreases\\nas more hidden units are added, then we would also expect to see\\nsimilar results, even without any optimization failure. 
How many\\nhidden units are required to guarantee that there exists a setting of\\nthe weights with zero training error? Why should we expect a net with\\n15,000 units to be capable of getting arbitrarily low training error\\non this dataset? If instead of sigmoid units the net used radial basis\\nfunctions, then with a hidden unit for each of the 1.2 million\\ntraining cases I would expect the net to be capable of zero\\nerror. Since the data are not pure random noise images, surely fewer\\nunits will be required for zero error, but how many approximately? \\nWithout some evidence that there exists a setting of the weights that\\nachieve lower error than actually obtained, we can't conclude that the\\noptimization procedure has failed.\"}", "{\"review\": \"Interesting topic. Another potential explanation for the diminishing return is the already good performance of networks with 5k hidden units. It could be that last bit of training performance requires fitting an especially difficult / nonlinear function and thus even 15k units in a single layer MLP can't do it. On such a large training set any reduction is likely statistically significant though, so it might help to zoom in on the plot or give error rates for the 5k and larger networks. Right now I think it's unclear whether the training error asymptotes because that's the best nearly any learning algorithm could do or because the single hidden layer is wasting capacity. More comparisons or analysis can help eliminate the alternate explanation.\"}", "{\"reply\": \"Thanks to your comment, we have clarified our argument. The main point is not that the training error does not fall beyond a certain point, the main point is that there are *quickly diminishing returns for added number of hidden units* to the point where adding capacity is almost useless. Since measuring VC-dimension is impractical (and not practically relevant here, because we really care about a notion of effective capacity taking into\\naccount the limitation of the optimization algorithm), the notion of 'capacity' that we care about is basically measured by the number of training examples we are able to nail with a given network size and a given budget of training iterations. So in terms of the paper, you have to look at Figure 2, not Figure 1. We have redone Figure 2 to clarify that after 5000 examples, each hidden unit brings less benefit than if it was hardcoded to handle one of the training errors. A fading ROI on the *training error* means that it's harder and harder to make use of the added hidden units, i.e., that the extra capacity brought in by each added hidden unit *decreases* as we consider larger nets. We hypothesize this low ROI on the training error is why people have observed low ROI on the *test* set. That is why we suggest it is worthwhile to investigate methods that will increase the ROI from larger models.\\n\\nWe are not saying that the optimization issue is necessarily due to local minima. We say it could be local minima or ill-conditioning (the two main types of optimization difficulties one can imagine for neural nets).\\n\\nRegarding the results with infinite number of hidden units and convex training, there is no contradiction: with an infinite number of hidden units (or equivalently, one per training example), you only need to train the output weights, and that is convex. Here, the number of hidden units is still smaller than the number of training examples. 
We believe that the optimization difficulty is with training the lower layers.\\n\\nThe learning rate is decreased by 5% each time the training error goes up after an epoch.\\n\\nWe are planning in a second phase of this work to experiment with a wider array of training techniques and architectures to compare their ROI curves, including the use of momentum.\"}", "{\"reply\": \"Thanks for your suggestion. We didn't plot the cross-entropy because it is harder to interpret, but it might be interesting in comparison with the training error curve.\"}", "{\"reply\": \"Interesting point, the asymptote in Figure 1 could be explained by the optimization problem becoming more difficult. However, this does not conflict with our argument. We have clarified this in the paper. Our argument relies on Figure 2, which shows the return on investment for adding units. We see that the ROI quickly decreases; even when going from 2000 to 5000 units it decreases by an order of magnitude. If the optimization problem did not get harder, we would have expected the ROI to be close to constant, but it seems the optimization becomes harder as more units are added. What's more, beyond 5000 units the ROI falls below the line of 1 error reduced per unit. If there was no optimization problem, the ROI should at least be 1 because the additional unit can be used as a template matcher for one of the training errors.\"}", "{\"reply\": \"Have you looked at the decrease in the cross-entropy optimization objective, rather than training error, as a function of number of hidden units? It would be interesting to see a version of Figure 2 that compared the decrease in cross-entropy as you add hidden units with the decrease you would get if your additional hidden units memorized the previously most costly mislabellings.\"}", "{\"reply\": \"The 3 assumptions can be thought of as 3 conditions that are necessary for the model to be able to fit ImageNet. In traditional experiments this would be true; however, in this case we are only monitoring *training* error. To learn the training set, only one assumption is necessary: no training image has an exact duplicate with a different label. In this case, the model can at least learn a KNN-like function that gives 0 error.\\n\\nAs for more experiments, we are planning experiments starting from the raw images.\"}" ] }
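Two quantities in the exchange above are described only in words: the "return on investment" behind Figure 2 (training errors removed per additional hidden unit) and the schedule that shrinks the learning rate by 5% whenever the training error rises after an epoch. The sketch below shows one plausible reading of both; the unit counts and error counts are invented for illustration and are not the paper's numbers.

```python
# Hypothetical (hidden units, training errors) pairs, only to show the computation.
sizes = [1000, 2000, 5000, 10000, 15000]
errors = [400000, 330000, 290000, 285000, 283000]

# ROI = training errors removed per additional hidden unit, between consecutive sizes.
roi = [(errors[i] - errors[i + 1]) / (sizes[i + 1] - sizes[i])
       for i in range(len(sizes) - 1)]
print(roi)  # values below 1.0 mean an extra unit fixes less than one training case

# Learning-rate schedule described in the authors' reply: shrink by 5%
# whenever the training error goes up after an epoch, otherwise keep it.
def update_learning_rate(lr, prev_epoch_error, curr_epoch_error, shrink=0.95):
    return lr * shrink if curr_epoch_error > prev_epoch_error else lr
```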
g6Jl6J3aMs6a7
Recurrent Online Clustering as a Spatio-Temporal Feature Extractor in DeSTIN
[ "Steven R. Young", "Itamar Arel" ]
This paper presents a basic enhancement to the DeSTIN deep learning architecture by replacing the explicitly calculated transition tables that are used to capture temporal features with a simpler, more scalable mechanism. This mechanism uses feedback of state information to cluster over a space comprised of both the spatial input and the current state. The resulting architecture achieves state-of-the-art results on the MNIST classification benchmark.
[ "feature extractor", "recurrent online clustering", "destin", "basic enhancement", "transition tables", "temporal features", "simpler", "scalable mechanism" ]
reject
https://openreview.net/pdf?id=g6Jl6J3aMs6a7
https://openreview.net/forum?id=g6Jl6J3aMs6a7
ICLR.cc/2013/conference
2013
{ "note_id": [ "GGdathbFl15ug", "8BGL8F0WLpBcE" ], "note_type": [ "review", "review" ], "note_created": [ 1362391440000, 1362163920000 ], "note_signatures": [ [ "anonymous reviewer 675f" ], [ "anonymous reviewer 6b68" ] ], "structured_content_str": [ "{\"title\": \"review of Recurrent Online Clustering as a Spatio-Temporal Feature Extractor in\\n DeSTIN\", \"review\": \"The paper presents an extension to the author's prior 'DeSTIN' framework for spatio-temporal clustering. The lookup table that was previously used for state transitions is replaced by a feedback, output-to-input loop that somewhat resembles a recurrent neural network. However so little information is provided about the original system that it is difficult to tell if this is an advantage or not. The paper would be a lot clearer and more self-contained if it described and motivated DeSTIN before introducing the new algorithm.\\n\\nThe method is first applied to binary classification with toy sequences. The sequences are not defined, except that the two classes differ only in the first element - making it a memory recall task. The results suggest that the architecture has difficulty retaining information for long periods, with accuracy close to random guessing after 30 timesteps. They also seem to show that the number of centroids controls the underfitting/overfitting of the algorithm.\\n\\nThe paper claims 'state-of-the-art-results on the MNIST classification benchmark'; but the recorded error rate (1.29%) is a long way from the current benchmark (0.23%) - see http://yann.lecun.com/exdb/mnist/. Only 15,000 of the training cases were used, which somewhat mitigates the results. However the statement in the abstract should be changed. The experimental details are very scarce, and I doubt they could be recreated by other researchers.\"}", "{\"title\": \"review of Recurrent Online Clustering as a Spatio-Temporal Feature Extractor in\\n DeSTIN\", \"review\": \"Improves the DeSTIN architecture by the same authors.\", \"they_write_on_mnist\": \"A classification accuracy of 98.71% was achieved which is comparable to results using the first-generation DeSTIN architecture [1] and to results achieved with other state-of-the-art methods [4, 5, 6].\\n\\nHowever, the error rate of the state-of-the-art method on MNIST is actually five times better: 99.77% (Ciresan et al, CVPR 2012). Please discuss this.\"}" ] }
eQWJec0ursynH
Barnes-Hut-SNE
[ "Laurens van der Maaten" ]
The paper presents an O(N log N)-implementation of t-SNE -- an embedding technique that is commonly used for the visualization of high-dimensional data in scatter plots and that normally runs in O(N^2). The new implementation uses vantage-point trees to compute sparse pairwise similarities between the input data objects, and it uses a variant of the Barnes-Hut algorithm - an algorithm used by astronomers to perform N-body simulations - to approximate the forces between the corresponding points in the embedding. Our experiments show that the new algorithm, called Barnes-Hut-SNE, leads to substantial computational advantages over standard t-SNE, and that it makes it possible to learn embeddings of data sets with millions of objects.
[ "algorithm", "n log n", "embedding technique", "visualization", "data", "scatter plots", "new implementation", "trees", "sparse pairwise similarities", "input data objects" ]
conferenceOral-iclr2013-conference
https://openreview.net/pdf?id=eQWJec0ursynH
https://openreview.net/forum?id=eQWJec0ursynH
ICLR.cc/2013/conference
2013
{ "note_id": [ "24bs4th0sfgwE", "DyHSDHfKmbDPM", "Dkj3DFf4GZJPh", "TTxAqxZdhgIV0", "2VfI2cAZSF2P0", "pA91py2CW8AQg", "Hy8wy4X01CHmD", "AZcnMdQBqGZS4", "H3-iUVuyZzUgh" ], "note_type": [ "review", "review", "review", "review", "review", "review", "review", "review", "review" ], "note_created": [ 1362833520000, 1362421080000, 1362177000000, 1362330660000, 1362192420000, 1362758580000, 1363113120000, 1362833640000, 1365114600000 ], "note_signatures": [ [ "anonymous reviewer c262" ], [ "Laurens van der Maaten" ], [ "anonymous reviewer d9db" ], [ "Laurens van der Maaten" ], [ "anonymous reviewer 7db1" ], [ "Laurens van der Maaten" ], [ "Laurens van der Maaten" ], [ "Alex Bronstein" ], [ "Zhirong Yang" ] ], "structured_content_str": [ "{\"title\": \"review of Barnes-Hut-SNE\", \"review\": \"The paper addresses the problem of low-dimensional data embedding for visualization purposes via stochastic neighbor embedding, in which Euclidean dissimilarities in the data space are modulated by the Gaussian kernel, and a configuration of points in the low-dimensional embedding space is found such that the new dissimilarities in the embedding space obtained via the Student-t kernel match the original ones as closely as possible in the sense of the KL divergence. While the original algorithm is O(n^2), the authors propose to use a fast multipole technique to reduce complexity to O(nlogn). The idea is original and the reported results are very convincing. I think it is probably one of the first instances in which an FMM technique is used to accelerate local embeddings.\", \"pros\": \"1.\\tThe idea is simple and is relatively easy to implement. The authors also provide code.\\n2.\\tThe experimental evaluation is large-scale, and the results are very convincing.\", \"cons\": \"1.\\tNo controllable tradeoff between the embedding error and acceleration.\\n2.\\tIn its current setting, the proposed approach is limited to local similarities only. Can it be extended to other settings in which global similarities are at least as important as the local ones? In other words, is it possible to apply a similar scheme for MDS-type global embedding algorithms?\"}", "{\"review\": \"I have experimented with dual-tree variants of my algorithm (which required only trivial changes in the existing code), experimenting with both quadtrees and kd-trees as the underlying tree structures. Perhaps surprisingly, the dual-tree algorithm has approximately the same accuracy-speed trade-off as the Barnes-Hut algorithm (even when redundant dual-tree computations are pruned) irrespective of what tree is used.\\n\\nI think the main reason for this result is that after computing an interaction between two cells, one still needs to figure out to which points this interaction needs to be added (i.e. which points are in the cell). This set of points can either be obtained using a full search of the tree corresponding to the cell, or by storing a list of children in each node during tree construction. Both these approaches are quite costly, and lead the computational advantages of the dual-tree algorithm to evaporate. (The dual-tree algorithm does provide a very cheap way to estimate the value of the t-SNE cost function though.)\\n\\nI will add these results in the final paper.\"}", "{\"title\": \"review of Barnes-Hut-SNE\", \"review\": \"Stochastic neighbour embedding (SNE) is a sound, probabilistic method for dimensionality reduction. One of its limitations is that its complexity is O(N^2), where N is the, typically large, number of data points. 
To surmount this limitation, the this paper proposes computational methods to reduce the computational cost to O(NlogN), while only incurring an O(N) memory cost.\\n\\nIn the SNE variant discussed in this paper, the kernel in high dimensions is Gaussian, while the similarity in low dimensions is governed by a t-distribution. The proposed method consists of two components. First, the exponential decay of Gaussian measures is used to carry out truncation and construct a vantage-point tree for the data in high dimensions. This enables the authors to carry our nearest neighbour search in O(NlogN). The second component addreses the efficient computation of the gradient of SNE. Here, the paper proposes a 2D Barnes-hut algorithm to approximate the gradient in O(NlogN) steps. The Barnes-Hut algorithm is a well known method in N-body simulation, but it has not been used in this context previously to the best of my knowledge.\\n\\nThe paper is very well written. The contribution is correct and sound. Not surprisingly, the experiments show great improvements in computational performance, thus allowing for a good dimensionality reduction technique to become more broadly applicable.\\n\\nThe author ought to be commended for making the code available. He should also be commended for making the limitations of the approach very clear in the concluding remarks, namely that the current version is only for 2D-embeddings and that the method does not offer a way of controlling the error (e.g. via error bounds).\", \"minor_typo_in_page_2_last_line\": \"to slow should be too slow.\\n\\nI believe the paper makes a good contribution. However, it has one crucial shortcoming that must be addressed by the author. Specifically, there is a great body of literature on N-body methods for machine learning problems that the author does not seem to be aware of. I think this work should be placed in this context and that appropriate references and comparisons (for which I will point the author to online software) should be included in the final form in this paper. The relevant work includes:\\n\\n1. All the dual-tree approximations developed by Alex Gray at http://www.fast-lab.org/\\nIn particular note that his methods apply to nearest neighbour search and the type of kernel density estimates required in the computation of the gradient. Dual trees also allow for the use of error bounds. For publications, see e.g.\\nGray, Alexander G., and Andrew W. Moore. 'N-Body'problems in statistical learning.' Advances in neural information processing systems (2001): 521-527.\\nLiu, Ting, Andrew W. Moore, Alexander Gray, and Ke Yang. 'An investigation of practical approximate nearest neighbor algorithms.' Advances in neural information processing systems 17 (2004): 825-832.\\n\\n2. The multipole methods developed in Ramani Duraiswami lab, including:\\nYang, Changjiang, Ramani Duraiswami, Nail A. Gumerov, and Larry Davis. 'Improved fast gauss transform and efficient kernel density estimation.' In Computer Vision, 2003. Proceedings. Ninth IEEE International Conference on, pp. 664-671. IEEE, 2003.\\n\\n3. The algorithms for fast kernel density estimates from Nando de Freitas' lab. See e.g.,\\nMahdaviani, Maryam, Nando de Freitas, Bob Fraser, and Firas Hamze. 'Fast computational methods for visually guided robots.' In IEEE International Conference on Robotics and Automation, vol. 1, p. 138. IEEE; 1999, 2005.\\nLang, Dustin, Mike Klaas, and Nando de Freitas. 'Empirical testing of fast kernel density estimation algorithms.' 
UBC Technical repor 2 (2005).\\nOne of his papers does, in fact, discuss multipole methods for SNE and presents results using the fast Gauss transform:\\nDe Freitas, Nando, Yang Wang, Maryam Mahdaviani, and Dustin Lang. 'Fast Krylov methods for N-body learning.' Advances in neural information processing systems 18 (2006): 251.\", \"the_code_is_available_here\": \"http://www.cs.ubc.ca/~awll/nbody_methods.html\\n\\n4. The cover tree for nearest neighbour search, introduced in:\\nBeygelzimer, Alina, Sham Kakade, and John Langford. 'Cover trees for nearest neighbor.' In MACHINE LEARNING-INTERNATIONAL WORKSHOP THEN CONFERENCE-, vol. 23, p. 97. 2006.\\nFor code, see the Wikipedia entry: http://en.wikipedia.org/wiki/Cover_tree\\n\\n5. FLANN - Fast Library for Approximate Nearest Neighbors developed by Marius Muja. This is a powerful library of methods including randomized kd-trees and k-means methods for fast nearest neighbour search. It is extremely popular in computer vision. For code and more info see: http://www.cs.ubc.ca/~mariusm/index.php/FLANN/FLANN\\nYou could use this code easily to replace the nearest neighbour search and compare performance.\\n\\nFinally, there is something very interesting in this paper that is worth studying further. Assume we use an N-body method in the computation of the gradient, which has error bounds. Then, it seems to stand to reason that one ought to use loose bounds in the beginning of the gradient iterations and increase the precision as the algorithm progresses. This could allow for further improvements in computation. Moreover, using theoretical tools for studying the convergence of optimization algorithms, one could possibly address the theoretical analysis of this algorithm.\"}", "{\"review\": \"Thanks a bunch for these insightful reviews and for the useful pointers to related work (some of which I was not aware of)!\\n\\nIn preliminary experiments, I compared locality-sensitive hashing and vantage-point trees in the initial nearest-neighbor (in the high-dimensional space). I found vantage-point trees to perform considerably better, which is why I used them in the final implementation. The strong performance I obtained when using metric trees appears to be in line with the results presented by Liu, Moore, Gray & Yang (2004). I agree with the first reviewer that there are many other (approximate) nearest-neighbor algorithms that could be used here instead. I will clarify this in the paper, and include references to relevant related work.\\n\\nThe work by Nando de Freitas's lab on n-body simulations is very interesting indeed. I don't think it can readily be applied to t-SNE though, as it appears to heavily rely on the (improved) fast Gauss transform, i.e. on the assumption that Gaussian kernels are used. To the best of my knowledge, there is no existing work that uses fast multipole methods to evaluate Student-t kernels (the fast Gauss transform is an example of a fast multipole method), so extension of this work to t-SNE appears non-trivial. It is also unclear whether fast multipole methods would actually outperform Barnes-Hut in practice, because multipole methods tend to have constants that are much worse. Having said that, this is indeed a very interesting direction for future work! I will clarify this in the paper, and make sure to include the relevant references.\\n\\nI was not aware of the work by Alex Gray's lab on dual-tree algorithms for n-body simulations; indeed, this work seems readily applicable to t-SNE. 
I'm presently coding up a dual-tree version of my algorithm, and will try to include empirical evaluations with the dual-tree approach in the final version of the paper. I hope to post an updated version of the paper with these results on Arxiv in a week or two.\\n\\nI agree with the first reviewer that it is interesting to study if the accuracy-speed trade-off can be adapted during the optimization, but I am not sure that I agree that looser bounds should be used in the beginning of the optimization. In fact, the first 100 or so iterations are essential in identifying the global structure of the data --- doing a poor job in those iterations often implies getting stuck in poor local optima. (I guess one can think of it as errors propagating over time in the optimization.) So an optimal strategy may actually be the opposite of what the reviewer suggests: use tight bounds in the early stages of the optimization and looser bounds later on. It's certainly an interesting direction for future work!\"}", "{\"title\": \"review of Barnes-Hut-SNE\", \"review\": \"The submitted paper proposes a more efficient implementation of the Student-t distributed version of SNE. t-SNE is O(n^2), and the proposed implementation is O(nlogn). This offers a substantial improvement in the efficiency, such that very large datasets may be embedded. Furthermore, the speed increase is obtained through 2 key approximations without incurring a penalty on accuracy of the embedding.\\n\\nThere are 2 approximations that are described. First, the input space nearest neighbors are approximated by building a vantage-point tree. Second, the approximation of the gradient of KL divergence is made by splitting the gradient into attractive and repulsive components and applying a Barnes-Hut algorithm to estimate the repulsive component. The Barnes Hut algorithm uses a hierarchical estimate of force. A quad-tree provides an efficient, hierarchical spatial representation. \\n\\nThe submission is well-written and seems to be accurate. The results validate the claim: the error of the embedding does not increase, and the computation time is decreased by an order of magnitude. The approach is tested on MNIST, NORB, TIMIT, and CIFAR. Overall, the contribution of the paper is fairly small, but the benefit is real, given the popularity of SNE. In addition, the topic is relevant for the ICLR audience.\"}", "{\"review\": \"I updated the paper according the reviewers' comments, and included results with a dual-tree implementation of t-SNE in the appendix. The updated paper should appear on Arxiv soon.\"}", "{\"review\": \"In typical applications of Barnes-Hut (like t-SNE), the force nearly vanishes in the far field, which allows for averaging those far-field forces without losing much accuracy.\\n\\nIn algorithms that minimize, e.g., the squared error between two sets of pairwise distances, I guess you could do the opposite. The force exerted on a point is then dominated by interactions with distant points, so you should be able to average over the interactions with nearby points without losing much accuracy. However, it's questionable whether such an approach would be as efficient because, in general, a point has far fewer points in its near field than in its far field (i.e. far fewer points for which we can average without losing accuracy). 
\\n\\nHaving said that, I have never tried, so I could be wrong.\"}", "{\"review\": \"Laurens, have you thought about using similar ideas for embedding algorithms that also exploit global similarities (like multidimensional scaling)? I think in many types of data analysis, this can be extremely important.\"}", "{\"review\": \"Great work, congratulations! It seems we and you have simultaneously found essentially the same solution. Our paper and software are here:\\n\\nZhirong Yang, Jaakko Peltonen, Samuel Kaski. Scalable Optimization of Neighbor Embedding for Visualization. Accepted to ICML2013.\", \"preprint_and_software\": \"http://research.ics.aalto.fi/mi/software/ne/\\n\\n\\nBest regards,\\nZhirong, Jaakko, Samuel\"}" ] }
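The record above explains Barnes-Hut-SNE in prose: sparse input-space similarities come from a vantage-point tree, and the repulsive part of the t-SNE gradient is approximated by summarizing a whole quadtree cell by its center of mass whenever the cell is small relative to its distance from the point. The snippet below is an illustrative sketch of that acceptance test for a single 2-D embedding point; it is not the released code, the quadtree is flattened into a list of (center of mass, point count, cell size) summaries, and the global normalization over all points is omitted.

```python
import numpy as np

def approx_repulsion(y_i, cells, theta=0.5):
    """Barnes-Hut-style approximation of the repulsive t-SNE force on y_i.

    Each entry of `cells` is (center_of_mass, n_points, cell_size). A cell is
    accepted as a single summary point when cell_size / distance < theta; in
    the real algorithm, rejected cells are recursed into instead of skipped.
    Returns the unnormalized force and this point's contribution to the
    normalization constant Z (the full method divides by the global Z).
    """
    force = np.zeros(2)
    z_part = 0.0
    for com, n_points, cell_size in cells:
        diff = y_i - com
        dist = np.linalg.norm(diff) + 1e-12
        if cell_size / dist < theta:        # far-field cell: use its summary
            q = 1.0 / (1.0 + dist ** 2)     # unnormalized Student-t similarity
            z_part += n_points * q
            force += n_points * q * q * diff
    return force, z_part

# Toy usage with three hand-made cell summaries.
cells = [(np.array([3.0, 0.0]), 40, 0.5),
         (np.array([0.0, 5.0]), 25, 1.0),
         (np.array([-4.0, -4.0]), 60, 0.8)]
print(approx_repulsion(np.array([0.0, 0.0]), cells))
```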
fm5jfAwPbOfP6
Linear-Nonlinear-Poisson Neuron Networks Perform Bayesian Inference On Boltzmann Machines
[ "Yuanlong Shao" ]
One conjecture in both deep learning and the classical connectionist viewpoint is that the biological brain implements certain kinds of deep networks as its back-end. However, to our knowledge, a detailed correspondence has not yet been set up, which is important if we want to bridge between neuroscience and machine learning. Recent research has emphasized the biological plausibility of the Linear-Nonlinear-Poisson (LNP) neuron model. We show that with neurally plausible settings, the whole network is capable of representing any Boltzmann machine and performing a semi-stochastic Bayesian inference algorithm lying between Gibbs sampling and variational inference.
[ "bayesian inference", "neuron networks", "boltzmann machines", "conjecture", "deep learning", "classical connectionist viewpoint", "deep networks", "knowledge", "detailed correspondence" ]
conferencePoster-iclr2013-workshop
https://openreview.net/pdf?id=fm5jfAwPbOfP6
https://openreview.net/forum?id=fm5jfAwPbOfP6
ICLR.cc/2013/conference
2013
{ "note_id": [ "QQ1JEKYFTIQhj", "B4qSE6NM3ZEOV", "1JfiMxWFQy15Z", "88txIZ2gY7lJh" ], "note_type": [ "review", "review", "review", "review" ], "note_created": [ 1362262200000, 1362383640000, 1361988540000, 1362392700000 ], "note_signatures": [ [ "anonymous reviewer 4490" ], [ "Yuanlong Shao" ], [ "anonymous reviewer caa8" ], [ "anonymous reviewer ef61" ] ], "structured_content_str": [ "{\"title\": \"review of Linear-Nonlinear-Poisson Neuron Networks Perform Bayesian Inference On Boltzmann Machines\", \"review\": \"This paper proposes a scheme for utilizing LNP model neurons to perform inference in Boltzmann Machines. The contribution of the work is to map a Boltzmann Machine network onto a set of LNP model units and to demonstrate inference in this model.\\n\\nThe idea of using neural spiking models to represent probabilistic inference is not new (see refs. at end). The primary contribution of this work is to take a learned deep Boltzmann machine from the literature, and to implement this network using LNP neurons, with the necessary modifications. Therefore, the contribution is specific to the deep Boltzmann machine architecture. The existing work in the literature often takes a different approach: taking realistic neural models and asking how these models can represent variables probabilistically.\", \"pros\": \"Developing mappings between machine learning algorithms and neural responses is an important direction.\\nTo my knowledge, the implementation of a deep-BM with spiking neurons is novel.\", \"cons\": \"The clarity of the text and presentation of the mathematics needs improvement.\\nThe resulting model suffers from some non-biological phenomenology.\\nThe empirical results are not very compelling.\\nI would liked to have seen a comparison to the existing approaches for using spiking neurons to implement inference. Particularly: [2-5]. Is there not a mapping from those models to the deep BM? Why is the proposed mapping necessary, or what are the limitations of those existing proposals for a deep BM?\", \"other_comments\": \"The paper provides a lengthy introduction to LNP and inference. I would encourage the author to justify the various details that are introduced, and those that are not directly relevant for the proposed network, should be left out. In general, the exposition needs clarification.\\n\\nThe proposed network seems like a logical series of steps, but the end result leads to a biologically implausible network, (at least when considering known properties of cortex). I think a broad approach might be warranted for this problem. For example, starting from the LNP model and using this model as an element in a Boltzmann machine.\", \"a_related_note\": \"Isn't it just more plausible to estimate a deep-network with positive only weights? (to deal with Dale's law) There is likely some work to be done there, but it seems this direction wouldn't require the paired neurons you have here. Or a network with realistic excitatory-inhibitory ratio?\\n\\nWhy not start with a Poisson-unit Boltzmann machine, and examine its properties? see (Welling et al. 2005)\\n\\nI found the empirical evaluation to be weak. I don't understand how running the network is a demonstration of correct inference. Wouldn't we expect each of these networks to diverge and sample different parts of the posterior?\\n\\nThe statistics in Figure 5 need more justification. 
I did not understand why these are relevant, or what degree of variability should be acceptable.\\n\\nThere are, of course, a variety of biological issues that seem to be incongruent with the proposal.\", \"in_cortex_the_distribution_of_excitatory_to_inhibitory_neurons_is_4\": \"1. The current proposal seems to require 1:1.\\nThe pairing of neuron weights seems unlikely, but maybe this could be solved through learning?\\nWhat about mean firing rates?, are these consistent between model and cortical responses?\\n\\nThe title is a little misleading. I might suggest something more like:\\nNetworks of LNP neurons are capable of performing Bayesian inference on Boltzmann machines.\\n\\nThe work of Shi and Griffiths 2009 seems highly relevant, and addresses some of the questions posed by the author.\\n\\nNote that Dale's law is not generally applicable, but I am not sure about any refuttaion in cortex, which I assume is where you would imagine the deep network. (see co-transmission)\\n\\n[1] Welling, M., Rosen-Zvi, M., and Hinton, G. E. (2005). Exponential family harmoniums with an application to information retrieval. Advances in Neural Information Processing Systems 17, pages 1481-1488. MIT Press, Cambridge, MA.\\n\\n[2] Shi, L., & Griffiths, T. L. (2009). Neural implementation of hierarchical Bayesian inference by importance sampling. Advances in Neural Information Processing Systems 22\\n\\n[3] Ma WJ, Beck JM, Pouget A (2008) Spiking networks for Bayesian inference and choice. Current Opinion in Neurobiology 18, 217-22.\\n\\n[4] Pecevski D, Buesing L, Maass W (2011) Probabilistic Inference in General Graphical Models through Sampling in Stochastic Networks of Spiking Neurons. PLoS Comput Biol 7(12): e1002294\\n\\n[5] J\\u00f3zsef Fiser, Pietro Berkes, Gerg\\u0151 Orb\\u00e1n, M\\u00e1t\\u00e9 Lengyel. Statistically optimal perception and learning: from behavior to neural representations', Trends Cogn Sci. 2010. 14(3):119-130\"}", "{\"review\": [\"Thank you very much for the valuable reviews and references! I learned quite a lot from reading the suggested papers.\", \"--> For Reviewer caa8:\", \"Regarding the question raised in the end of your review, I think a somewhat related question is why neurons use spikes and whether we shall follow that in our computational model. I was previously approaching this question by exploring whether the combined semi-stochastic algorithm does something similar to simulated annealing, resulting in better local optima estimated in variational inference. But I failed to show this kind of superiority. In all the experiments I did, pure variational inference performs better than semi-stochastic inference in the context of classification with DBM on MNIST (probably because the DBM is well learned and the posteriors are well shaped with only one significant mode, also see the next paragraph).\", \"Another possibility of answering this is that, according to neural coding literature, such as works from Aurel Lazar, spikes are efficient ways of encoding time-varying real-valued functions if they are band limited (smooth in some sense). So neurons may be constrained by the energy they can use and have to choose the spiking approach. If that is the case, the 'randomness' of LNP is not that important, what's important of spikes are their property of reconstructing the function. And indeed more realistic neuron models such as Leaky Integrate-and-Fire (LIF) and Hodgkin-Huxley (HH) are not random (Chapter 5 of <6> provide a review about randomness vs. 
chaotic property of neurons). Are the spikes generated only to meet the reconstruction requirement? I think this worth further investigation. What I do found after submitting this paper is that, in the classification experiments I just mentioned, LIF networks work much better than LNP networks because the converged spiking pattern in LIF is periodic and the activation we get from convolving the recent spiking history is more stable, but if I replace the pseudo-random number generator in LNP by a quasi-random number generator (an effort to get more evenly distributed random samples), LNP and LIF behaves similarly. Another finding is that, in my recent GPU-cluster implementation of stochastic networks, the only way I can balance the data transmission with the computation is by transmitting packed binary spikes among computation nodes. So maybe LIF and HH are merely certain kinds of quasi-random generators used to do variational inference in an economic way.\", \"--> For Reviewer 4490\", \"I think the Poisson-unit BM direction is interesting, and I will definitely explore this in the future. Thanks for the review. In the supplemental material, I have a detailed justification about in what sense the discrete time Bernoulli model can be considered as approximations to the continuous time Poisson model. So for this paper, I think the foundation is OK.\", \"I agree to your comments that this paper is specific to the deep Boltzmann machine architecture. I didn't rule out the possibility that neurons can represent other models. But as long as they can represent Boltzmann machines with hidden variables, then the representation power is promised. As a compact universal approximator, even if behavior experiments reveals different types of probability models, they are not necessarily conflicting and may still be implemented by DBM. In addition, what's in my paper is extendable to high-order Boltzmann machines, as long as convolution with D function in section 2 of my paper can be eliminated (which I already validated on LIF and will put in my future publications).\", \"As to whether it is enough to have positive weights only, I'm not sure. If the weights are symmetric, then the question is whether a Boltzmann machine with positive weights only is still universal for knowledge representation. I believe the answer is yes, because if yes, then the brain could be doing learning by using positive weights only in most of the normal time but add additional negative weights on demand when serious mistakes are made which need to be revised effectively. This way positive weights will be dominating but not exclusive. So yes I also think this issue needs to be delayed when we deal with neurally plausible learning. If Boltzmann machine with positive only weights and a constant bias is still universal, the original Boltzmann machine without the softmax manipulation in my paper could suffice for LNP modeling.\", \"For whether the network will diverge to other parts of the posterior, I think this is very probable, since variational inference is a local optimum algorithm, and by my justification in the supplemental material, the semi-stochastic inference algorithm also approaches the local optima of variational inference. The good news is that by <3>, if a model is well learned, variational inference is good in the sense that the variational lower bound is close to the true likelihood, we can interpreting this result as the true posterior is highly single-moded. 
This alleviates the problem of local optima.\", \"For low mean firing rate, I already considered this. A broader question is that if the nonlinear activation is not sigmoid (such as the Figure 1 in my paper), or if the maximum output of the activation function is low, can we still consider the LNP as a inference procedure. My answer is yes. In deriving the variational inference, we obtain the sigmoid activation from differentiating the KL-divergence loss function. If the activation is not sigmoid, we can easily reverse this derivation to obtain the new loss function, and by taking a difference to the original loss function, we get a regularization term. With different ways of fitting LNP to a Boltzmann machine, the regularization term can be different. Most of the time such regularization term is such that they favors low activity, which is reasonable. In this way, a different activation function can be regarded as a regularized variational inference. I can put more about this issue in the next revision if the paper is accepted.\", \"--> In the following I put a brief discussion of the reference papers provided by the reviewers. Please let me know if I made any mistake on my interpretation of these papers, as I read them in a hurry.\", \"J\\u00f3zsef Fiser, Pietro Berkes, Gerg\\u0151 Orb\\u00e1n, M\\u00e1t\\u00e9 Lengyel. Statistically optimal perception and learning: from behavior to neural representations', Trends Cogn Sci. 2010. 14(3):119-130\", \"This paper previously inspired me a lot. Although there is no specific computational models proposed in this paper, most of the schemes which they discussed are what I'm following in my paper. For example, (1) they mentioned that sampling-based approaches, compared to parameter-based approach, has direct option for learning, but they lack the basis for experimental test. My work connects abstract computational models to LNP spiking neurons, allowing direct test (my on-going work right now extends this to Leaky Integrate-and-Fire and Hodgkin-Huxley as well). (2) They also mentioned that parameter-based approaches may suffer from the exponential explosion in the required size of neurons, while connectionist models such as Boltzmann machine rely on distributed coding which don't suffer from this problem. Furthermore, Boltzmann machines may be compact universal approximators. In this sense, modeling knowledge representation in terms of Boltzmann machines would be safe as long as the learnability issue can be addressed. (3) They also mentioned the 'spontaneous activity'. One of the issues when interpreting spikes as samples is that Monte-Carlo methods such as Gibbs sampling does not converge to the right distribution when all neurons sample together in parallel, while my approach provides an alternative viewpoint as a stochastic approximation of variational inference. The variational distribution one can get can be considered as an approximation of a mode of the joint posterior <4>, thus the activities in different neurons will be correlated according where the mode is located, this leads to an explanation of 'spontaneous activity'. (4) They also mentioned that inference and learning should be considered together when talking about representation. I didn't have it in my current paper, but my on-going work, built on this model, relates STDP to backpropagation, and by <1>, error-driven learning is hopeful to yield consistent probabilistic models with a properly chosen learning scheme. 
I will make these works available once the learning rule is tested in actual learning tasks. Also, if one favors likelihood-based learning such as contrastive divergence, the description on how hippocampus works in <2> implies that the positive phase of contrastive divergence, if implemented by variational inference as in <3>, can be preserved as short-term memory in the brain. The negative phase of contrastive divergence may be implemented by dreaming <5> (these early works about unlearning/reverse learning are about Hopfield Networks. But HN is a thresholded version of variational inference in Boltzmann machines with hidden variables, thus in terms of representation and learning, they are highly related to each other).\", \"Reichert, D. Deep Boltzmann Machines as Hierarchical Generative Models of Perceptual Inference in the Cortex. Ph.D. Thesis. 2012.\", \"For now as what I understand, I think this paper is more of an 'analogical model', the clues they use to relate the stochastic property of spiking to machine learning is through Gibbs sampling, which has the issue I mentioned above, while my focus is more on interpreting neural spiking as variational inference.\", \"Shi, L., & Griffiths, T. L. (2009). Neural implementation of hierarchical Bayesian inference by importance sampling. Advances in Neural Information Processing Systems 22\", \"To my understanding, this paper is about the implementation of importance sampler via one-hidden layer feedforward neural networks and then use it as building block to construct a hierarchical model for both top-down generative and bottom-up inference procedure. Thus if this is neurally plausible, it stands for other things that biological neural networks can do, which does not conflict to my work that neurons can perform approximate inference on DBM. The two lines of research can be separately proceeded.\", \"Ma WJ, Beck JM, Pouget A (2008) Spiking networks for Bayesian inference and choice. Current Opinion in Neurobiology 18, 217-22.\", \"This paper is about the Probabilistic Population Code, which is another alternative for how neurons represent probability, belonging to the parameter-based approach discussed in the Fiser etc. 2010 paper above.\", \"Pecevski D, Buesing L, Maass W (2011) Probabilistic Inference in General Graphical Models through Sampling in Stochastic Networks of Spiking Neurons. PLoS Comput Biol 7(12): e1002294\", \"This paper is most interesting to me for now. I cannot comment on it before I read it carefully. I will come back with another post a bit later.\"], \"reference_list\": \"<1> Joshua V. Dillon, Guy Lebanon. Stochastic Composite Likelihood. Journal of Machine Learning Research 11 (2010) 2597-2633.\\n\\n<2> O'Reilly, R. C., Bhattacharyya, R., Howard, M. D., & Ketz, N. (2011). Complementary learning systems. Cognitive Science\\n\\n<3> Ruslan Salakhutdinov and Geoffrey Hinton. Deep boltzmann machines. Arti\\ufb01cial Intelligence, 5(2):448C455, 2009.\\n\\n<4> Thomas Minka. Divergence measures and message passing. Technical report, Microsoft Research, 2005.\\n\\n<5> Francis Crick and Graeme Mitchison, The function of dream sleep, Nature 304, 111 - 114 (14 July 1983); doi:10.1038/304111a0.\\n\\n<6> Wulfram Gerstner and Werner M. Kistler. Spiking Neuron Models: Single Neurons, Populations, Plasticity. 
Cambridge University Press, 1 edition, 2002.\"}", "{\"title\": \"review of Linear-Nonlinear-Poisson Neuron Networks Perform Bayesian Inference On Boltzmann Machines\", \"review\": \"The paper provides an explicit connection between the linear-nonlinear-poisson (LNP) model of biological neural networks and the Boltzmann machine. The author proposes a semi-stochastic inference procedure on Boltzmann machines, with some tweaks, that can be considered equivalent to the inference of an LNP model.\\n\\nAuthor's contributions:\\n(1) Starting from the LNP neuron model the author, in detail, derives one (Eq. 5) that closely resembles a single unit in a Boltzmann machine. \\n(2) A semi-stochastic inference (Eq. 10) for a Boltzmann machine that combines Gibbs sampling and variational inference is introduced.\\n(3) Several tweaks (Sec. 4) are proposed to the semi-stochastic inference (Eq. 10) to mimic Eq.5 as closely as possible.\\n\\nPros)\\nAs I am not an expert in biological neurons and their modeling, it is difficult for me to assess the novelty fully. Though, it is interesting enough to see that the inference in the biological neuronal network (based on the LNP model) corresponds to that in Boltzmann machines. Despite my lack of familiarity with prior work and details of biological neuronal models, the reasoning seems highly detailed and understandable. I believe that not much work has explicitly shown the direct connection between them, at least, not on the level of a single neuron (though, at a high level of abstraction, Reichert (2012) used a DBM as a biological model). \\n\\nCons)\\nIf I understood correctly, unlike what the title seems to claim, the network consisting of LNP neurons does 'not' perform the exact inference on the corresponding Boltzmann machine. Rather, one possible approximate inference (the semi-stochastic inference, in this paper) on the Boltzmann machine corresponds to the LNP neural network (again, in a form presented by the author). \\n\\nI can't seem to understand how the proposed inference, which essentially samples from the variational posterior and uses them to compute the variational parameters, differs much from the original variational inference, except that the proposed method adds random noise in estimating variational params. Well, perhaps, it doesn't really matter much since the point of introducing the new inference scheme was to find the correspondence between the LNP and Boltzmann machine. \\n\\n\\n\\n\\n= References =\\nReichert, D. Deep Boltzmann Machines as Hierarchical Generative Models of Perceptual Inference in the Cortex. Ph.D. Thesis. 2012.\"}", "{\"title\": \"review of Linear-Nonlinear-Poisson Neuron Networks Perform Bayesian Inference On Boltzmann Machines\", \"review\": \"This paper argues that inference in Boltzmann machines can be performed using neurons modelled according to the Linear Nonlinear-Poisson model. The LNP model is first presented, then one variant of the inference procedure for Boltzmann machines is introduced and a section shows that LNP neurons can implement it. 
Experiments show that the inference procedure can produce reconstructions of handwritten digits.\", \"pros\": \"the LNP model is presented at length and LNP neurons can indeed perform the operations needed for inference in the Boltzmann machine model.\", \"cons\": \"the issue of learning the network itself is not tackled here at all.\\nWhile the mapping between the LNP model and the inference process in the machine is particularly detailed here, I did not find this particularly illuminating, given that restricted Boltzmann machines were designed with a simple inference procedure with only very simple operations.\\nI find this paper provides too little new insight to warrant acceptance at the conference.\"}" ] }
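The reviews and replies in the record above repeatedly contrast plain variational (mean-field) inference on a Boltzmann machine with the paper's semi-stochastic variant, in which each update sees Bernoulli samples ("spikes") of the current variational parameters rather than the parameters themselves. The sketch below illustrates only that contrast on a toy fully connected Boltzmann machine with parallel updates; it is an assumption-laden paraphrase of the idea as described in the discussion, not the author's Eq. 10 or code.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 6
W = 0.3 * rng.standard_normal((n, n))
W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)                 # symmetric weights, no self-connections
b = 0.1 * rng.standard_normal(n)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def mean_field(q, n_steps=50):
    """Plain variational inference: each unit sees the others' probabilities."""
    for _ in range(n_steps):
        q = sigmoid(W @ q + b)
    return q

def semi_stochastic(q, n_steps=50):
    """Semi-stochastic variant: each unit sees Bernoulli samples ('spikes')
    drawn from the current variational parameters."""
    for _ in range(n_steps):
        s = (rng.random(n) < q).astype(float)
        q = sigmoid(W @ s + b)
    return q

q0 = np.full(n, 0.5)
print(mean_field(q0.copy()).round(3))
print(semi_stochastic(q0.copy()).round(3))
```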
0OR_OycNMzOF9
Auto-pooling: Learning to Improve Invariance of Image Features from Image Sequences
[ "Sainbayar Sukhbaatar", "Takaki Makino", "Kazuyuki Aihara" ]
Learning invariant representations from images is one of the hardest challenges facing computer vision. Spatial pooling is widely used to create invariance to spatial shifting, but it is restricted to convolutional models. In this paper, we propose a novel pooling method that can learn soft clustering of features from image sequences. It is trained to improve the temporal coherence of features, while keeping the information loss at a minimum. Our method does not use spatial information, so it can be used with non-convolutional models too. Experiments on images extracted from natural videos showed that our method can cluster similar features together. When trained on convolutional features, auto-pooling outperformed traditional spatial pooling on an image classification task, even though it does not use the spatial topology of features.
[ "invariance", "image sequences", "features", "learning", "image features", "images", "invariant representations", "hardest challenges", "computer vision", "spatial pooling" ]
conferencePoster-iclr2013-workshop
https://openreview.net/pdf?id=0OR_OycNMzOF9
https://openreview.net/forum?id=0OR_OycNMzOF9
ICLR.cc/2013/conference
2013
{ "note_id": [ "lvwFsD4fResyH", "agstF_wXReF7S", "7N2E7oCO6yPiH", "Deofes8a4Heux", "A2auXgoqFvTyV", "2U4l21HEl7SVL", "IFLJkDHcu-Ice" ], "note_type": [ "review", "review", "review", "review", "comment", "review", "comment" ], "note_created": [ 1361921040000, 1362276780000, 1362203160000, 1362197040000, 1362655740000, 1361924340000, 1362650580000 ], "note_signatures": [ [ "anonymous reviewer 1dcf" ], [ "Yann LeCun" ], [ "anonymous reviewer 2c2a" ], [ "anonymous reviewer 8b0d" ], [ "Sainbayar Sukhbaatar" ], [ "Ian Goodfellow" ], [ "Sainbayar Sukhbaatar" ] ], "structured_content_str": [ "{\"title\": \"review of Auto-pooling: Learning to Improve Invariance of Image Features from Image Sequences\", \"review\": \"Summary:\\nThis paper proposes learning a pooling layer (not necessarily of a convolutional network) by using temporal coherence to learn the pools. Training is accomplished by minimizing a criterion that encourages the features to change slowly but have high entropy over all.\", \"detailed_comments\": \"-The method demonstrates improvement over a spatial pooling baseline\\n-The experiments here don't allow comparison to prior work on learning pools, such as the paper by Jia and Huang.\\n- The method is not competitive with the state of the art\", \"suggestions_to_authors\": \"In future revisions of this paper, please be more specific about what your source of natural videos was. Just saying vimeo.com is not very specific. vimeo.com has a lot of videos. How many did you use? Do they include the same kinds of objects as you need to classify on CIFAR-10?\\nComparing to Jia and Huang is very important, since they also study learning pooling structure. Note that there are also new papers at ICLR on learning pooling structure you should consider in the future. I think Y-Lan Boureau also wrote a paper on learning pools that might be relevant.\", \"pros\": \"-The method demonstrates some improvement over baseline pooling systems applied to the same task.\", \"cons\": \"-Doesn't compare to prior work on learning pools\\n-The method isn't competitive with the state of the art, despite having access to extra training data.\"}", "{\"review\": \"Interesting paper.\", \"you_might_be_interested_in_this_paper_by_karol_gregor_and_myself\": \"http://arxiv.org/abs/1006.0448\\nThe second part of the paper also describes a kind of pooling based on temporal constancy.\"}", "{\"title\": \"review of Auto-pooling: Learning to Improve Invariance of Image Features from Image Sequences\", \"review\": \"Many vision algorithms comprise a pooling step, which combines the outputs of a feature extraction layer to create invariance or reduce dimensionality, often by taking their average. This paper proposes to refine this pooling step by 1) not restricting pooling to merely spatial dimensions (so that several different features can be combined), and 2) learning it instead of deciding the structure of the pools beforehand.\\nThis is achieved by replacing the pooling step by a linear transformation of the outputs of the feature extractor (here, an autoencoder), with the constraint that all weights be nonnegative. The main intuition for training is that an invariant representation should not change much between two neighboring frames in a video. 
Thus, training is conducted by minimizing a cost function that combines a reconstruction error cost and a frame-to-frame dissimilarity cost: the reconstruction error cost ensures that the representation before pooling can be reconstructed from the pooled output without too much discrepancy, and the dissimilarity cost encourages two neighboring frames in a video to have similar pooled representation.\", \"two_experiments_are_provided\": \"the first one shows that training on patches from natural videos yields pool that combine similar features; the second one tests the algorithm on CIFAR10 and shows that the scheme proposed here performs better than spatial pooling.\\n\\nLearning pooling weights instead of pre-selecting them is appealing, however this work does not demonstrate the value of the advocated approach.\\nFirst, the context given is insufficient; much previous work has explored how to combine different feature maps across feature types rather than only across space, with good results; some of this work is cited here (e.g., ref. 5, Hyv\\u00e4rinen, Hoyer, Inki 2001, ref. 7 Kavukcuoglu et al. 2009), but only briefly mentioned and dismissed because (1) the clusters are required to be fixed manually, (2) clusters are required to have the same size (I am not sure why this paper mentions that, this is not true -- the clusters do have the same size in these papers but it is not a requirement), and (3) there is no 'guarantee that the optimal feature clustering can be mapped into two-dimensional space'. This is true, but the two-dimensional mapping into a topographical map is a bonus, and the same cost functions could be applied with no overlap between the pools, as in the approach advocated here, and still obtain pools that group similar features.\\nIn any case, it is not sufficient to merely state the shortcomings of these previous approaches, without showing that the method here outperforms them and that these supposed shortcomings truly hurt performance.\\nAnother line of work that should definitely be introduced (and isn't), is work enforcing similarity of representations for similar images to train coding. There has been much work on this, even also using video,\\ne.g. Mobahi, Collobert, Weston, Deep Learning from Temporal Coherence in Video, ICML 2009 -- or before that with collections of still images with continuously varying parameters, Hadsell, Chopra and LeCun, Dimensionality Reduction by Learning an Invariant Mapping (CVPR 2006), and much other work. Those older works use similarity-based losses to train encoding features rather than pooling, but this is not a real difference, which is my second point:\\n\\nSecond, comparing the pooling step here to a simple spatial pooling step is somewhat misleading; the 'auto-pooling step' in this paper is a full-fledged linear mapping, with the added restriction that the weights have to be nonnegative. Thus the system is more akin to a two-layer encoding network than a single-layer network. The distinction between 'coding' and 'pooling' is an artificial one anyways; given that auto-pooling has as many parameters are a standard coding step, it should not only be compared to the much simpler spatial pooling.\\nIn terms of performance, the performance on Cifar 10 is much below what can be obtained with a single layer of features (e.g. 
compare the 69.7% here to results between 68.6% and 79.6% in Coates et al.'s 'An Analysis of Single-Layer Networks in Unsupervised Feature Learning', and better performance in subsequent papers by Coates et al.), so this is indeed not very convincing.\\n\\nThe ideas combined here (learning a pooling map, using similarity in neighboring frames,\\n\\nPros/cons:\\n- pros: ideas for generalizing pooling are intuitive and appealing\\n- cons: many of these ideas have been explored elsewhere before, and this paper does not do a suitable job of delineating what the specific contribution is. In fact, it seems that the proposed approach does not have much novelty and most ideas here are already part of existing algorithms; experimental results fail to demonstrate the superiority of the proposed scheme.\"}", "{\"title\": \"review of Auto-pooling: Learning to Improve Invariance of Image Features from Image Sequences\", \"review\": \"The paper presents a method to learn invariant features by using temporal coherence. A set of linear pooling units are trained on top of a set of (pre-trained) features using what is effectively a linear auto-encoder with a penalty for changes over time (a 'slowness' penalty). Visualizations show that the learned weights of the pooling units tend to combine features for translated or slightly rotated edges (as expected for complex cells), and benchmark results show some improvement over hand-coded pooling units.\\n\\nThis is a fairly straight-forward idea that gives pleasing results nonetheless. The main attraction to the method proposed here is its simplicity and modularity: a linear auto-encoder and slowness penalty is very easy to implement and could be used in almost any pipeline. This is simultaneously my main concern about the method: it is significantly subsumed by prior work (though the very simple instance here might differ). For example, see the work of Zou et al. (NIPS 2012) which uses essentially the same training method with nonlinear pooling units, Mobahi et al. (ICML 2009), and work with 'slowness' criteria more generally. That said, considering the many algorithms that have been proposed to learn pooling regions and invariant features without video, the fact that an extremely simple instance like the one here can give reasonable results is worth emphasizing.\", \"pros\": \"(1) A very simple approach that appears to yield plausible invariant features and a modest bump over hand-built pooling in the unsupervised setting.\", \"cons\": \"(1) Only linear pooling units are considered. As a result they do not add much power beyond slight regularization of the linear SVM.\\n(2) Only single-layer networks are considered; results with deep layers might be very interesting.\\n(3) There is quite a lot of prior work with very similar ideas and implementations; hopefully these can be cited and discussed.\"}", "{\"reply\": \"Thank you for the detailed review. Those are good points, and we will consider them in our next revision. We also want to give some explanations.\", \"about_fixed_cluster_size\": [\"Yes. In topographic maps, clusters are not required to have the same size. We will fix this sentence in the next revision. We meant to say that those methods will require an additional mechanism to have adaptive (depends on the nature of its features) cluster sizes.\"], \"about_topographic_maps\": [\"Topographic maps may have their advantages, but I still think it puts artificial restrictions on clustering. 
For example, edge detectors have at least four dimensions: orientation, location and length. Therefore, an ideal clustering can be achieved by placing edge detectors in a four-dimensional map, and grouping nearby edge detectors. It will be difficult to map such a four-dimensional clustering into a two-dimensional plane.\", \"Our pooling method, on the other hand, allows any soft clustering. In addition, the clear advantage of our method over topographic maps is its modularity. The proposed method can be used with any feature learning algorithm, while topographic maps need to alter the feature learning process.\"], \"performance_comparison_to_topographic_maps\": [\"Unfortunately, we could not find any reported result by those approaches on CIFAR10 (please refer to any papers that we may have missed), which is a widely used benchmark for image classification. In the future versions, we will try to implement those algorithms and apply them to CIFAR10.\", \"About Mobahi et al.\\u2019s method:\", \"We didn\\u2019t compare our method to Mobahi et al.\\u2019s methods, because their method is not completely unsupervised. They combined supervised classification learning with unsupervised video coherence learning. I think those two cannot be separated, so their method cannot learn invariant features without labeled data. Our method, on the other hand, can be trained in a completely unsupervised way. The classification with labeled data is only used to show the effectiveness of pooling.\"], \"comparison_to_spatial_pooling\": [\"We compared our method to spatial pooling because it is the most widely used pooling method. Although spatial pooling is simple, it has an advantage of utilizing the spatial information. Since the most dominant variance in the lowest level is spatial shifts, we think that beating spatial pooling without using any spatial information is a notable result.\", \"It is true that our model has more parameters than spatial pooling, and it can be considered as an additional coding layer. Therefore, we may have to compare it to deeper networks. In future works, we will apply our pooling method to deep networks and compare it to other deep networks.\"], \"about_the_performance\": [\"The performance on CIFAR10 depends on three factors: feature learning, pooling and classification. With the same feature learning and classification setting as our experiments (autoencoder white 100 features + linear SVM), Adam Coates et al. reported 62% test accuracy in 'An Analysis of Single-Layer Networks in Unsupervised Feature Learning', which we improved to 69% by only changing the pooling step. Coates et al. showed the performance can be greatly improved by increasing the number of features. However, restricted by the computation time, we used only 100 features instead of 1600, which we think was the main reason of the poor performance. In future, we think we can greatly shorten the computation time of our method by combining auto-pooling with spatial pooling.\"], \"novelty_of_our_paper\": [\"Using similarity in neighboring frames is indeed very old idea. However, the main contribution of our paper is combination of the slow-change constraint with the low-information-loss constraint. 
With a very simple implementation, we showed that a pooling based on slowness can improve traditional spatial pooling, without changing the feature learning process.\"]}", "{\"review\": \"First off, let me say thank you for citing my + my co-authors' paper on measuring invariances.\\n\\nI have a few thoughts about invariance and temporal coherence that I hope you might find helpful.\\n\\nRegarding invariance, I think that invariance is not such a great property on its own. What you really want is to disentangle the different factors of variation in the dataset. Invariance plays a role in this process, because if you want one feature to correspond only to one factor of variation, it must be invariant to all of the others. But it's really a very small part of the picture. For the purposes of our paper on measuring invariances, invariance was a good enough proxy for disentangling that we could use it test the hypothesis that deep learning systems become more invariant with depth. But I don't think invariance is a good enough property to serve as the main part of your objective function.\\n\\nRegarding temporal coherence, I think a common mistake people make is to jump from the idea that features should be 'coherent' to the idea that features should be 'slow.' I think that useful features are spread over a wide spectrum of timescales. It's true that the fastest varying features are probably just noise. But the slowest varying features are probably not especially useful either. For example, if you put a camera on a streetcorner, the amount of sunlight in the scene would usually change slower than the identities of the people in the scene. I think probably the way to make progress with applications of temporal coherence is to study new ways of encouraging features to be coherent rather than just slow.\", \"some_general_suggestions_on_how_to_improve_your_results\": \"You should read Adam Coates' ICML 2011 paper, which is about finding the best training algorithm and feature encoding method for single-layer architectures. I think if you use larger dictionaries (1600 instead of 100), train using OMP-1 or sparse coding instead of sparse autoencoders, and extract using T-encoding you will do much better and have a shot at beating state of the art, or at least beating Jia and Huang. Adam Coates' work shows that sparse autoencoders don't make very good feature extractors, and also that small dictionaries don't perform very well, so you're really hurting your numbers by using that setup as your feature extractor.\\n\\nFinally, I think you're missing a few references. In particular, your approach is very closely related to Slow Feature Analysis, so you should cite Laurenz Wiskott and comment on the similarities.\"}", "{\"reply\": \"First of all, thank you for reviewing our paper. It was a valuable feedback. We will try to include mentioned papers in the next revision.\", \"about_the_video_dataset\": [\"We will include a detailed explanation in the next revision. In short, 40 short (2-5 minutes in length) videos are used in our experiments. We tried to collect videos containing the same objects as CIFAR10. However, images extracted from the videos were very different from CIFAR10 images. Many of them didn't include any object, and some only showed a small part of an object.\", \"Comparison to Jia and Huang's method:\", \"We didn't compare our method to Jia and Huang's method, because they are fundamentally different methods. 
While Jia and Huang's method learns pooling regions in a supervised way, our method tries to learn pooling regions in an unsupervised way, which has many advantages.\", \"Although our method uses additional data, the data used for learning pooling regions was not labeled. On the other hand, Jia and Huang's method has an advantage of using labeled data, which produces pooling regions specialized for the classification task.\"], \"comparison_to_state_of_art_methods\": [\"It is true that our result on CIFAR10 is below the state-of-art. However, as shown by Adam Coates (ICML, 2011), classification results are largely influenced by the configuration of feature learning, especially by the number of features. Since the feature learning was not our research focus, we did little tweaking and optimization in the feature learning step. Also, restricted by the computation time, we didn't use large number of features (100 instead of 1600), which is likely the main reason of the low test accuracies.\", \"In the end, let us restate the main contribution of our paper. Our pooling method is novel because it learned pooling regions in an unsupervised way. In addition, it does not use explicit spatial information and it can be used with any pre-learned features. To the best our knowledge, there is no other pooling method that suffices those conditions.\"]}" ] }
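The training criterion discussed throughout this thread — a nonnegative linear pooling map trained so that the pooled code both reconstructs the input features and changes slowly across neighbouring video frames — can be written down in a few lines. The sketch below is only an illustration of that kind of objective, assuming a tied-weight linear decoder, a squared slowness penalty, and projected gradient descent; the function name, the weighting lam_slow, and all hyperparameters are assumptions, not the authors' implementation.

```python
import numpy as np

def learn_auto_pooling(Z, n_pools, lam_slow=1.0, lr=1e-3, n_iter=500, seed=0):
    """Learn a nonnegative linear pooling map P of shape (n_pools, n_features).

    Z : (n_features, T) activations of pre-trained features on T consecutive
        video frames, so columns t and t+1 are temporal neighbours.
    Cost = ||P^T P Z - Z||^2                      (reconstruction through the pooled code)
         + lam_slow * ||P Z[:, 1:] - P Z[:, :-1]||^2   (slowness of the pooled code).
    """
    rng = np.random.default_rng(seed)
    n_feat, _ = Z.shape
    P = 0.01 * np.abs(rng.standard_normal((n_pools, n_feat)))
    D = Z[:, 1:] - Z[:, :-1]          # frame-to-frame feature differences

    for _ in range(n_iter):
        E = P.T @ (P @ Z) - Z         # reconstruction residual
        grad = 2.0 * P @ (Z @ E.T + E @ Z.T) + 2.0 * lam_slow * P @ (D @ D.T)
        P -= lr * grad
        P = np.maximum(P, 0.0)        # projection keeps pooling weights nonnegative
    return P
```

A pooled representation for a new feature vector z is then simply P @ z.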
YBi6KFA7PfKo5
Two SVDs produce more focal deep learning representations
[ "Hinrich Schuetze", "Christian Scheible" ]
A key characteristic of work on deep learning and neural networks in general is that it relies on representations of the input that support generalization, robust inference, domain adaptation and other desirable functionalities. Much recent progress in the field has focused on efficient and effective methods for computing representations. In this paper, we propose an alternative method that is more efficient than prior work and produces representations that have a property we call focality -- a property we hypothesize to be important for neural network representations. The method consists of a simple application of two consecutive SVDs and is inspired by Anandkumar (2012).
[ "representations", "efficient", "property", "svds", "focal deep", "key characteristic", "work", "deep learning", "neural networks" ]
conferencePoster-iclr2013-workshop
https://openreview.net/pdf?id=YBi6KFA7PfKo5
https://openreview.net/forum?id=YBi6KFA7PfKo5
ICLR.cc/2013/conference
2013
{ "note_id": [ "aK4z5qBF7bEod", "VFwT2CLWfA2kU", "vNpsUSMf3tNfx", "3wTuUWS9F_w4i" ], "note_type": [ "review", "review", "comment", "review" ], "note_created": [ 1363717680000, 1361986620000, 1363717200000, 1362188640000 ], "note_signatures": [ [ "Hinrich Schuetze" ], [ "anonymous reviewer 2448" ], [ "Hinrich Schuetze" ], [ "anonymous reviewer 4c9d" ] ], "structured_content_str": [ "{\"review\": \"Thanks for your comments! The suggestions seem all good and pertinent to us and (in case the paper should be accepted and assuming there is enough space) we will incorporate them when revising the paper. In particular: relate the new method to overview in Turney&Pantel, to kernel PCA and matrix factorization approaches; expand on discussion of focality, addressing concerns about broad applicability (if it's only used as a diagnostic, then it may not be a huge concern that it's somewhat unwieldy); discussion of Turian, Socher and Maas; more details and more thorough description of 1layer vs 2layer (we thought this was pretty directly analogous to single-layer learning vs two-layer deep learning, but will expand on this in a potentially revised version).\\n\\nWe also totally agree that ideally larger experiments on previous data sets should be done. We were hoping that more conceptual papers (introducing new methods and metrics) would be ok without immediate large experiments. We would try to conduct some larger experiments if the paper gets accepted, but cannot promise these would be ready for the conference.\"}", "{\"title\": \"review of Two SVDs produce more focal deep learning representations\", \"review\": \"This paper proposes to use two consecutive SVDs to produce a\\ncontinuous representation. This paper also introduces a property\\ncalled focality. They claim that this property may be important for\", \"neural_network\": \"many classifiers cannot efficiently handle\\nconjunctions of several features unless they are explicitly given as\\nadditional features; therefore a more focal representation of the\\ninputs can be a promising way to tackle this issue. This paper \\nopens a very important discussion thread and provides some interesting\\nstarting points. \\n\\nThere are two contributions in this paper. First, the authors define\\nand motivate the property of focality for the representation of the\\ninput. While the motivation is clear, its implementation is not\\nobvious. For instance, the description provided in the subsection\\n'Discriminative task' is hard to understand: what is really measured\\nhow it is related to the focality property. This part of the paper\\ncould be rephrased to be more explicit. The second contribution is the\\nrepresentation derived by two consecutive SVDs. I would suggest to\\nprovide a bit more of discussion about the related work like LSA, PCA,\\nor the denoising auto-encoder.\\n\\nIn the third paragraph of section 'Discussion', the authors may cite\\nthe work of (Collobert and Weston) and (Socher) for instance.\"}", "{\"reply\": \"Thanks for your comments! If the paper is accepted, we will expand the description of the discrimination task and explain in more detail how it is related to focality (the idea is that a single hidden unit does well on the discrimination -- which is what focality is supposed to capture).\\n\\nWe will also expand the discussion of related methods (LSA, PCA, denoising auto-encoder) if the paper accepted (assuming there is space -- which should be the case).\\n\\nWe will cite&discuss (Collobert and Weston) and (Socher). 
Pointers to other relevant literature would be appreciated.\"}", "{\"title\": \"review of Two SVDs produce more focal deep learning representations\", \"review\": \"This paper introduces a novel method to induce word vector representations from a corpus of unlabeled text. The method relies upon 'stacking' singular value decomposition with an intermediate normalization nonlinearity. The authors propose 'focality' as a metric for quantifying the quality of a learned representation. Finally, control experiments on a small collection of sentence text demonstrates stacked SVD as producing more focal representations than a single SVD.\\n\\nThe method of stacked SVD is novel as far as I know, but could perhaps be generalized to use other nonlinearities between the two SVD layers than length normalization alone. As the authors acknowledge, SVD is a linear transform so the intermediate nonlinearity is important as to have the entire method not reduce to a single linear transform. There are a huge number of ways to use matrix factorization to induce word vectors, Turney & Pantel (JAIR 2010) give a nice review. I would like to better understand the proposed method in the context of the many alternatives to SVD factorization (e.g. kernel PCA etc.). \\n\\nThe introduced notion of focality might serve as a good metric for analysis of learned representation quality. It seems however that measuring focality is only possible with brute force experiments which could make it an unwieldy tool. Expanding on focality as a tool for representation evaluation, both in theory and practice, could strengthen this paper significantly. \\n\\nThe experiments use a small text corpus to demonstrate two SVDs as producing better representations than one. There is much room for improvement in the experiment section. In particular, there are several word representation benchmarks the authors could use to assess the quality of the proposed method relative to previous work:\\n- Turian et al (ACL 2010) compare several word representations and release benchmark code. \\n- Socher et al (EMNLP 2011) release a multi-dimensional sentiment analysis corpus and use neural nets to train word representations\\n- Maas et al (ACL 2011) release a large semi-supervised sentiment analysis corpus and directly compare SVD-obtained word representations with other models\\n\\nThe experiments given are a reasonable sanity check for the model and demonstration of the introduced focality metric. However, the paper would be greatly improved by comparing to previous work on at least one of the tasks in papers listed above. \\n\\nThe 1LAYER vs 2LAYER experiment is not clearly explained. Please expand on the difference in 1 vs 2 layers and the experimental result.\", \"to_summarize\": [\"Novel layer-wise SVD approach to inducing word vectors. Needs to be better explained in the context of matrix factorization alternatives\", \"Novel 'focality' metric which could serve as a tool for measuring learned representation quality. Metric needs more explanation / analysis.\", \"Experiments don't demonstrate the model relative to previous work. This is a major omission since many recent alternatives exist and comparison experiments should be straightforward with several public datasets exist\", \"Overall paper is fairly clear but could use some work\"]}" ] }
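The first review in this thread describes the method as two truncated SVDs stacked with an intermediate normalization nonlinearity. The following sketch illustrates that pipeline under stated assumptions — C is some word-by-context co-occurrence matrix, the nonlinearity is L2 length normalization of each word vector, and k1, k2 are arbitrary choices; it is not the authors' code.

```python
import numpy as np

def two_svd_embeddings(C, k1=100, k2=50):
    """Stack two truncated SVDs with a length-normalization nonlinearity
    in between (illustrative sketch).  C is a word-by-context matrix."""
    # first SVD: project words onto the top-k1 left singular directions
    U1, s1, _ = np.linalg.svd(C, full_matrices=False)
    X1 = U1[:, :k1] * s1[:k1]

    # nonlinearity between the two layers: normalize each word vector to
    # unit length so the stacked transform does not collapse to one SVD
    X1 = X1 / (np.linalg.norm(X1, axis=1, keepdims=True) + 1e-12)

    # second SVD on the normalized representation gives the final vectors
    U2, s2, _ = np.linalg.svd(X1, full_matrices=False)
    return U2[:, :k2] * s2[:k2]
```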
9bFY3t2IJ19AC
Affinity Weighted Embedding
[ "Jason Weston", "Ron Weiss", "Hector Yee" ]
Supervised (linear) embedding models like Wsabie and PSI have proven successful at ranking, recommendation and annotation tasks. However, despite being scalable to large datasets, they do not take full advantage of the extra data due to their linear nature, and typically underfit. We propose a new class of models which aim to provide improved performance while retaining many of the benefits of the existing class of embedding models. Our new approach works by iteratively learning a linear embedding model where the next iteration's features and labels are reweighted as a function of the previous iteration. We describe several variants of the family, and give some initial results.
[ "models", "affinity", "linear", "wsabie", "psi", "successful", "ranking", "recommendation", "annotation tasks", "scalable" ]
conferenceOral-iclr2013-workshop
https://openreview.net/pdf?id=9bFY3t2IJ19AC
https://openreview.net/forum?id=9bFY3t2IJ19AC
ICLR.cc/2013/conference
2013
{ "note_id": [ "9A_uTWCfuoTeF", "X-2g4ZbGhE5Gf", "T5KWotfp6lot7" ], "note_type": [ "review", "review", "review" ], "note_created": [ 1362123720000, 1363646880000, 1362229560000 ], "note_signatures": [ [ "anonymous reviewer 3e4d" ], [ "Jason Weston" ], [ "anonymous reviewer 0248" ] ], "structured_content_str": [ "{\"title\": \"review of Affinity Weighted Embedding\", \"review\": \"Affinity Weighted Embedding\\n\\nPaper summary\\n\\nThis paper extends supervised embedding models by combining them multiplicatively,\\ni.e. f'(x,y) = G(x,y) f(x,y).\\nIt considers two types of model, dot product in the *embedding* space and kernel density in the *embedding* space, where the kernel in the embedding space is restricted to\\nk((x,y),(x','y)) = k(x-x')k(y-y').\\nIt proposes an iterative algorithm which alternates f and G parameter updates.\\n\\nReview Summary\\n\\nThe paper is clear and reads well. The proposed solution is novel. Combining local kernels and linear kernel in different embedding space could leverage the best characteristic for each of them (locality for non-linear, easier training for linear). The experiments are convincing. I would suggest adding the results for G alone.\\n\\nReview Details\\n\\nStep (2), i.e. local kernel, is interesting on its own. Could you report its result? The optimization problem seems harder than step (1), could you quantify how much the pretraining with step (1) helps step (2)? A last related question, how do you initialize the parameters for step (3)?\"}", "{\"review\": [\"The results of G alone are basically the 'k-Nearest Neighbor (Wsabie space)' results that are in the tables.\", \"We initialized the parameters of step 3 with the ones from step 1. Without this I think the results could be worse as you are losing a lot of the pairwise label comparisons from the training if G is sparse, so somehow because of the increased capacity, it is more possible to overfit. This may not be necessary if the dataset is big enough.\", \"Running time depends on the cost of computing G. In the imagenet experiments we did the full nearest neighbor computation (computed in parallel) which is obviously very costly (proportional to the training set size). However approximate kNN could also be considered as we said, amongst other choices of G.\"]}", "{\"title\": \"review of Affinity Weighted Embedding\", \"review\": \"This work proposes a new nonlinear embedding model and applies it to a music annotation and image annotation task. Motivated by the fact that linear embedding models typically underfit on large datasets, the authors propose a nonlinear embedding model with greater capacity. This model weights examples in the embedding by their affinity in an initial linear embedding. The model achieves modest performance improvements on a music annotation task, and large performance improvements on ImageNet annotation. The ImageNet result achieves comparable performance to a very large convolutional net.\\n\\nThe model presented in the paper is novel, addresses an apparent need for a higher capacity model class, and achieves good performance on a very challenging problem. \\n\\nThe paper is clear but has a rushed feel, with some explanations being extremely terse. 
Although the details of the algorithms and experiments are specified, the intuition behind particular algorithmic design choices is not spelled out and the paper would be stronger if these were.\\n\\nThe experimental results are labeled 'preliminary,' and although they demonstrate good performance on ImageNet, they do not carefully investigate how different design choices impact performance. The ImageNet performance comparisons to related algorithms are hard to interpret because of a different train/testing split, and because a recent highly performing convolutional net was not considered (though the authors discuss its likely superior performance). \\n\\nFinally, the presented experiments focus on performance on tasks of interest, but do not address the running time and storage cost of the algorithm. The authors mention the fact that their algorithm is more computationally and space-intensive than linear embedding; it would be useful to see running times (particularly in comparison to Dean et al. and Krizhevsky et al.) to give a more complete picture of the advantages of the algorithm.\"}" ] }
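The first review above summarizes the model as f'(x,y) = G(x,y) f(x,y), with f a linear embedding score and G an affinity (a kernel density or k-nearest-neighbour weight) computed in the embedding space of a previous iteration. Below is a hedged sketch of the kernel-density variant only, assuming Gaussian kernels with a fixed bandwidth sigma and a first-pass embedding (U0, V0); these names and choices are illustrative and do not come from the paper.

```python
import numpy as np

def affinity_weighted_score(x, y, U, V, U0, V0, X_train, Y_train, sigma=1.0):
    """f'(x, y) = G(x, y) * f(x, y)  (illustrative sketch).

    f(x, y) : dot product of the learned embeddings U x and V y.
    G(x, y) : Gaussian kernel density of the pair in the embedding space of a
              first-pass model (U0, V0), estimated over training pairs (x_i, y_i).
    """
    f = (U @ x) @ (V @ y)

    ex, ey = U0 @ x, V0 @ y                        # first-pass embeddings
    dx = (U0 @ X_train.T).T - ex                   # (n_train, d) differences
    dy = (V0 @ Y_train.T).T - ey
    kx = np.exp(-np.sum(dx ** 2, axis=1) / (2 * sigma ** 2))
    ky = np.exp(-np.sum(dy ** 2, axis=1) / (2 * sigma ** 2))
    G = np.mean(kx * ky)                           # product kernel k(x-x_i) k(y-y_i)
    return G * f
```

In a ranking task this score would be evaluated for every candidate label y; as the author reply notes, approximate nearest-neighbour search can stand in for the full sum over training pairs.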
11y_SldoumvZl
Factorized Topic Models
[ "Cheng Zhang", "Carl Henrik Ek", "Hedvig Kjellstrom" ]
In this paper we present a new type of latent topic model, which exploits supervision to produce a factorized representation of the observed data. Through the introduction of a new prior, the structured parameterization separates variance that is shared between classes from variance that is private to each class. The approach allows for more efficient inference and provides an intuitive interpretation of the data in terms of an informative signal together with structured noise. The factorized representation is shown to enhance inference performance for both image and text classification.
[ "topic models", "factorized representation", "variance", "new type", "latent topic model", "supervision", "observed data", "structured parameterization", "classes", "private" ]
conferencePoster-iclr2013-workshop
https://openreview.net/pdf?id=11y_SldoumvZl
https://openreview.net/forum?id=11y_SldoumvZl
ICLR.cc/2013/conference
2013
{ "note_id": [ "gD5ygpn3FZ9Tf", "ADCLANJlZFDlw", "rr6RmiA9Hhs9i", "8nXtnZf5sU-bd", "YYiHlnPjU5YVO", "eeCgjoYcgmDco", "InujBpA-6qILy", "LpoA5MF9bm520" ], "note_type": [ "review", "comment", "review", "comment", "review", "comment", "review", "review" ], "note_created": [ 1362079980000, 1362753420000, 1362457800000, 1363382280000, 1363623420000, 1363139160000, 1362214440000, 1362753660000 ], "note_signatures": [ [ "anonymous reviewer c82a" ], [ "Cheng Zhang, Carl Henrik Ek, Hedvig Kjellstrom" ], [ "anonymous reviewer fda8" ], [ "Cheng Zhang, Carl Henrik Ek, Hedvig Kjellstrom" ], [ "Cheng Zhang, Carl Henrik Ek, Hedvig Kjellstrom" ], [ "anonymous reviewer c82a" ], [ "anonymous reviewer 232f" ], [ "Cheng Zhang, Carl Henrik Ek, Hedvig Kjellstrom" ] ], "structured_content_str": [ "{\"title\": \"review of Factorized Topic Models\", \"review\": [\"A brief summary of the paper's contributions, in the context of prior work.\", \"This paper suggests an improvement over the LDA topic model with class labels of Fei-Fei and Perona [6], which consists in the incorporation of a prior that encourages the class conditional topic distributions to either be specific to a particular class or to be 'shared' across classes. Experiments suggest that this change to the original LDA model of [6] yields topics that are sharply divided into class-specified or shared topics and that are jointly more useful as a discriminative latent representation.\", \"An assessment of novelty and quality.\"], \"i_like_the_motivation_behind_this_work\": \"designing models that explicitly try to separate the class-specific and class-invariant factors of variation is certainly an important goal and makes for a particularly appropriate topic at a conference on learning representations.\\n\\nThe novelty behind this paper is not great, since it adds a small component to a known model. But I wouldn't see this as a strong reason for not accepting this paper. There are, however, other issues which are more serious.\\n\\nFirst, I find the mathematical description of the model to be imprecise. The main contribution of this work lies in the specification of a class-dependent prior over topics. It corresponds to the product of the regular prior from [6] and a new prior factor p(\\theta | kappa), which as far as I know is not explicitly defined anywhere. The authors only describe how this prior affect learning, but since no explicit definition of p(\\theta | kappa) is given, we can't verify that the learning algorithm is consistent with the definition of the prior. Given that the learning algorithm is somewhat complicated, involving some annealing process, I think a proper, explicit definition of the model is important, since it can't be derived easily from the learning algorithm.\\n\\nI also find confusing that the authors refer to h(k) (Eq. 3) as an entropy. To be an entropy, it would need to involve a sum over k, not over c. Even the plate graphic representation of the new model is hard to understand, since \\theta is present in two separate plates (over M).\\n\\nFinally, since there are other alternatives than [6] to supervised training of an LDA topic model, I think a comparison with these other alternatives would be in order. 
In particular, I'm thinking of the two following alternatives:\\n\\nSupervised Topic Models, by Blei and McAuliffe, 2007\", \"disclda\": \"Discriminative Learning for Dimensionality Reduction and Classi\\ufb01cation, by Lacoste-Julien, Sha and Jordan, 2008\\n\\nI think these alternatives should at least be discussed, and one should probably be added as a baseline in the experiments. \\n\\nAs a side comment (and not as a strong criticism of this paper), I'd like to add that I don't think the state of the art for scene classification (or object recognition in general) is actually based on LDA. My understanding is that approaches based on sparse coding + max pooling + linear SVM are better. I still think it's OK for some work to focus on improving a particular class of models. But at one point, perhaps a comparison to these other approaches should be considered.\\n\\n* A list of pros and cons (reasons to accept/reject).\\n\\n|Pros|\\n- attacks an important problem, that of discovering and separating the factors of variation of a data distribution that are either class-dependent or class-shared, in the context of a topic model\\n\\n|Cons|\\n- somewhat incremental work\\n- description of the model is not enough detailed\\n- no comparison with alternative supervised LDA models\"}", "{\"reply\": \"We would like to thank the reviewers for their insightful comments about the paper. We will first provide general comments in response to issues raised by more than one reviewer, and then discuss each of the reviews in more detail.\\n\\nFrom reading the reviews, we realize that the main contribution of the paper seems to have been obscured in the presentation - for example, due to a formulation in the beginning of the abstract (which now is changed). We do not propose a new topic model, but rather introduce a method for latent factorization in topic models. The method that we propose is general and can be adopted to many different topic models. \\n\\nSeveral tasks benefit from a factorized topic space; classification - the one we use to exemplify with in the paper - is just one. Factorized models produce interpretable latent spaces, which has been exploited in continous models for synthesis, as in [A], or for ambiguity modelling or domain transfer, as in [5] (Ek et al.), [B]. We believe the benefits of this transfer to topic models as well.\\n\\nIt would be very interesting to evaluate the benefit of a factorized topic space for a much larger range of topic models than what we do in this paper - this is beyond the scope of this paper but will definitely be pursued in a future journal version. \\n\\nIn a revised version of the paper, which is now uploaded to ArXiv, we have however added results from the SLDA model of Blei and McAuliffe, as a second baseline in the experiments, as suggested by reviewers c82a and fda8. The factorized LDA consistently performs better than both the regular LDA and SLDA.\\n\\nTo stress the focus on factorization rather than a specific classification application, we have furthermore added an experiment with video classification. Other changes, as described below, are also included in this new paper version.\\n\\nNew references (included in the new version):\\n[A]\\tA. C. Damianou, C. H. Ek, M. Titsias, and N. D. Lawrence, \\u201cManifold Relevance Determination,\\u201d International Conference on Machine Learning, 2012.\\n[B]\\tR. Navaratnam, A. W. Fitzgibbon, and R. 
Cipolla, \\u201cThe joint manifold model for semi-supervised multi-valued regression,\\u201d IEEE International Conference on Computer Vision, 2007.\", \"reviewer_c82a\": \"We agree with reviewer c82a that we used the word entropy in a rather sloppy manner. We have strived to make the distinction clear in the revised version.\\n\\nIn Figure 1(b), theta in the main plate is connected with another theta outside, since we use all the topics in theta to compute the entropy-like information measure for each topic theta_m. In this, we adopt a graphical notation similar to [9] (Jia et al.). This is explained more thoroughly in the revised version of the paper.\\n\\nMoreover, p(theta | kappa) is proportional to F(k) in Equation (8). In the revised version of the paper, we explicitly state the form of the proposed prior.\\n\\nFinally, as reviewer c82a clearly states, topic models do not produce state-of-the-art results for scene classification (however, they do produce state-of-the-art results in other domains, such as text). The motivation for using the current classification tasks is that we find that they provide a nice intuition into why one would want a factorized representation, which is able to model separately the 'important information' (class-dependent) and the 'unimportant information/noise' (class-independent).\", \"reviewer_232f\": \"As reviewer 232f correctly states, the class-dependent and the class-independent topics jointly encode the variations in the data. The argument is not, as reviewer 232f suggests, to throw the class-independent topics away - they are important for explaining parts of the data variation. This is not suggested anywhere in the paper. There are many motivations for learning a factorization. In the example application used in the paper, classification, the class dependent topics are the important ones. However, in a transfer learning scenario, the class-independent information is highly relevant. The manner in which factorization is used is highly application and domain specific; in this paper we exemplify one use for classification.\\n\\nAs reviewer 232f points out, using a feature that has been created for discriminative methods in a generative framework might not be particularly sensible. Our motivation for still taking this approach is to make a fair comparison to other topic models, for example, [6] (Fei-Fei and Perona).\\n\\nWe have replaced the term 'view' with 'modality' in the revised version of the paper, and also clarified the relation of our factorization method to the multi-modality methods cited in Section 2. In the literature on factorized latent variable models the word 'view' is predominantly used, but we think that 'modality' is clearer here.\", \"reviewer_fda8\": \"As reviewer fda8 points out, we could achieve the same effect by using a beta distribution instead of A in Equation (7). However, it would still require a entropy-like measurement to steer the beta distribution so as to achieve the desired factorization. \\n\\nAs described above, we have added results using SLDA, and show that the factorized LDA consistently performs better than both regular LDA and SLDA. 
However, we did not have time to implement other variants suggested by reviewer fda8 - this is definitely something which is interesting to do for a journal version.\"}", "{\"title\": \"review of Factorized Topic Models\", \"review\": [\"This paper introduces a new prior for topics in LDA to disentangle general variance and class specific variance.\", \"The other reviews already mentioned the lack of novelty and some missing descriptions. Concretely, the definition of p(\\theta | kappa), which is central to this paper, is not clear. Instead of defining these A(k) functions in figure 7, couldn't you just use a beta distribution as a prior?\", \"In order to publish yet another variant of LDA, more comparisons are needed to the many other LDA-based models already published.\", \"In particular, this paper tackles common issues that have been addresses many times by other authors. In order to have a convincing argument for introducing another LDA-like model, some form of comparison would be needed to a subset of these:\", \"the supervised topic models of Blei et al.,\", \"DiscLDA from Lacoste,\", \"partially labeled LDA from Ramage et al.,\", \"Factorial LDA: Sparse Multi-Dimensional Text Models by Paul and Dredze,\", \"Modeling General and Specific Aspects of Documents with a Probabilistic Topic Model by Chemudugunta et al.\", \"sparsity inducing topic models of Chong Wang et al.\", \"Unless the current model outperforms at least a couple of the above related models it is hard to argue for acceptance.\"]}", "{\"reply\": \"Once again, thanks to reviewer c82a for very helpful comments. We agree that the statement regarding connection between the prior and F(k) was not correct. The parameter kappa should not be considered as a prior in the model, instead it is used as a implementation specific parameter. We have now reorganized the paper so that the model section contains a definition of the factorizing prior p(\\theta) (Eq (5)), which we believe will make things a lot clearer. Furthermore, we have added an appendix B which gives details about the training of the factorized LDA using Gibbs sampling explaining the role of kappa. The definition of F(k) has been moved in appendix B, and its relation to the prior (Eq (5)) has been made clearer.\\n\\nThe definition of the measure H of class-specificity (Eq (3)) has been made clearer by improving the notation. \\n\\nThe new version of the paper has been uploaded in arXiv, which will be public visible at Mon 18, March, 2013, 00:00:00 GMT. Thanks a lot for your time in advance.\"}", "{\"review\": \"Dear reviewers,\\nthe new version of the paper, addressing all the changes in our comments, is public visible now in arXiv. \\nThanks in advance for your time.\"}", "{\"reply\": \"The additional comparison with SLDA is a good step in the right direction and certainly improves my personal appreciation of this paper.\\n\\nUnfortunately, I still cant vouch for the validity of the learning algorithm. First, I'm now even more confused as to what the prior actually is. Indeed, the prior is stated to be proportional to F(k), which depends on a certain Delta A. Also, Delta A, as I understand it, is computed from a comparison between two topic assignments (sampled during Gibbs sampling). However, p(\\theta | kappa) should not depend on topic assignments, since it is a *prior* over the parameters of the topic assignments distribution (p(z|\\theta)). 
I'm under the impression that the authors are confusing the prior and the posterior over \\theta here (the latter being involved in the process of Gibbs sampling, which would indeed involve comparisons between topic assignments).\\n\\nI also still don't find it obvious how the authors derived their learning algorithm from the proposed novel prior. What is the training objective function exactly? How is Gibbs sampling involved in the gradient descent optimization on that objective? How does one derive the specific sampling process described in this paper from the general procedure of Gibbs sampling? These might seem by the authors like questions with obvious answers, but their answer would help a lot for the reader to understand the learning algorithm and be able to reimplement it.\"}", "{\"title\": \"review of Factorized Topic Models\", \"review\": \"This paper presents an extension of Latent Dirichlet Allocation (LDA) which explicitly factors data into a structured noise part (varation shared among classes) and a signal part (variation within a specific class). The model is shown to outperform a baseline of LDA with class labels (Fei-Fei and Perona). The authors also show that the model can extract class-specific and class-shared variability by examining the learned topics.\\n\\nThe authors show that the new model can outperform standard LDA on classification tasks, however, it's not clear to me why one would necessarily use an LDA-based topic model (or topic models in general) if they're just interested in classification. In the introduction, the paper motivates the use of generative models (all well known reasons - learning from sparse data, handling missing observations, and providing estimates of data uncertainty, etc.) But none of these situations are explored in the experiments. So in the end, the paper shows that a model that it not really necessarily good at classification being improved but not to the point where it's better than discriminative models and not in a context of where a generative model would really be helpful.\", \"positive_points\": [\"The method seems sound in its motivation and construction\", \"The model is shown to work on different modalities (text and images)\", \"The model outperforms classical LDA for classification\", \"Negative points\", \"As per my comments above, there may be situations in which one would want to use this type of model for classification, but they haven't been explored in this paper\", \"The argument that the model produces sparser topic representations could be more convincing: in 4.3, the paper claims that the class-specific topic space effectively used for classification consists of 8 topics, where 12 topics are devoted to modeling structured noise; however the 12 noise topics still form part of the representation. Is the argument that the non-class topics would be thrown away after learning while the class-specific topics are retained and/or stored?\"], \"specific_comments\": \"In the third paragraph of the introduction, I'm not sure about choosing SIFT as the feature extraction step of this example of inference in generative models. I can't think of examples where SIFT has been used as part of a generative model -- it seems to be a classical feature extraction step for discriminative methods. Therefore, why not use an example of features that are learned generatively for this example?\\n\\nIn Section 2, the paper begins to talk about 'views' without really defining what is meant by a 'view'. 
Earlier, the paper discussed 'class-dependent' and 'class-independent' variance, and now 'view-dependent' and 'view-independent' variance - but they are not the same thing (though connected, as in the next paragraph the paper describes providing class labels as an additional 'view'). Perhaps it's best just to define up-front what's meant by a 'view'. If 'views' are just being used as a generalization of classes here in describing related work, just state that. Maybe the generalization for purposes of this discussion is not even necessary and the concept of 'view' just adds confusion.\", \"section_2\": \"'only the private class topics contain the relevant _____ for class inference'.\", \"end_of_section_3\": \"'we associate *low*-entropy topics as class-dependent while *low*-entropy topics are considered as independent' ? (change second low to high)\"}", "{\"review\": \"We would like to thank the reviewers for their insightful comments about the paper. We will first provide general comments in response to issues raised by more than one reviewer, and then discuss each of the reviews in more detail.\\n\\nFrom reading the reviews, we realize that the main contribution of the paper seems to have been obscured in the presentation - for example, due to a formulation in the beginning of the abstract (which now is changed). We do not propose a new topic model, but rather introduce a method for latent factorization in topic models. The method that we propose is general and can be adopted to many different topic models. \\n\\nSeveral tasks benefit from a factorized topic space; classification - the one we use to exemplify with in the paper - is just one. Factorized models produce interpretable latent spaces, which has been exploited in continous models for synthesis, as in [A], or for ambiguity modelling or domain transfer, as in [5] (Ek et al.), [B]. We believe the benefits of this transfer to topic models as well.\\n\\nIt would be very interesting to evaluate the benefit of a factorized topic space for a much larger range of topic models than what we do in this paper - this is beyond the scope of this paper but will definitely be pursued in a future journal version. \\n\\nIn a revised version of the paper, which is now uploaded to ArXiv, we have however added results from the SLDA model of Blei and McAuliffe, as a second baseline in the experiments, as suggested by reviewers c82a and fda8. The factorized LDA consistently performs better than both the regular LDA and SLDA.\\n\\nTo stress the focus on factorization rather than a specific classification application, we have furthermore added an experiment with video classification. Other changes, as described below, are also included in this new paper version.\\n\\nNew references (included in the new version):\\n[A]\\tA. C. Damianou, C. H. Ek, M. Titsias, and N. D. Lawrence, \\u201cManifold Relevance Determination,\\u201d International Conference on Machine Learning, 2012.\\n[B]\\tR. Navaratnam, A. W. Fitzgibbon, and R. Cipolla, \\u201cThe joint manifold model for semi-supervised multi-valued regression,\\u201d IEEE International Conference on Computer Vision, 2007.\", \"reviewer_c82a\": \"We agree with reviewer c82a that we used the word entropy in a rather sloppy manner. We have strived to make the distinction clear in the revised version.\\n\\nIn Figure 1(b), theta in the main plate is connected with another theta outside, since we use all the topics in theta to compute the entropy-like information measure for each topic theta_m. 
In this, we adopt a graphical notation similar to [9] (Jia et al.). This is explained more thoroughly in the revised version of the paper.\\n\\nMoreover, p(theta | kappa) is proportional to F(k) in Equation (8). In the revised version of the paper, we explicitly state the form of the proposed prior.\\n\\nFinally, as reviewer c82a clearly states, topic models do not produce state-of-the-art results for scene classification (however, they do produce state-of-the-art results in other domains, such as text). The motivation for using the current classification tasks is that we find that they provide a nice intuition into why one would want a factorized representation, which is able to model separately the 'important information' (class-dependent) and the 'unimportant information/noise' (class-independent).\", \"reviewer_232f\": \"As reviewer 232f correctly states, the class-dependent and the class-independent topics jointly encode the variations in the data. The argument is not, as reviewer 232f suggests, to throw the class-independent topics away - they are important for explaining parts of the data variation. This is not suggested anywhere in the paper. There are many motivations for learning a factorization. In the example application used in the paper, classification, the class dependent topics are the important ones. However, in a transfer learning scenario, the class-independent information is highly relevant. The manner in which factorization is used is highly application and domain specific; in this paper we exemplify one use for classification.\\n\\nAs reviewer 232f points out, using a feature that has been created for discriminative methods in a generative framework might not be particularly sensible. Our motivation for still taking this approach is to make a fair comparison to other topic models, for example, [6] (Fei-Fei and Perona).\\n\\nWe have replaced the term 'view' with 'modality' in the revised version of the paper, and also clarified the relation of our factorization method to the multi-modality methods cited in Section 2. In the literature on factorized latent variable models the word 'view' is predominantly used, but we think that 'modality' is clearer here.\", \"reviewer_fda8\": \"As reviewer fda8 points out, we could achieve the same effect by using a beta distribution instead of A in Equation (7). However, it would still require a entropy-like measurement to steer the beta distribution so as to achieve the desired factorization. \\n\\nAs described above, we have added results using SLDA, and show that the factorized LDA consistently performs better than both regular LDA and SLDA. However, we did not have time to implement other variants suggested by reviewer fda8 - this is definitely something which is interesting to do for a journal version.\"}" ] }
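Much of the exchange above turns on the per-topic, entropy-like measure of class specificity (the paper's Eq. 3) that drives the factorizing prior. Since its exact form is the point of debate in the thread, the sketch below only illustrates the underlying idea — score each topic by the entropy of its mass across classes, so that low-entropy topics look class-dependent and high-entropy topics look shared — assuming per-document topic proportions and class labels are available. It is not a reconstruction of the paper's actual prior.

```python
import numpy as np

def topic_class_specificity(theta, labels, n_classes):
    """Entropy over classes for each topic (illustrative sketch).

    theta  : (n_docs, n_topics) per-document topic proportions.
    labels : (n_docs,) integer class label of each document.
    """
    n_topics = theta.shape[1]
    mass = np.zeros((n_classes, n_topics))
    for c in range(n_classes):
        mass[c] = theta[labels == c].sum(axis=0)   # topic mass inside class c
    p_c_given_k = mass / (mass.sum(axis=0, keepdims=True) + 1e-12)
    h = -(p_c_given_k * np.log(p_c_given_k + 1e-12)).sum(axis=0)
    return h   # one value per topic; thresholding splits private vs. shared topics
```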
bI58OFtQlLOQ7
Deep Learning for Detecting Robotic Grasps
[ "Ian Lenz", "Honglak Lee", "Ashutosh Saxena" ]
In this work, we consider the problem of detecting robotic grasps in an RGB-D view of a scene containing objects. We present a two-step cascaded structure, where we have two deep networks, with the top detections from the first one re-evaluated by the second one. The first deep network has fewer features and is therefore faster to run, but makes more mistakes; the second network has more features and therefore gives better detections. Unlike previous works that need to design these features manually, deep learning gives us flexibility in designing such multi-step cascaded detectors.
[ "deep learning", "robotic grasps", "features", "work", "problem", "view", "scene", "objects", "cascaded structure" ]
conferenceOral-iclr2013-workshop
https://openreview.net/pdf?id=bI58OFtQlLOQ7
https://openreview.net/forum?id=bI58OFtQlLOQ7
ICLR.cc/2013/conference
2013
{ "note_id": [ "Fsg-G38UWSlUP", "Sl9E4V1iE8lfU" ], "note_type": [ "review", "review" ], "note_created": [ 1362414180000, 1362192180000 ], "note_signatures": [ [ "anonymous reviewer cf06" ], [ "anonymous reviewer b096" ] ], "structured_content_str": [ "{\"title\": \"review of Deep Learning for Detecting Robotic Grasps\", \"review\": \"This paper uses a two-pass detection mechanism with sparse autoencoders for robotic grasp detection, a new application of deep learning. The methods used are fairly standard by now (two pass and autoencoders), so the main novelty of the paper is its nice application. It shows good results, which are well presented and hold the promise of future extensions in this area.\\n\\nThe main issue I have with the paper is that it seems 'unfinished'; text wise I would have liked to see a proper conclusion and some more details on training; regarding its methods, I have the feeling this is work in its early stages.\", \"pros\": [\"novel and successful application\", \"expert implementation of deep learning\"], \"cons\": [\"'unfinished' early work\", \"this is an application paper, not a novel method (admittedly not necessarily a 'con')\"]}", "{\"title\": \"review of Deep Learning for Detecting Robotic Grasps\", \"review\": \"Summary: this paper uses the common 2-step procedure to first eliminate most of unlikely detection windows (high recall), then use a network with higher capacity for better discrimination (high precision). Deep learning (in the unsupervised sense) helps having features optimized for each of these 2 different tasks, adapt them for different situations (different robotics grippers) and beat hand-designed features for detection of graspable areas, using a mixture of inputs (depth + rgb + xyz).\", \"novelty\": \"deep learning for detection is not as uncommon as the authors suggest (pedestrians detection by [4] and imagenet 2012 detection challenge by Krizhevsky), however its application to robotics grasping detection is indeed novel. And detecting rotations (optimal grasping detection) while not completely novel is not extremely common.\", \"quality\": \"the experiments are well conducted (e.g. proper 5-fold cross validation).\", \"pros\": [\"Deep learning successfully demonstrated in a new domain.\", \"Goes beyond the simpler task of classification.\", \"Unsupervised learning itself clearly learns interesting 3D features of graspable areas versus non-graspable ones.\", \"Demonstrates superior results to hand-coded features and automatic adaptability to different grippers.\", \"The 2-pass shows improvements in quality and ~2x speedup.\"], \"cons\": [\"Even though networks are fairly small, the system is still far from realtime. Maybe explaining what the current bottlenecks are and further work would be interesting. Maybe you want to use convolutional networks to speed-up detection (no need to recompute each window's features, a lot of them are shared in a detection setting).\"]}" ] }
-AIqBI4_qZAQ1
Information Theoretic Learning with Infinitely Divisible Kernels
[ "Luis Gonzalo Sánchez", "Jose C. Principe" ]
In this paper, we develop a framework for information theoretic learning based on infinitely divisible matrices. We formulate an entropy-like functional on positive definite matrices based on Renyi's entropy definition and examine some key properties of this functional that lead to the concept of infinite divisibility. The proposed formulation avoids the plug-in estimation of density and brings along the representation power of reproducing kernel Hilbert spaces. We show how analogues to quantities such as conditional entropy can be defined, enabling solutions to learning problems. In particular, we derive a supervised metric learning algorithm with very competitive results.
[ "information theoretic learning", "functional", "divisible kernels", "framework", "divisible matrices", "positive definite matrices", "renyi", "entropy definition", "key properties" ]
conferenceOral-iclr2013-conference
https://openreview.net/pdf?id=-AIqBI4_qZAQ1
https://openreview.net/forum?id=-AIqBI4_qZAQ1
ICLR.cc/2013/conference
2013
{ "note_id": [ "JJQpYH2mRDJmM", "J04ah1kBas0qR", "suhMsqNkdKs6R", "ssJmfOuxKafV5", "cUCwU-yxtoURe", "nVC7VhbpFDnlL", "5pA7ERXu7H5uQ", "hhNRhrYspih_x" ], "note_type": [ "review", "review", "comment", "review", "review", "comment", "review", "comment" ], "note_created": [ 1363989120000, 1362229800000, 1363799700000, 1363775340000, 1362176820000, 1363773600000, 1362276900000, 1363772280000 ], "note_signatures": [ [ "Luis Gonzalo Sánchez" ], [ "anonymous reviewer 4ccd" ], [ "Luis Gonzalo Sánchez" ], [ "Luis Gonzalo Sánchez" ], [ "anonymous reviewer 2169" ], [ "Luis Gonzalo Sánchez" ], [ "anonymous reviewer 5093" ], [ "Luis Gonzalo Sánchez" ] ], "structured_content_str": [ "{\"review\": \"The newest version of the paper will appear on arXiv by Monday March 25th.\", \"in_the_mean_time_the_paper_can_be_seen_at_the_following_link\": \"\", \"https\": \"//docs.google.com/file/d/0B6IHvj9GXU3dMk1IeUNfUEpqSmc/edit?usp=sharing\"}", "{\"title\": \"review of Information Theoretic Learning with Infinitely Divisible Kernels\", \"review\": \"This paper introduces new entropy-like quantities on positive semi definite matrices. These quantities can be directly calculated from the Gram matrix of the data, and they do not require density estimation. This is an attractive property, because density estimation can be difficult in many cases. Based on this theory, the authors propose a supervised metric learning algorithm which achieves competitive results.\", \"pros\": \"The problem studied in the paper is interesting and important. The empirical results are promising.\", \"cons\": \"i) Although I believe that there are many great ideas in the paper, in my opinion the presentation of the paper needs significant improvement. It is very difficult to asses what exactly the novel contributions are in the paper, because the authors didn't separate their new results well enough from the existing results. For example, Section 3 is about infinitely divisible matrices, but I don't know what exactly the new results are in this section. \\n\\nii) The introduction and motivation could be improved as well. The main message and its importance is a bit vague to me. I recommend revising Section 1. The main motivation to design new entropy like quantities was that density estimation is difficult and we might need lots of sample points to get satisfactory results. That's true that the proposed approach doesn't require density estimation, but it is still not clear if the proposed approach works better than those algorithms that use density estimators. \\nThe empirical results seem very promising, so maybe I would emphasize them more.\\n\\niii) There are a few places in the text where the presented idea is simple, but it is presented in a complicated way and therefore it is difficult to understand. For example Section 4.1 and 4.2 seem more difficult than they should be. The definition of function F is not clear either.\\n\\niv) There are a few typos and grammatical mistakes in the paper that also need to be fixed before publication.\\nFor example, on Page 1:\", \"page_1\": \"jlearning\"}", "{\"reply\": \"This is the same comment from below, we just realized that this is the reply button for your comments.\\nDear reviewer, we appreciate the comments and the effort put into reviewing our work. We believe you have made a very valid point by asking us about the role of alpha. The order of the matrix entropy acts as an Lp norm on the eigenvalues of the Gram matrix. The larger the entropy the more emphasis on the largest eigenvalues. 
This behaviour translates onto our metric learning algorithm as going from multimodal very flexible class structure towards a unimodal more constrained class structure as we increase alpha. We include an example that illustrates this behaviour. With respect to HSIC, it is true that for alpha =2 the trace of K^2 shares some resemblance with the criterion. However there are several differences that makes the connection hard to establish. First, when dealing with covariance operation it has been already assumed that the mean elements have been removed (covariance operator is centred) . As we see form the introductory motivation the second order entropy is the norm of the mean vector in the RKHS. If the mean is removed this vector has zero norm. We also require our Gram matrix to have non-negative entries so that our information theoretic interpretation makes sense. We have now included comparisons with NCA.\"}", "{\"review\": \"The new version of the paper can be accessed through\", \"https\": \"//docs.google.com/file/d/0B6IHvj9GXU3dekxXMHZVdmphTXc/edit?usp=sharing\\n\\nuntil it is updated in arXiv\"}", "{\"title\": \"review of Information Theoretic Learning with Infinitely Divisible Kernels\", \"review\": \"The paper introduces a new approach to supervised metric learning. The\\nsetting is somewhat similar to the information-theoretic approach of\\nDavis et al. (2007). The main difference is that here the\\nparameterized Mahalanobis distance is tuned by optimizing a new\\ninformation-theoretical criterion, based on a matrix functional\\ninspired by Renyi's entropy. Eqs. (5), (11) and (19) and their\\nexplanations are basically enough to grasp the basic idea. In order to\\nreach the above goal, several mathematical technicalities are\\nnecessary and well developed in the paper. A key tool are infinitely\\ndivisible matrices.\\n\\n + New criterion for information-theoretic learning\\n + The mathematical development is sound\\n+/- The Renyi-inspired functional could be useful in other contexts\\n (but details remain unanswered in the paper)\\n\\n - The presentation is very technical and goes bottom-up making it\\n difficult to get the 'big picture' (which is not too complicated)\\n until Section 4 (also it's not immediately clear which parts\\n convey the essential message of the paper and which parts are just\\n technical details, for example Section 2.1 could be safely moved\\n into an appendix mentioning the result when needed).\\n\\n - Experiments show that the method works. I think this is almost\\n enough for a conference paper. Still, it would improve the paper\\n to see a clear direct comparison between this approach and\\n KL-divergence where the advantages outlined in the conclusions\\n (quote: 'The proposed quantities do not assume that the density of\\n the data has been estimated, which avoids the difficulties related\\n to it.') are really appreciated. Perhaps an experiment with\\n artificial data could be enough to complete this paper but real\\n world applications would be nice to see in the future.\", \"minor\": \"Section 2. Some undefined symbols: $M_n$, $sigma(A)$ (spectrum of A?)\", \"page_3\": \"I think you mean\\n 'where $n_1$ of the entries are 1$ -> $where $n_1$ of the entries of $mathbf{1}$ are 1$\"}", "{\"reply\": \"Thanks again for the good comments. We have worked hard on improving the presentation of the results.\", \"with_regard_to_your_cons\": \"i) We improve the presentation of the ideas by highlighting what are the contributions and why they are relevant. 
In section 3, where there was no clear delineation between what is know and what is new, we put our effort on explaining the reason in including some well known results since they help understanding the role of the infinite divisible kernels in computing the proposed information theoretic quantities. We provide both a graphical and textual explanation of the main ideas that can be extracted from section 3.\\nii) Section 1 was revisited and redistributed to facilitate grasping the main ideas and contributions. We tried to emphasize more on the results obtained for the application to metric learning. We also motivate the proposed quantities from the point of view of computing high order descriptors of the data based on positive definite kernels.\\niii) Section 4 was modified to convey the main result, which is the computation of the gradient of the proposed entropy, in a much simpler way. The technical details were moved to an appendix.\\niv) we took care of the typos that have been pointed out by the reviewers as well as other we found during the paper improvement.\"}", "{\"title\": \"review of Information Theoretic Learning with Infinitely Divisible Kernels\", \"review\": \"This paper proposes a new type of information measure for positive semidefinte matrices, which is essentially the logarithm of the sum of powers of eigenvalues. Several entropy-like properties are shown based on properties of spectral functions. A notion of joint entropy is then defined through Hadamard products, which leads to conditional entropies.\\n\\nThe newly defined conditional entropy is finally applied to metric learning, leading naturally to a gradient descent procedure. Experiments show that the performance of the new procedure exceeds the state of the art (e.g., LMNN).\\n\\nI did not understand the part on infinitely divisible matrices and why Theorem 3.1 leads to a link with maximum entropy.\\n\\nTo the best of my knowledge, the ideas proposed in the paper are novel. I like the approach of trying to defining measures that have similar properties than entropies without the computational burden of computing densities. However, I would have like more discussion of the effect of alpha (e.g., why alpha=1.01 in experiments? does it make a big difference to change alpha? what does it corresponds to for alpha =2, in particular in relation fo HSIC?).\", \"pros\": \"-New information measure with attractive properties\\n-Simple algorithm for metric learning\", \"cons\": \"-Lack of comparison with NCA which is another non-convex approach (J. Goldberger, S. Roweis, G. Hinton, R. Salakhutdinov. (2005) Neighbourhood Component Analysis. Advances in Neural Information Processing Systems. 17, 513-520.\\n-Too little discussion on the choice of alpha\"}", "{\"reply\": \"Thanks for the comments. We really appreciate the time you put into reviewing our paper. I agree that in the original presentation many of the main points and contributions of the paper where hard to grasp. In the new version, we have made our contributions explicit. and some of the technical exposition was modified to avoid getting lost in t details. We emphasized on the equations to and provide better motivations for the mathematical developments of each section. We agree that some of the details could be safely moved to an appendix, without compromising the relevant results.\"}" ] }
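As reviewer 5093 puts it, the proposed functional is essentially the logarithm of the sum of powers of the eigenvalues of a Gram matrix, with joint entropies built from Hadamard products and conditional entropies obtained as their difference. The NumPy sketch below illustrates those three quantities as the reviews describe them; the trace normalization, the base-2 logarithm, the Gaussian kernel, the toy label kernel, and alpha = 1.01 (the value the reviewer asks about) are assumptions stated for this illustration rather than equations quoted from the paper.

```python
import numpy as np

def matrix_renyi_entropy(K, alpha=1.01):
    """Entropy-like functional of a positive semidefinite Gram matrix K:
    1/(1 - alpha) times the log of the sum of alpha-powers of the
    eigenvalues of the trace-normalized matrix."""
    A = K / np.trace(K)                           # eigenvalues now sum to one
    lam = np.clip(np.linalg.eigvalsh(A), 0.0, None)
    return np.log2(np.sum(lam ** alpha)) / (1.0 - alpha)

def joint_entropy(Ka, Kb, alpha=1.01):
    """Joint entropy via the Hadamard (element-wise) product of two Gram matrices."""
    return matrix_renyi_entropy(Ka * Kb, alpha)

def conditional_entropy(Ka, Kb, alpha=1.01):
    """H(A|B) = H(A,B) - H(B); the kind of quantity used as the objective
    in the supervised metric-learning application."""
    return joint_entropy(Ka, Kb, alpha) - matrix_renyi_entropy(Kb, alpha)

# Toy usage: Gaussian (infinitely divisible) kernel on random features,
# co-membership kernel on random labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
y = rng.integers(0, 3, size=50)
D2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
Kx = np.exp(-D2 / 2.0)
Ky = (y[:, None] == y[None, :]).astype(float)
print(conditional_entropy(Ky, Kx))                # entropy of the labels given the features
```

Consistent with the authors' reply about the role of alpha, raising alpha in this functional puts more weight on the largest eigenvalues of the Gram matrix.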
KKZ-FeUj-9kjY
Sparse Penalty in Deep Belief Networks: Using the Mixed Norm Constraint
[ "Xanadu Halkias", "Sébastien PARIS", "Herve Glotin" ]
Deep Belief Networks (DBN) have been successfully applied to popular machine learning tasks. Specifically, when applied to hand-written digit recognition, DBNs have achieved approximate accuracy rates of 98.8%. In an effort to optimize the data representation achieved by the DBN and maximize its descriptive power, recent advances have focused on inducing sparse constraints at each layer of the DBN. In this paper we present a theoretical approach for sparse constraints in the DBN using the mixed norm for both non-overlapping and overlapping groups. We explore how these constraints affect the classification accuracy for digit recognition on three different datasets (MNIST, USPS, RIMES) and provide initial estimates of their usefulness by altering different parameters such as the group size and overlap percentage.
[ "dbn", "deep belief networks", "digit recognition", "sparse constraints", "sparse penalty", "dbns", "approximate accuracy rates" ]
reject
https://openreview.net/pdf?id=KKZ-FeUj-9kjY
https://openreview.net/forum?id=KKZ-FeUj-9kjY
ICLR.cc/2013/conference
2013
{ "note_id": [ "ttT0L-IGxpbuw", "ijgMjq-uMOiYw", "dWSK4E1RkeWRi" ], "note_type": [ "review", "review", "review" ], "note_created": [ 1362153000000, 1362144480000, 1362193620000 ], "note_signatures": [ [ "anonymous reviewer 0136" ], [ "anonymous reviewer 61fc" ], [ "anonymous reviewer e6d4" ] ], "structured_content_str": [ "{\"title\": \"review of Sparse Penalty in Deep Belief Networks: Using the Mixed Norm Constraint\", \"review\": \"The paper proposes a mixed norm penalty for regularizing RBMs and DBNs. The work extends previous work on sparse RBMs and DBNs and extends the work of Luo et al. (2011) on sparse group RBMs (and DBMs) to deep belief nets. The method is tested on several datasets and no significant improvement is reported compared to the original DBN.\\n\\nThe paper has limited novelty as the proposed mixed-norm has already been investigated in details by Luo et al. (2011) on a RBM and DBM. Also, the original contribution is not properly referenced as it appears only in the references section but not in the text.\\n\\nIn the caption of Figure 1, it is said that hidden units will overrepresent vs. underrepresent the data. It is unclear what is exactly meant. Can this be quantified? Is this overrepresentation/underrepresentation problem intrinsic to the investigated mixed norms or is it more a question of choosing the right hyperparameters? The authors use a fixed regularization parameter for all investigated variants of the DBN. Could that be the reason for under/overrepresentation?\\n\\nThe authors are choosing three datasets that are all isolated handwritten digits recognition datasets. There are other problems such as handwritten characters (e.g. Chinese), Caltech 101 silhouettes, that also have binary representation and would be worth considering in order to assess the generality of the proposed method. Also, if the authors are targeting the handwriting recognition application, more realistic and challenging scenarios could be considered (e.g. non-isolated characters).\", \"minor_comments\": [\"The last version of the paper (v2) is not properly compiled and the citations are missing.\", \"The filters shown in Figure 1 should be made bigger.\", \"In Figure 2 and 4, x and y labels should be made bigger.\", \"Figure 2 is discussed in the caption of Figure 1.\"]}", "{\"title\": \"review of Sparse Penalty in Deep Belief Networks: Using the Mixed Norm Constraint\", \"review\": \"In this paper the authors propose a method to make the hidden units of RBM group sparse. The key idea is to add a penalty term to the negative log-likelihood loss penalizing the L2/L1 norm over the activations of the RBM. The authors demonstrate their method on three digit classification tasks. These experiments show similar accuracy to the baseline model but faster convergence.\\n\\nThere is a vast literature on sparse coding and group sparse coding and several references are missing.\", \"among_the_works_that_use_group_sparse_coding_but_not_rbms_there_are\": \"A. Hyvarinen and U. Koster. Complex cell pooling and the statistics of natural images. Network, 18(2):81\\u2013100, 2007 \\nK. Kavukcuoglu, M. Ranzato, R. Fergus, Y. LeCun. 'Learning Invariant Features through Topographic Filter Maps'. Proc. of Computer Vision and Pattern Recognition Conference (CVPR 2009), Miami, 2009\", \"while_these_are_some_works_related_to_rbm_where_sparse_features_are_grouped_in_a_similar_way_to_group_sparse_coding_methods\": \"S. Osindero, M. Welling, and G. E. Hinton. 
Topographic product\\nmodels applied to natural scene statistics. Neural Comp., 18:\\n344\\u2013381, 2006.\\nM. Ranzato, A. Krizhevsky, G. Hinton, 'Factored 3-Way Restricted Boltzmann Machines for Modeling Natural Images'. Proc. of the 13-th International Conference on Artificial Intelligence and Statistics (AISTATS 2010), Italy, 2010\\n\\nOverall, the novelty of the proposed method is limited. It would be sufficient if the method was well motivated and described (see more detailed comments below). The quality of the work is fair since also the empirical validation is pretty weak: comparisons are reported on three small datasets which are very similar to each other, accuracy is on par with baseline methods and only convergence time is better but this finding has not been analyzed enough to make solid conclusions.\\n\\nPROS\\n+ simple method\\n\\nCONS\\n- limited novelty\\n- the method is not well motivated (see below)\\n- missing references\\n- unconvincing empirical validation\\n- writing needs improvements (see below)\\n\\nDetailed comments\\n-- The major concern is about the proposed method in general. \\nOn the one hand it makes totally sense to add a sparsity constraint to the negative log likelihood loss. On the other hand, RBM are a probabilistic model and one wonders what this additional term means. If it is interpreted as a prior on the conditional distribution over the hidden units, how is that changing the marginal likelihood, for instance? This takes to the discussion on an alternative approach which is to wrap the group sparsity constraint into the probabilistic model itself and to maximize the likelihood of this. The above references on topographic PoT and cRBM can indeed be interpreted as extensions of RBMs to make hidden units sparse in groups.\\nA potential problem with the current formulation is that inference of the features does not take into account any sparsity (which is achieved only through learning). Overall after fine-tuning, one may expect little if any sparsity in the hidden units (which may explain why results are so similar to the baseline). \\nIn light of this, it would have been nice if the authors commented on this way to tackle the problem, advantages and disadvantages of each approach.\\nMore generally, I found very weak the motivation of this paper. The reason why sparsity and group sparsity is enforced is pretty vague and unconvincing.\\n-- The empirical validation is very weak. The three datasets are very homogeneous and results are not better than the baseline.\\nWhy is DBN so much slower? This is the strongest result of the paper in my opinion but it is not clear why that happens. \\n-- There are lots of imprecise statements. Here a few.\\nFirst, the title should be changed from 'DBN' to 'RBM'.\\nAbstract\\nThe results in the abstract '98.8' may refer to a specific dataset (MNIST?) but does not hold in general.\\n'optimize the data representation achieved by the DBN \\u2026' is vague.\\n'theoretical approach': I would not call this approach theoretical!\\nSec. 
1\\n'due to their generative and unsupervised learning framework': needs to be rephrased.\\n[2, 3]: these references are not appropriate, perhaps [12, 13]?\"}", "{\"title\": \"review of Sparse Penalty in Deep Belief Networks: Using the Mixed Norm Constraint\", \"review\": \"Since the last version of the paper (v2) is incomplete my following comments are mainly based on the first version.\\n\\nThis paper proposes using $l_{1,2}$ regularization (for both non-overlapping and overlapping groups) upon the activation possibilities of hidden units in RBMs. Then DBNs pretrained by the resulting mixed Norm RBMs are applied the task of digit recognition. \\n\\nMy main concern is the mistakes in equation 16 (and 17), the core of this paper. The sign of the term of $lambda$ should be minus. There is also a missing $P(h_j=1|x^l)$ in that term. Since these mistakes could explain why the results are worse than the baseline and why the bigger non-overlapping groups (which can make the regularization term smaller) are preferred very well, I do not think they are merely typos.\"}" ] }
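Reviewer 61fc summarizes the method as adding an L1/L2 mixed-norm penalty over the hidden-unit activation probabilities of the RBM to the negative log-likelihood loss, and reviewer e6d4's objection concerns precisely the gradient of that term. The sketch below is a hypothetical NumPy illustration of the non-overlapping-group case; the group size, the penalty weight, and the way the term is folded into a CD-1 update are assumptions, and the overlapping-group variant discussed in the paper is not shown.

```python
import numpy as np

def hidden_probs(v, W, c):
    """P(h_j = 1 | v) for a binary RBM with weights W and hidden biases c."""
    return 1.0 / (1.0 + np.exp(-(v @ W + c)))

def mixed_norm_penalty(H, groups, eps=1e-8):
    """L1/L2 mixed norm on hidden activation probabilities: the sum over
    non-overlapping groups of the L2 norm of each group, averaged over
    the mini-batch. Returns the penalty and its gradient w.r.t. H."""
    penalty, grad = 0.0, np.zeros_like(H)
    for g in groups:                              # g: indices of one group of hidden units
        norm = np.sqrt(np.sum(H[:, g] ** 2, axis=1, keepdims=True)) + eps
        penalty += norm.mean()
        grad[:, g] = H[:, g] / norm               # d ||H_g||_2 / d H_g
    return penalty, grad / H.shape[0]

# Toy usage: the penalty gradient, chained through the sigmoid, would be
# subtracted from the CD-1 updates of W and c (placement assumed here).
rng = np.random.default_rng(0)
v = rng.integers(0, 2, size=(32, 784)).astype(float)      # a mini-batch of binary inputs
W, c = 0.01 * rng.normal(size=(784, 500)), np.zeros(500)
H = hidden_probs(v, W, c)
groups = [np.arange(i, i + 5) for i in range(0, 500, 5)]  # assumed group size of 5
lam = 0.1                                                 # assumed penalty weight
penalty, dH = mixed_norm_penalty(H, groups)
dW_pen = lam * (v.T @ (dH * H * (1.0 - H)))               # chain rule through the sigmoid
dc_pen = lam * np.sum(dH * H * (1.0 - H), axis=0)
```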
l_PClqDdLb5Bp
Stochastic Pooling for Regularization of Deep Convolutional Neural Networks
[ "Matthew Zeiler", "Rob Fergus" ]
We introduce a simple and effective method for regularizing large convolutional neural networks. We replace the conventional deterministic pooling operations with a stochastic procedure, randomly picking the activation within each pooling region according to a multinomial distribution, given by the activities within the pooling region. The approach is hyper-parameter free and can be combined with other regularization approaches, such as dropout and data augmentation. We achieve state-of-the-art performance on four image datasets, relative to other approaches that do not utilize data augmentation.
[ "regularization", "data augmentation", "stochastic pooling", "simple", "effective", "conventional deterministic", "operations" ]
conferenceOral-iclr2013-conference
https://openreview.net/pdf?id=l_PClqDdLb5Bp
https://openreview.net/forum?id=l_PClqDdLb5Bp
ICLR.cc/2013/conference
2013
{ "note_id": [ "obPcCcSvhKovH", "BBmMrdZA5UBaz", "SPk0N0RlUTrqv", "WilRXfhv6jXxa", "OOBjrzG_LdOEf", "w0XswsNFad7Qu", "1toZvrIP-Xvme", "ZVb9LYU20iZhX", "lWJdCuzGuRlGF" ], "note_type": [ "review", "review", "review", "review", "review", "review", "review", "review", "review" ], "note_created": [ 1362369360000, 1362349140000, 1394470920000, 1361845800000, 1362085800000, 1394470920000, 1394470860000, 1362379980000, 1362101820000 ], "note_signatures": [ [ "Marc'Aurelio Ranzato" ], [ "Marc'Aurelio Ranzato" ], [ "anonymous reviewer f4a8" ], [ "anonymous reviewer 2b4c" ], [ "Ian Goodfellow" ], [ "anonymous reviewer f4a8" ], [ "anonymous reviewer f4a8" ], [ "anonymous reviewer f4a8" ], [ "anonymous reviewer cd07" ] ], "structured_content_str": [ "{\"review\": \"Another minor comment related to the visualization method: since there is no iterative 'inference' step typical of deconv. nets (the features are already given by a direct forward pass) then this method is perhaps more similar to this old paper of mine:\\nM. Ranzato, F.J. Huang, Y. Boureau, Y. LeCun, 'Unsupervised Learning of Invariant Feature Hierarchies with Applications to Object Recognition'. Proc. of Computer Vision and Pattern Recognition Conference (CVPR 2007), Minneapolis, 2007.\", \"http\": \"//www.cs.toronto.edu/~ranzato/publications/ranzato-cvpr07.pdf\\nThe only difference being the new pooling instead of max-pooling, the use of ReLU instead of tanh and the tying of the weights (filters optimized for feature extraction but used also for reconstruction).\\n\\nOverall, I think that even this visualization method constitutes a nice contribution of this paper.\"}", "{\"review\": [\"I really like this paper because:\", \"it is simple yet very effective and\", \"the empirical validation not only demonstrates the method but it also helps understanding where the gain comes from (tab. 5 was very useful to understand the regularization effect brought by the sampling noise).\"], \"i_also_found_intriguing_the_visualization_method\": \"using deconv. nets to reverse a trained conv. net; that's clever! Maybe that can become a killer app for deconv nets. Videos are also very nice.\\nHowever, I was wondering how did you invert the normalization layer?\"}", "{\"review\": \"I apologize for the delay in my reply.\", \"verdict\": \"weak accept.\"}", "{\"title\": \"review of Stochastic Pooling for Regularization of Deep Convolutional Neural\\n Networks\", \"review\": \"This paper introduces a new regularization technique based on inexpensive approximations to model averaging, similar to dropout. As with dropout, the training procedure involves stochasticity but the trained model uses a cheap approximation to the average over all possible models to make a prediction.\\n\\nThe paper includes empirical evidence that the model averaging effect is happening, and uses the method to improve on the state of the art for three datasets.\\n\\nThe method is simple and in principle, computationally inexpensive.\", \"two_criticisms_of_this_paper\": \"-The result on CIFAR-10 was not in fact state of the art at the time of submission; it was just slightly worse than Snoek et al's result using Bayesian hyperparameter optimization.\\n-I think it's worth mentioning that while this method is computationally inexpensive in principle, it is not necessarily easy to get a fast implementation in practice. 
i.e., people wishing to use this method must implement their own GPU kernel to do stochastic pooling, rather than using off-the-shelf implementations of convolution and basic tensor operations like indexing.\\n\\nOtherwise, I think this is an excellent paper. My colleagues and I have made a slow implementation of the method and used it to reproduce the authors' MNIST results. The method works as advertised and is easy to use.\"}", "{\"review\": \"I'm excited about this paper because it introduces another trick for cheap model averaging like dropout. It will be interesting to see if this kind of fast model averaging turns into a whole subfield.\\n\\nI recently got some very good results ( http://arxiv.org/abs/1302.4389 ) by using a model that works well with the kinds of approximations to model averaging that dropout makes. Presumably there are models that get the same kind of synergy with stochastic pooling. I think this is a very promising prospect, since stochastic pooling works so well even with just vanilla rectifier networks as the base model.\"}", "{\"review\": \"I apologize for the delay in my reply.\", \"verdict\": \"weak accept.\"}", "{\"review\": \"I apologize for the delay in my reply.\", \"verdict\": \"weak accept.\"}", "{\"title\": \"review of Stochastic Pooling for Regularization of Deep Convolutional Neural\\n Networks\", \"review\": \"Regularization methods are critical for the successful applications of\\nneural networks. This work introduces a new dropout-inspired\\nregularization method named stochastic pooling. The method is simple,\\napplicable applicable to convolutional neural networks with positive\\nnonlinearites, and achieves good performance on several tasks.\\n\\nA potentially severe issue is that the results are no longer state of\\nthe art, as maxout networks get better results. But this does not\\nstrongly suggest that stochastic pooling is inferior to maxout, since\\nthe methods are different and can therefore be combined, and, more\\nimportantly, maxout networks may have used a more thorough\\narchitecture and hyperparameter search, which would explain their\\nbetter performance.\\n\\nThe main problem with the paper is that the experiments are lacking in\\nthat there is no proper comparison to dropout. While the results on\\nCIFAR-10 are compared to the original dropout paper and result in an\\nimprovement, the paper does not report results for the remainder of\\nthe datasets with dropout and with the same architecture (if the\\narchitecture is not the same in all experiments, then performance\\ndifferences could be caused by architecture differences). It is thus\\npossible that dropout would achieve nearly identical performance on\\nthese tasks if given the same architecture on MNIST, CIFAR-100, and\\nSVHN. What's more, when properly tweaked, dropout outperforms the\\nresults reported here on CIFAR-10 as reported in Snoek et al. [A] (sub\\n15% test error); and it is conceivable that Bayesian-optimized\\nstochastic pooling would achieve worse results.\\n \\nIn addition to dropout, it is also interesting to compare to dropout\\nthat occurs before max-pooling. This kind of dropout bears more\\nresemblance to stochastic pooling, and may achieve results that are\\nsimilar (or better -- it cannot be ruled out).\\n\\nFinally, a minor point. The paper emphasizes the fact that stochastic\\npooling averages 4^N models while dropout averages 2^N models, where N\\nis the number of units. 
While true, this is not relevant, since both\\nquantities are vast, and the performance differences between the two \\nmethods will stem from other sources. \\n\\nTo conclude, the paper presented an interesting and elegant technique\\nfor preventing overfitting that may become widely used. However, this\\npaper does not convincingly demonstrate its superiority over dropout. \\n\\nReferences \\n----------\\n[A] Snoek, J. and Larochelle, H. and Adams, R.P., Practical Bayesian\\nOptimization of Machine Learning Algorithms, NIPS 2012\"}", "{\"title\": \"review of Stochastic Pooling for Regularization of Deep Convolutional Neural\\n Networks\", \"review\": \"The authors introduce a stochastic pooling method in the context of\\nconvolutional neural networks, which replaces the traditionally used\\naverage or max pooling operators. In the stochastic pooling a\\nmultinomial distribution is created from input activations and used to\\nselect the index of the activation to pass to the next layer of the\\nnetwork. On first read, this method resembled that of 'probabilistic max\\npooling' by Lee et. al in 'Convolutional Deep Belief Networks for\\nScalable Unsupervised Learning of Hierarchical Representations',\\nhowever the context and execution are different.\\n\\nDuring testing, the authors employ a separate pooling function that is a\\nweighted sum of the input activations and their corresponding\\nprobabilities that would be used for index selection during training.\\nThis pooling operator is speculated to work as a form of\\nregularization through model averaging. The authors substantiate this claim with results averaging multiple samples at each pool of the stochastic architectures and visualizations of images obtained from\\nreconstructions using deconvolutional networks.\\n\\nMoreover, test set accuracies for this method are given for four\\nrelevant datasets where it appears stochastic pooling CNNs are able to\\nachieve the best known performance on three. A good amount of detail\\nhas been provided allowing the reader to reproduce the results.\\n\\nAs the sampling scheme proposed may be combined with other regularization techniques, it will be exciting to see how multiple forms of regularization can contribute or degrade test accuracies.\", \"some_minor_comments_follow\": \"- Mini-batch size for training is not mentioned.\\n\\n- Fig. 2 could be clearer on first read, e.g. if boxes were drawn\\naround (a,b,c), (e,f), and (g,h) to indicate they are operations on\\nthe same dataset.\\n\\n- In Section 4.2 it is noted that stochastic pooling avoids\\nover-fitting unlike averaging and max pooling, however in Fig. 3 it\\ncertainly appears that the average and max techniques are not severely\\nover-fitting as in the typical network training case (with noticeable\\ndegradation in test set performance). However, the network does train\\nto near zero error on the training set. It may be more accurate to state\\nthat stochastic pooling promotes better generalization yet additional training epochs may make the over-fitting argument clearer.\\n\\n- Fig. 3 also suggests that additional training may improve the final\\nreported test set error in the case of stochastic pooling. The\\nreference to state-of-the-art performance on CIFAR-10 is no longer\\ncurrent.\\n\\n- Section 4.8, sp 'proabilities'\"}" ] }
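All three reviews turn on the pooling rule stated in the abstract: during training, each pooling region's output is an activation sampled with probability proportional to its (non-negative) value, and at test time the region is replaced by its probability-weighted sum as the cheap approximation to model averaging. Below is a minimal NumPy sketch of that rule; the 2x2 region size, the handling of all-zero regions, and the toy feature map are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def stochastic_pool(region, train=True, rng=None):
    """Stochastic pooling of one region of non-negative activations
    (e.g., ReLU outputs). Training: sample one activation with probability
    proportional to its value. Test: probability-weighted sum."""
    a = np.asarray(region, dtype=float).ravel()
    s = a.sum()
    if s <= 0:                                    # all-zero region (assumed convention)
        return 0.0
    p = a / s                                     # multinomial over the region
    if train:
        rng = rng if rng is not None else np.random.default_rng()
        return float(a[rng.choice(a.size, p=p)])
    return float(np.sum(p * a))

def stochastic_pool_map(fmap, size=2, train=True, rng=None):
    """Non-overlapping size x size stochastic pooling of a 2-D feature map."""
    H, W = fmap.shape
    out = np.empty((H // size, W // size))
    for i in range(0, (H // size) * size, size):
        for j in range(0, (W // size) * size, size):
            out[i // size, j // size] = stochastic_pool(
                fmap[i:i + size, j:j + size], train=train, rng=rng)
    return out

# Toy usage on a ReLU-rectified feature map.
rng = np.random.default_rng(0)
fmap = np.maximum(rng.normal(size=(8, 8)), 0.0)
print(stochastic_pool_map(fmap, train=True, rng=rng))   # noisy training-time pass
print(stochastic_pool_map(fmap, train=False))           # deterministic test-time pass
```

The training-time sampling is the source of the regularization noise the reviewers discuss, while the test-time weighting is the single-pass approximation to averaging over the exponentially many pooled networks.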
idpCdOWtqXd60
Efficient Estimation of Word Representations in Vector Space
[ "Tomas Mikolov", "Kai Chen", "Greg Corrado", "Jeffrey Dean" ]
We propose two novel model architectures for computing continuous vector representations of words from very large data sets. The quality of these representations is measured in a word similarity task, and the results are compared to the previously best performing techniques based on different types of neural networks. We observe large improvements in accuracy at much lower computational cost, i.e. it takes less than a day on a single CPU to derive high-quality 300-dimensional vectors for a one-million-word vocabulary from a 1.6-billion-word data set. Furthermore, we show that these vectors provide state-of-the-art performance on our test set for measuring various types of word similarities. We intend to publish this test set to be used by the research community.
[ "word representations", "vectors", "efficient estimation", "vector space", "novel model architectures", "continuous vector representations", "words", "large data sets", "quality" ]
conferencePoster-iclr2013-workshop
https://openreview.net/pdf?id=idpCdOWtqXd60
https://openreview.net/forum?id=idpCdOWtqXd60
ICLR.cc/2013/conference
2013
{ "note_id": [ "ELp1azAY4uaYz", "bf2Dnm5t9Ubqe", "6NMO6i-9pXN8q", "OOksUbLar_UGE", "3Ms_MCOhFG34r", "ddu0ScgIDPSxi", "QDmFD7aPnX1h7", "sJxHJpdSKIJNL", "mmlAm0ZawBraS", "qX8Cq3hI2EXpf", "C8Vn84fqSG8qa" ], "note_type": [ "review", "review", "review", "review", "review", "review", "review", "review", "review", "review", "review" ], "note_created": [ 1362415140000, 1360865940000, 1363602720000, 1363350360000, 1368188160000, 1363279380000, 1360857420000, 1363326840000, 1363279380000, 1363279380000, 1362716940000 ], "note_signatures": [ [ "anonymous reviewer 3c5e" ], [ "anonymous reviewer f5bf" ], [ "anonymous reviewer 3c5e" ], [ "anonymous reviewer 13e8" ], [ "Pontus Stenetorp" ], [ "anonymous reviewer f5bf" ], [ "anonymous reviewer 13e8" ], [ "anonymous reviewer f5bf" ], [ "anonymous reviewer f5bf" ], [ "anonymous reviewer f5bf" ], [ "Tomas Mikolov" ] ], "structured_content_str": [ "{\"title\": \"review of Efficient Estimation of Word Representations in Vector Space\", \"review\": \"This paper introduces a linear word vector learning model and shows that it performs better on a linear evaluation task than nonlinear models. While the new evaluation experiment is interesting the paper has too many issues in its current form.\\n\\nOne problem that has already been pointed out by the other reviewers is the lack of comparison and proper acknowledgment of previous models. The log-linear models have already been introduced by Mnih et al. and the averaging of context vectors (though of a larger context) has already been introduced by Huang et al. Both are cited but the model similarity is not mentioned.\\n\\nThe other main problem is that the evaluation metric clearly favors linear models since it checks for linear relationships. While it is an interesting finding that this holds for any of the models, this phenomenon does not necessarily need to lead to better performance. Other non-linear models may have encoded all this information too but not on a linear manifold. The whole new evaluation metric is just showing that linear models have more linear relationships. If this was combined with some performance increase on a real task then non-linearity for word vectors would have been convincingly questioned.\\nDo these relationships hold for even simpler models like LSA or tf-idf vectors?\", \"introduction\": \"Many very broad and general statements are made without any citations to back them up. \\nThe motivation talks about how simpler bag of words models are not sufficient anymore to make significant progress... and then the rest of the paper introduces a simpler bag of words model and argues that it's better. The intro and the first paragraph of section 3 directly contradict themselves.\\n\\nThe other motivation that is mentioned is how useful for actual tasks word vectors can be. I agree but this is not shown. This paper would have been significantly stronger if the vectors from the proposed (not so new) model would have been compared on any of the standard evaluation metrics that have been used for these words. For instance: Turian et al used NER, Huang et al used human similarity judgments, the author himself used them for language modeling. Why not show improvements on any of these tasks? 
\\n\\nLDA and LSA are missing citations.\\n\\nCitation [14] which is in submission seems an important paper to back up some of the unsubstantiated claims of this paper but is not available.\\n\\nThe hidden layer in Collobert et al's word vectors is usually around 100, not between 500 to 1000 as the authors write.\\n\\nSection 2.2 is impossible to follow for people not familiar with this line of work.\", \"section_4\": \"Why cosine distance? A comparison with Euclidean distance would be interesting, or should all word vectors be length-normalized?\\n\\nThe problem with synonyms in the evaluation seems somewhat important but is ignored.\\n\\nThe authors claim that their evaluation metric 'should be positively correlated with' 'certain applications'. That's yet another unsubstantiated claim that could be made much stronger with showing such a correlation on the above mentioned tasks.\\n\\nMnih is misspelled in table 3.\\n\\nThe comparisons are lacking consistency. All the models are trained on different corpora and have different dimensionality. Looking at the top 3 previous models (Mikolov 2x and Huang) there seems to be a clear correlation between vector size and overall performance. If one wants to make a convincing argument that the presented models are better, it would be important to show that using the same corpus.\\n\\nGiven that the overall accuracy is around 50%, the examples in table 5 must have been manually selected? If not, it would be great to know how they were selected.\"}", "{\"title\": \"review of Efficient Estimation of Word Representations in Vector Space\", \"review\": [\"The paper studies the problem of learning vector representations for words based on large text corpora using 'neural language models' (NLMs). These models learn a feature vector for each word in such a way, that the feature vector of the current word in a document can be predicted from the feature vectors of the words that precede (and/or succeed) that word. Whilst a number of studies have developed techniques to make the training of NLMs more efficient, scaling NLMs up to today's multi-billion-words text corpora is still a challenge.\", \"The main contribution of the paper comprises two new NLM architectures that facilitate training on massive data sets. The first model, CBOW, is essentially a standard feed-forward NLM without the intermediate projection layer (but with weight sharing + averaging before applying the non-linearity in the hidden layer). The second model, skip-gram, comprises a collection of simple feed-forward nets that predict the presence of a preceding or succeeding word from the current word. The models are trained on a massive Google News corpus, and tested on a semantic and syntactic question-answering task. The results of these experiments look promising.\", \"Whilst I think this line of research is interesting and the presented results look promising, I do have three main concerns with this paper:\", \"(1) The choice for the proposed models (CBOW and skip-gram) are not clearly motivated. The authors' only motivation appears to be for computational reasons. However, the experiments do not convincingly show that this indeed leads to performance improvements on the task at hand. In particular, the 'vanilla' NLM implementation of the authors actually gives the best performance on the syntactic question-answering task. 
Faster training speed is mainly useful when you can throw more data at the model, and the model can effectively learn from this new data (as the authors argue themselves in the introduction). The experiments do not convincingly show that this happens. In addition, the comparisons with the models by Collobert-Weston, Turian, Mnih, Mikolov, and Huang appear to be unfair: these models were trained on much smaller corpora. A fair experiment would re-train the models on the same data to show that they learn slower (which is the authors' hypothesis), e.g., by showing learning curves or by showing a graph that shows performance as a function of training time.\", \"(2) The description of the models that are developed is very minimal, making it hard to determine how different they are from, e.g., the models presented in [15]. It would be very helpful if the authors included some graphical representations and/or more mathematical details of their models. Given that the authors still almost have one page left, and that they use a lot of space for the (frankly, somewhat superfluous) equations for the number of parameters of each model, this should not be a problem.\", \"(3) Throughout the paper, the authors assume that the computational complexity of learning is proportional to the number of parameters in the model. However, their experimental results show that this assumption is incorrect: doubling the number of parameters in the CBOW and skip-gram models only leads to a very modest increase in training time (see Table 4).\", \"Detailed comments\", \"===============\", \"The paper contains numerous typos and small errors. Specifically, I noticed a lot of missing articles throughout the paper.\", \"'For many tasks, the amount of \\u2026 focus on more advanced techniques.' -> This appears to be a contradiction. If speech recognition performance is largely governed by the amount of data we have, than simply scaling up the basic techniques should help a lot!\", \"'solutions were proposed for avoiding it' -> For avoiding what? Computation of the full output distribution over words of length V?\", \"'multiple degrees of similarities' -> What is meant by degrees here? Different dimensions of similarity? (For instance, Fiat is like Ferrari because they're both Italian but unlike Ferrari because it's not a sports car.) Or different strengths of the similarity? (For instance, Denmark is more like Germany than like Spain.) What about the fact that semantic similarities are intransitive? (Tversky's famous example of the similarity between China and North Korea.)\", \"'Moreover, we discuss hyper-parameter selection \\u2026 millions of words in the vocabulary.' -> I fail to see the relation between hyperparameter selection and training speed. Moreover, the paper actually does not say anything about hyperparameter selection! It only states the initial learning rate is 0.025, and that is linearly decreased (but not how fast).\", \"Table 2: It appears that the performance of the CBOW model is still improving. How does it perform when D = 1000 or 2000? Why not make a learning curve here (plot performance as a function of D or of training time)?\", \"Table 3: Why is 'our NNLM' so much better than the other NNLMs? Just because it was trained on more data? What model is implemented by 'our NNLM' anyway?\", \"Tables 3 and 4: Why is the NNLM trained on 6 billion examples and the others on just 0.7 or 1.6 billion examples? 
The others should be faster, so easier to train on more data, right?\", \"It would be interesting if the authors could say something about how these models deal with intransitive semantic similarities, e.g., with the similarities between 'river', 'bank', and 'bailout'. People like Tversky have advocated against the use of semantic-space models like NLMs because they cannot appropriately model intransitive similarities.\", \"Instead of looking at binary question-answering performance, it may also be interesting to look whether a hitlist of answers contains the correct answer.\", \"The number of self-citations seems somewhat excessive.\", \"I tried to find reference [14] to see how it differs from the present paper, but I was not able to find it anywhere.\"]}", "{\"review\": \"It is really unfortunate that the responding author seems to care\\nsolely about every possible tweak to his model and combinations of his\\nmodels but shows a strong disregard for a proper scientific comparison\\nthat would show what's really the underlying reason for the increase\\nin accuracy on (again) his own new task. For all we know, some of the\\nword vectors and models that are being compared to in table 4 may have\\nbeen trained on datasets that didn't even include the terms used in\\nthe evaluation, or they may have been very rare in that corpus.\\nThe models compared in table 4 still all have different word vector\\nsizes and are trained on different datasets, despite the clear\\nimportance of word vector size and dataset size. Maybe the\\nhierarchical softmax on any of the existing models trained on the same\\ndataset would yield the same performance? There's no way of knowing if\\nthis paper introduced a new model that works better or just a new\\ntraining dataset (which won't be published) or just a well selected\\ncombination of existing methods.\\n\\nThe authors write that there are many obvious real tasks that their\\nword vectors should help but don't show or mention any. NER has been\\nused to compare word vectors and there are standard datasets out there\\nfor a comparison on which many people train and test. There are human\\nsimilarity judgments tasks and datasets that several word vectors have\\nbeen compared on. Again, the author seems to prefer to ignore all but\\nhis own models, dataset and tasks. It is still not clear to me what\\npart of the model gives the performance increase. Is it the top layer\\ntask or is it the averaging of word vectors. Again, averaging word\\nvectors has already been done as part of the model of Huang et al.. 
A\\nlink to a wikipedia article by the author is not as strong as an\\nargument as showing equations that point to the actual difference.\\n\\nAfter a discussion among the reviewers, we unanimously feel that the revised version of paper and the accompanying rebuttal do not resolve many of the issues raised by the reviewers, and many of the reviewers' questions (e.g., on which models include nonlinearities) remain unanswered.\\nFor instance, they say that the projection layer in a NNLM has no\\nnonlinearity but that was not the point, the next layer has one and\\nfrom the fuzzy definitions it seems like the proposed model does not.\\nDoes that mean we could just get rid of the non-linearity of the\\nvector averaging part of Huang's model and get the same performance?\\nLDA might be in fashion now but papers in high quality conferences are\\nsupposed to be understood in the future as well when some models may\\nnot be so obviously known anymore.\\n\\nThe figure is much less clear in describing the model than the\\nequations all three reviewers asked for.\\n\\nAgain, there is one interesting bit in here which is the new\\nevaluation metric (which may or may not be introduced in reference\\n[14] soon) and the fact that any of these models capture these\\nrelationships linearly. Unfortunately, the entire comparison to\\nprevious work (table 4 and the writing) is unscientific and sloppy.\\nFurthermore, the possibly new models are not clearly enough defined by\\ntheir equations.\\n\\nIt is generally unclear where the improvements are coming from.\\nWe hope the authors will clean up the writing and include proper\\ncomparisons for a future submission.\"}", "{\"review\": \"In light of the authors' response I'm changing my score for the paper to Weak Reject.\"}", "{\"review\": \"In response to the request for references made by the first author for the statement regarding semantic similarity being intransitive, I think the reference should be to 'Features of similarity' by Tversky (1977). Please find what I believe to be the relevant portion below.\\n\\n `We say 'the portrait resembles the person' rather than 'the person resembles the portrait.' We say 'the son resembles the father' rather than 'the father resembles the son.' We say 'an ellipse is like a circle,' not 'a circle is like an ellipse,' and we say 'North Korea is like Red China' rather than 'Red China is like North Korea.''\\n\\nLastly, a question that was raised by the reviewers was whether these relationships also hold for LSA or tf-idf vectors to which the first author responded that this has already been discussed in another paper and it turned out not to be the case. I would be very thankful for a reference to this work since I am not familiar with it.\"}", "{\"review\": \"The revision and rebuttal failed to address the issues raised by the reviewers. I do not think the paper should be accepted in its current form.\", \"quality_rating\": \"Strong reject\", \"confidence\": \"Reviewer is knowledgeable\"}", "{\"title\": \"review of Efficient Estimation of Word Representations in Vector Space\", \"review\": \"The authors propose two log-linear language models for learning real-valued vector representations of words. The models are designed to be simple and fast and are shown to be scalable to very large datasets. 
The resulting word embeddings are evaluated on a number of novel word similarity tasks, on which they perform at least as well as the embeddings obtained using a much slower neural language model.\\n\\nThe paper is mostly clear and well executed. Its main contributions are a demonstration of scalability of the proposed models and a sensible protocol for evaluating word similarity information captured by such embeddings. The experimental section is convincing.\\n\\nThe log-linear language models proposed are not quite as novel or uniquely scalable as the paper seems to imply though. Models of this type were introduced in [R1] and further developed in [15] and [R2]. The idea of speeding up such models by eliminating matrix multiplication when combining the representations of context words was already implemented in [15] and [R2]. For example, the training complexity of the log-linear HLBL model from [15] is the same as that of the Continuous Bag-of-Words models. The authors should explain how the proposed log-linear models relate to the existing ones and in what ways they are superior. Note that Table 3 does contain a result obtained by an existing log-bilinear model, HLBL, which according to [18] was the model used to produce the 'Mhih NNLM' embeddings. These embeddings seem to perform considerably better then the 'Turian NNLM' embeddings obtained with a nonlinear NNLM on the same dataset, though of course not as well as the embeddings induced on much larger datasets. This result actually strengthens the authors argument for using log-linear models by suggesting that even if one could train a slow nonlinear model on the same amount of data it might not be worth it as it will not necessarily produce superior word representations.\\n\\nThe discussion of techniques for speeding up training of neural language models is incomplete, as the authors do not mention sampling-based approaches such as importance sampling [R3] and noise-contrastive estimation [R2].\\n\\nThe paper is unclear about the objective used for model selection. Was it a language-modeling objective (e.g. perplexity) or accuracy on the word similarity tasks?\\n\\nIn the interests of precision, it would be good to include the equations defining the models in the paper.\\n\\nIn Section 3, it might be clearer to say that the models are trained to 'predict' words, not 'classify' them.\\n\\nFinally, in Table 3 'Mhih NNLM' should probably read 'Mnih NNLM'.\", \"references\": \"[R1] Mnih, A., & Hinton G. (2007). Three new graphical models for statistical language modelling. ICML 2007.\\n[R2] Mnih, A., & Teh, Y. W. (2012). A fast and simple algorithm for training neural probabilistic language models. ICML 2012.\\n[R3] Bengio, Y., & Senecal, J. S. (2008). Adaptive importance sampling to accelerate training of a neural probabilistic language model. IEEE Transactions on Neural Networks, 19(4), 713-722.\"}", "{\"review\": \"The revision and rebuttal failed to address the issues raised by the reviewers. I do not think the paper should be accepted in its current form.\", \"quality_rating\": \"Strong reject\", \"confidence\": \"Reviewer is knowledgeable\"}", "{\"review\": \"The revision and rebuttal failed to address the issues raised by the reviewers. I do not think the paper should be accepted in its current form.\", \"quality_rating\": \"Strong reject\", \"confidence\": \"Reviewer is knowledgeable\"}", "{\"review\": \"The revision and rebuttal failed to address the issues raised by the reviewers. 
I do not think the paper should be accepted in its current form.\", \"quality_rating\": \"Strong reject\", \"confidence\": \"Reviewer is knowledgeable\"}", "{\"review\": [\"We have updated the paper (new version will be visible on Monday):\", \"added new results with comparison of models trained on the same data with the same dimensionality of the word vectors\", \"additional comparison on a task that was used previously for comparison of word vectors\", \"added citations, more discussion about the prior work\", \"new results with parallel training of the models on many machines\", \"new state of the art result on Microsoft Research Sentence Completion Challenge, using combination of RNNLMs and Skip-gram\", \"published the test set\", \"We welcome discussion about the paper. The main contribution (that seems to have been missed by some of the reviews) is that we can use very shallow models to compute good vector representation of words. This can be very efficient, compared to currently popular model architectures.\", \"As we are very interested in the deep learning, we are also interested in how this term is being used. Unfortunately, there is an increasing amount of work that tries to associate itself with deep learning, although it has nothing to do with it. According to Bengio+LeCun's paper 'Scaling learning algorithms towards AI', deep architectures should be capable of representing and learning complex functions, composed of simpler functions. The complex functions at the same time cannot be efficiently represented and learned by shallow architectures (basically those that have only 1 or 0 non-linearities). Thus, any paper that claims to be about 'deep learning' should first prove that the given performance cannot be achieved with a shallow model. This has been already shown for deep neural networks for speech recognition and vision problems (one hidden layer is not enough to reach the same performance that more hidden layers can achieve). However, when it comes to NLP, the only such result known to me are the Recurrent neural networks, that have been shown to outperform shallow feed-forward networks on some tasks in language modeling.\", \"When it comes to learning continuous representations of words, such thorough comparison is missing. In our current paper, we actually show that there might be nothing deep about the continuous word vectors - one cannot simply add few hidden layers and label some technique 'deep' to gain attraction. Correct comparison with shallow techniques is necessary.\", \"Hopefully, our paper will improve common understanding of what deep learning is about, and will help to keep the track towards the original goals. We did not write our opinion directly in the paper, as we believe it belongs more to the discussion part of the conference, where people can react to our claims.\"], \"detailed_responses_are_below\": \"\", \"reviewer_anonymous_13e8\": \"The log-linear language models proposed are not quite as novel or uniquely scalable as the paper seems to imply though. Models of this type were introduced in [R1] and further developed in [15] and [R2].\\n\\n- We have added the citations and some discussion; however, note that we directly follow model architecture proposed earlier, in 'T. Mikolov. Language Modeling for Speech Recognition in Czech, Masters thesis, Brno University of Technology, 2007.', plus the hierarchical softmax proposed in 'F. Morin, Y. Bengio. Hierarchical Probabilistic Neural Network Language Model. 
AISTATS, 2005.'; the novelty of our current approach is in the new architectures that work significantly better than the previous ones (we have added this comparison in the new version of the paper), and the Huffman tree based hierarchical softmax.\\n\\n\\nFor example, the training complexity of the log-linear HLBL model from [15] is the same as that of the Continuous Bag-of-Words models\\n\\n- Assuming one will use diagonal weight matrices as is mentioned in [15], the computational complexity will be similar. We have added this information to the paper. Our proposed architectures are however easier to implementation than HLBL, and also it does not seem that we would obtain better vectors with HLBL (just by looking at the table with results - HLBL seems to have performance close to NNLM, ie. does not capture the semantic regularities in words as well as the Skip-gram). Moreover, I was confused about computational complexity of the hierarchical log-bilinear model: in [R2], it is reported that training time for model with 100 hidden units on the Penn Treebank setup is 1.5 hours; for our CBOW model it is a few seconds. So I don't know if author uses the diagonal weight matrices always or not.\\n\\nAdditionally, the perplexity results in [R2] are rather weak, even worse than simple trigram model. My explanation of HLBL performance is this: the model does not have non-linearities, thus, it cannot model N-grams. An example of such feature is 'if word X and Y occurred after each other, predict word Z'; the linear model can only represent features such as 'X predicts Z, Y predicts Z'. This means that the HLBL language model will probably not scale up well to large data sets, as it can model only patterns such as bigram, skip-1-bigram, skip-2-bigram etc. (and will thus behave slightly as a cache model, and will improve with longer context - which was actually observed in [R1] and [15]). Also note that the comparison in [R1] with NNLM is flawed, as the result from (Bengio, 2003) is from model that was small and not fully trained (due to computational complexity).\\n\\nTo conclude, the HLBL is very interesting model by itself, but we have chosen simpler architecture that follows our earlier work that aims to solve simpler problem - we do not try to learn a language model, just the word vectors. Detailed discussion about HLBL is out of scope of our current paper.\\n\\n\\nThe discussion of techniques for speeding up training of neural language models is incomplete, as the authors do not mention sampling-based approaches such as importance sampling [R3] and noise-contrastive estimation [R2].\\n\\n- As our paper is already quite long, we do not plan to discuss speedup techniques that we did not use in our work. It can be a topic for future work.\\n\\n\\nThe paper is unclear about the objective used for model selection. Was it a language-modeling objective (e.g. perplexity) or accuracy on the word similarity tasks?\\n\\n- The cost function that we try to minimize during training is the usual one (cross-entropy), however we choose the best models based on the performance on the word similarity task.\\n\\n\\nIn the interests of precision, it would be good to include the equations defining the models in the paper.\\n\\n- Unfortunately the paper is already too long, so we just refer to prior work where similar models are properly defined. 
If we will extend the paper in the future, we will add the equations.\", \"reviewer_anonymous_f5bf\": \"Concern (1): We added a table with comparison of models trained on the same data. The results strongly support our previous claims (we had some of these results already before the first version of the paper was submitted, but due to lack of time these did not appear in the paper).\\n\\n(2): We added a Figure that illustrates the topology of the models, and kept the equations as we consider them important.\\n\\n(3): No, see the equations and Table 4.\\n\\n\\nThe paper contains numerous typos and small errors. Specifically, I noticed a lot of missing articles throughout the paper.\\n\\n- We hope that small errors and missing articles are not the most important issue in research papers.\\n\\n\\n'For many tasks, the amount of \\u2026 focus on more advanced techniques.'\\n\\n- The introduction was updated.\\n\\n\\nWhat about the fact that semantic similarities are intransitive? (Tversky's famous example of the similarity between China and North Korea.)\\n\\n- We are not aware of famous example of Tversky. Please provide reference.\\n\\n\\n'Moreover, we discuss hyper-parameter selection \\u2026 millions of words in the vocabulary.' -> I fail to see the relation between hyperparameter selection and training speed. Moreover, the paper actually does not say anything about hyperparameter selection! It only states the initial learning rate is 0.025, and that is linearly decreased (but not how fast).\\n\\n- Note that structure and size of the model is also hyper-parameter, as well as fraction of used training data; it is not just the learning rate. However, we simplified the text in the paper.\", \"table_2\": \"It appears that the performance of the CBOW model is still improving. How does it perform when D = 1000 or 2000? Why not make a learning curve here (plot performance as a function of D or of training time)?\\n\\n- That is an interesting experiment that we actually performed, but it would not fit into the paper.\", \"table_3\": \"Why is 'our NNLM' so much better than the other NNLMs? Just because it was trained on more data? What model is implemented by 'our NNLM' anyway?\\n\\n- Because it was trained in parallel using hundreds of CPUs. It is a feedforward NNLM.\", \"tables_3_and_4\": \"Why is the NNLM trained on 6 billion examples and the others on just 0.7 or 1.6 billion examples? The others should be faster, so easier to train on more data, right?\\n\\n- We did not have these numbers during submission of the paper, but these results were added to the actual version of the paper. The new model architectures are faster for training than NNLM, and provide better results in our word similarity tasks.\\n\\n\\nIt would be interesting if the authors could say something about how these models deal with intransitive semantic similarities, e.g., with the similarities between 'river', 'bank', and 'bailout'. People like Tversky have advocated against the use of semantic-space models like NLMs because they cannot appropriately model intransitive similarities.\\n\\n- We are not aware of Tversky's arguments.\\n\\n\\nInstead of looking at binary question-answering performance, it may also be interesting to look whether a hitlist of answers contains the correct answer.\\n\\n- We performed this experiment as well; of course, top-5 accuracy is much better than top-1. 
However, it would be confusing to add these results into the paper (too many numbers).\\n\\n\\nThe number of self-citations seems somewhat excessive.\\n\\n- We added more citations.\\n\\n\\nI tried to find reference [14] to see how it differs from the present paper, but I was not able to find it anywhere.\\n\\n- This paper should become available on-line soon.\", \"reviewer_anonymous_3c5e\": \"One problem that has already been pointed out by the other reviewers is the lack of comparison and proper acknowledgment of previous models. The log-linear models have already been introduced by Mnih et al. and the averaging of context vectors (though of a larger context) has already been introduced by Huang et al. Both are cited but the model similarity is not mentioned.\\n\\n- As we explained earlier, we followed our own work that was published before these papers. We aim to learn word vectors, not language models. Note also that log-linear models and the bag-of-words representation are both very general and well known concepts, not unique to neural network language modeling. Also, Mnih introduced log-bilinear language model, not log-linear models - please read: http://en.wikipedia.org/wiki/Log-linear_model\", \"and_http\": \"//en.wikipedia.org/wiki/Bag-of-words_model\\n\\n\\nThe other main problem is that the evaluation metric clearly favors linear models since it checks for linear relationships. While it is an interesting finding that this holds for any of the models, this phenomenon does not necessarily need to lead to better performance. Other non-linear models may have encoded all this information too but not on a linear manifold. The whole new evaluation metric is just showing that linear models have more linear relationships. If this was combined with some performance increase on a real task then non-linearity for word vectors would have been convincingly questioned.\\n\\n- Note that projection layer in NNLM also does not have any non-linearity; Mnih's HLBL model does not have any non-linearity even in the hidden layer. We added more results in the paper, however can you be more specific what 'real task' means? The tasks we used are perfectly valid for a wide range of applications.\\n\\n\\nDo these relationships hold for even simpler models like LSA or tf-idf vectors?\\n\\n- This is discussed in another paper. In general, linear operations do not work well for LSA vectors.\\n\\n\\nMany very broad and general statements are made without any citations to back them up. \\n\\n- Please be more specific.\\n\\n\\nThe motivation talks about how simpler bag of words models are not sufficient anymore to make significant progress... and then the rest of the paper introduces a simpler bag of words model and argues that it's better. The intro and the first paragraph of section 3 directly contradict themselves.\\n\\n- This part of the paper was rewritten. However, N-gram models are mentioned in the introduction; not bag-of-words models. Also note that the paper is about computationally efficient continuous representations of words. We do not introduce simple bag of words model, but log-linear model with distributed representations of bag-of-words features (in case of CBOW model).\\n\\n\\nThe other motivation that is mentioned is how useful for actual tasks word vectors can be. I agree but this is not shown. 
This paper would have been significantly stronger if the vectors from the proposed (not so new) model would have been compared on any of the standard evaluation metrics that have been used for these words. For instance: Turian et al used NER, Huang et al used human similarity judgments, the author himself used them for language modeling. Why not show improvements on any of these tasks? \\n\\n- We believe that our task is very interesting by itself. The applications are very straightforward.\\n\\n\\nLDA and LSA are missing citations.\\n\\n- We are not using LDA nor LSA in our paper. Moreover, these concepts are generally very well known.\\n\\n\\nThe hidden layer in Collobert et al's word vectors is usually around 100, not between 500 to 1000 as the authors write.\\n\\n- We do not claim that hidden layer in Collobert et al's word vectors is usually between 500-1000. We actually point out that 50 and 100-dimensional word vectors have insufficient capacity, and the same holds for size of the hidden layer. The 500 - 2000 dimensional hidden layers are mentioned for NNLMs. We also provide reference to our prior paper that shows empirically that you have to use more than 100 neurons in the hidden layer, unless your training data is tiny ('Strategies for training large scale neural network language models').\\n\\n\\nSection 2.2 is impossible to follow for people not familiar with this line of work.\\n\\n- This section is not crucial for understanding of the paper. However, if you are interested in this part, we provided several references for that work.\\n\\n\\nWhy cosine distance? A comparison with Euclidean distance would be interesting, or should all word vectors be length-normalized?\\n\\n- We use normalized word vectors. Empirically, this works better.\\n\\n\\nThe authors claim that their evaluation metric 'should be positively correlated with' 'certain applications'. That's yet another unsubstantiated claim that could be made much stronger with showing such a correlation on the above mentioned tasks.\\n\\n- While we have also results on another tasks, the point of this paper is not to describe all possible applications, but to introduce techniques for efficient estimation of word vectors from large amounts of data.\\n\\n\\nThe comparisons are lacking consistency. All the models are trained on different corpora and have different dimensionality. Looking at the top 3 previous models (Mikolov 2x and Huang) there seems to be a clear correlation between vector size and overall performance. If one wants to make a convincing argument that the presented models are better, it would be important to show that using the same corpus.\\n\\n- Such comparison was added to the new version of the paper.\\n\\n\\nGiven that the overall accuracy is around 50%, the examples in table 5 must have been manually selected? If not, it would be great to know how they were selected.\\n\\n- Maybe this will sound surprising, but examples in Table 5 have accuracy only about 60%. We did choose several easy examples from our Semantic-Syntactic test set (so that it would be easy to judge correctness for the readers), and some manually by trying out what the vectors can represent. Note that we did not simply hand-pick the best examples; this is the real performance.\"}" ] }
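The responses in this record lean heavily on an analogy-style evaluation of word vectors: questions of the form "a is to b as c is to ?", answered by vector arithmetic and cosine similarity over length-normalized vectors. As a rough illustration of that protocol only, and not the authors' implementation, the sketch below uses a tiny hand-made `vectors` dictionary in place of trained CBOW or Skip-gram embeddings; all of its numbers are invented.

```python
import numpy as np

# Tiny stand-in for trained CBOW / Skip-gram embeddings; all values are invented.
vectors = {
    "king":  np.array([0.60, 0.92, 0.10]),
    "man":   np.array([0.55, 0.20, 0.05]),
    "woman": np.array([0.50, 0.25, 0.55]),
    "queen": np.array([0.58, 0.95, 0.62]),
    "apple": np.array([0.10, 0.05, 0.90]),
}
# Length-normalize, as the responses above say is done before comparing vectors.
vectors = {w: v / np.linalg.norm(v) for w, v in vectors.items()}

def analogy(a, b, c, vocab):
    """Answer 'a is to b as c is to ?' via cosine similarity to (b - a + c)."""
    target = vocab[b] - vocab[a] + vocab[c]
    target /= np.linalg.norm(target)
    # Exclude the three query words, return the most similar remaining word.
    scores = {w: float(v @ target) for w, v in vocab.items() if w not in (a, b, c)}
    return max(scores, key=scores.get)

print(analogy("man", "king", "woman", vectors))  # expected: "queen"
```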
zzy0H3ZbWiHsS
Audio Artist Identification by Deep Neural Network
[ "胡振", "Kun Fu", "Changshui Zhang" ]
Since it officially began in 2005, the annual Music Information Retrieval Evaluation eXchange (MIREX) has made great contributions to Music Information Retrieval (MIR) research. By defining important tasks and providing a meaningful comparison system, the International Music Information Retrieval Systems Evaluation Laboratory (IMIRSEL), organizer of MIREX, drives researchers in the MIR field to develop more advanced systems to fulfill these tasks. One of the important tasks is the Audio Artist Identification (AAI) task. We implemented a Deep Belief Network (DBN) to identify the artist from the audio signal. For copyright reasons, IMIRSEL did not publish their data set, so we had to construct our own. On our data set we obtained an accuracy of 69.87% without carefully choosing parameters, while the best result reported on MIREX is 69.70%. We think our method is promising and we would like to discuss it with others.
[ "mirex", "important tasks", "imirsel", "audio artist identification", "deep neural network", "great contributions", "music information retrieval", "mir" ]
reject
https://openreview.net/pdf?id=zzy0H3ZbWiHsS
https://openreview.net/forum?id=zzy0H3ZbWiHsS
ICLR.cc/2013/conference
2013
{ "note_id": [ "Zg8fgYb5dAUiY", "obqUAuHWC9mWc", "k3fr32tl6qARo", "qbjSYWhow-bDl" ], "note_type": [ "review", "review", "review", "review" ], "note_created": [ 1362479820000, 1362137160000, 1362226800000, 1362725700000 ], "note_signatures": [ [ "anonymous reviewer 589d" ], [ "anonymous reviewer 8eb9" ], [ "anonymous reviewer b7e1" ], [ "胡振" ] ], "structured_content_str": [ "{\"title\": \"review of Audio Artist Identification by Deep Neural Network\", \"review\": \"A brief summary of the paper\\u2019s contributions. In the context of prior work:\\nThis paper builds a hybrid model based on Deep Belief Network (DBN) and Stacked Denoising Autoencoder (SDA) and applies it to Audio Artist Identification (AAI) task. Specifically, the proposed model is constructed with a two-layer SDA in the lower layers, a two-layer DBN in the middle, and a logistic regression classification layer on the top. The proposed model seems to achieve good classification performance.\", \"an_assessment_of_novelty_and_quality\": \"The paper proposes a hybrid deep network by stacking denoising autoencoders and RBMs.\\nAlthough this may be a new way of building a deep network, it seems to be a minor modification of the standard methods. Therefore, the novelty seems to be limited.\\n\\nMore importantly, motivation or justification about hybrid architecture is not clearly presented. Without a clear motivation or justification, this method doesn\\u2019t seem to be technically interesting. To make a fair comparison to other baseline methods, the SDA2-DBN2 should be compared to DBN4 or SDA4, but there are no such comparisons.\\n\\nAlthough the classification performance by the proposed method is good, the results are not directly comparable to other work in the literature. It will be helpful to apply some widely used methods in authors\\u2019 data set as additional control experiments;\\n\\nThe paper isn\\u2019t well polished. There are many awkward sentences and grammatical errors.\", \"other_comments\": \"Figure 2 is anecdotal and is not convincing enough.\\n\\nAuthors use some non-standard terminology. For example, what does \\u201cMAP paradigm\\u201d mean?\\n\\nIn Table 3, rows corresponding to \\u201c#DA layers\\u201d, \\u201c#RBM layers\\u201d, \\u201c#logistic layers\\u201d are unnecessary.\\n\\n\\n\\nA list of pros and cons (reasons to accept/reject)\", \"pros\": [\"Literature review seems fine.\", \"good (but incomplete) empirical classification results\"], \"cons\": [\"lack of clear motivation or justification of the hybrid method; lack of proper control experiments\", \"the results are not comparable to other published work\", \"unpolished writing (lots of awkward sentences and grammatical errors).\"]}", "{\"title\": \"review of Audio Artist Identification by Deep Neural Network\", \"review\": \"This paper present an application of an hybrid deep learning model to the task of audio artist identification.\", \"novelty\": [\"The novelty of the paper comes from using an hybrid unsupervised learning approach by stacking Denoising Auto-Encoders (DA) and Restricted Boltzman Machines (RBM).\", \"= Another minor novelty is the application of deep learning to artist identification. However, deep learning has already been applied to similar tasks before such as genre recognition and automatic tag annotation.\", \"Unfortunately, I found that the major contributions of the paper are not exposed clearly enough in the introduction.\"], \"quality_of_presentation\": [\"The quality of the presentation leaves to be desired. 
A more careful proofreading would have been required. There are several sentences with gramatical errors. Several verbs or adjectives are wrong. The writing style is also sometimes inadequate for a scientific paper (ex. 'we will review some fantastic work', 'we can build many outstanding networks'). The quality of the english is, in general, inadequate.\", \"The abstract does not present in a relevant and concise manner the essential points of the paper.\", \"Also, there is a bit of confusion in between the introduction and related work sections, as most of the introduction is also about related work.\"], \"reference_to_previous_work\": [\"Previous related work coverage is good. Previous work in deep learning and its applications in MIR, as well as work in audio artist identification are well covered.\", \"In the beginning of section 5: 'It's known that Bach, Beethoven and Brahms, known as the three Bs, shared some style when they wrote their composition.' I find this claim, without any reference, hard to understand. Bach, Beethoven and Brahms are from 3 different musical eras. How are these 3 composers more related than the others?\", \"Quality of the research.\", \"Although the idea of using a hybrid deep learning system might be new, no justification as to why such a system should work better is presented in the paper.\", \"In the experiments, the authors compare the hybrid model to pure models. However, the pure models all have less layers than the hybrid model. Why didn't the authors compare same-depth models? I feel it would have made a much stronger point.\", \"Although the authors describe in details the theory behind SDAs and DBNs, there is little to no detail about the hyper-parameters used in the actual model (number of hidden units, number of unsupervised epochs, regularization, etc.). How was the data corrupted for the DA? White Noise, or random flipped bits? How many steps in the CD? These details would be important to reproduce the results.\", \"In the beginning of section 3 and 6, the authors mention that they think their model will project the data into a semantic space which is very sparse. How is your model learning a sparse representation? Have you used sparseness constraints in your training? If so, there is no mention of it in the paper.\"]}", "{\"title\": \"review of Audio Artist Identification by Deep Neural Network\", \"review\": \"This paper describes work to collect a new dataset with music from 11 classical composers for the task of audio composer identification (although the title, abstract, and introduction use the phrase 'audio artist identification' which is a different task). It describes experiments training a few different deep neural networks to perform this classification task.\\n\\nThe paper is not very novel. It describes existing deep architectures applied to a new version of an existing dataset for an existing task.\\n\\nThe quality of the paper is not very high. The comparisons of the models were not systematic and because it is a new dataset, they cannot be compared directly to results on other datasets of existing models. There are very few specifics given about the models used (layer sizes, cost functions, input feature types, specific input features).\\n\\nThe use of mel frequency spectrum seems dubious for this task. What distinguishes classical works from different composers is generally harmonic and melodic content, which mel frequency spectrum ignores almost entirely.\\n\\nFew details are given about the make-up of the new dataset. 
Are these orchestral pieces, chamber pieces, concertos, piano pieces, etc? How many clips came from each piece? How many clips came from each movement? The use of clips from different movements of the same piece in the training and test sets might account for the increase in accuracy scores relative to previous MIREX results. Movements from the same piece generally share many characteristics like recording conditions, production, instrumentation, and timbre, which are the main characteristics captured by mel frequency spectrum. They also generally share harmonic and melodic content.\\n\\nAnd finally, the 'Three B's' that the authors refer to, Bach, Beethoven, and Brahms, are very different composers from different musical eras. Their works should not be easily confused with each other, and so the fact that the proposed algorithm does confuse them is concerning. Potentially it indicates the weakness of the mel spectrum for performing this task.\", \"pros\": [\"Literary presentation of the paper is high (although there are a number of strange word substitutions)\", \"Decent summary of existing work\", \"New dataset might be useful, if it is made public, although it is pretty small\"], \"cons\": [\"Little novelty\", \"Un-systematic comparisons of systems\", \"Features don't make much sense\", \"Few details on actual systems compared and on the dataset\", \"Few generalizable conclusions\"]}", "{\"review\": \"Thank you. We will revise our paper as soon as possible.\\n\\nZhen\"}" ] }
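The architecture debated in this record is a hybrid stack: two denoising-autoencoder layers at the bottom, two RBM layers above them, and a logistic-regression layer on top, fed with mel-frequency features and predicting one of 11 composers. The reviews note that layer sizes and training hyper-parameters are not reported, so the snippet below only fixes the stacking order and data flow; every size is a made-up placeholder and the weights are random, untrained values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up sizes: a 513-dim mel-frequency feature vector and 11 composer classes.
layer_sizes = [513, 256, 256, 128, 128]            # 2 DA layers, then 2 RBM layers
W = [rng.normal(0.0, 0.01, (m, n)) for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
W_out = rng.normal(0.0, 0.01, (layer_sizes[-1], 11))  # logistic-regression layer

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x):
    """Structure-only forward pass: weights are random and untrained."""
    h = x
    for w in W:                      # DA and RBM hidden layers share this form
        h = sigmoid(h @ w)
    logits = h @ W_out
    p = np.exp(logits - logits.max())
    return p / p.sum()               # posterior over the 11 composers

print(forward(rng.normal(size=513)).round(3))
```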
7IOAIAx1AiEYC
Adaptive learning rates and parallelization for stochastic, sparse, non-smooth gradients
[ "Tom Schaul", "Yann LeCun" ]
Recent work has established an empirically successful framework for adapting learning rates for stochastic gradient descent (SGD). This effectively removes any need for tuning, while automatically reducing learning rates over time on stationary problems and permitting them to grow appropriately in non-stationary tasks. Here, we extend the idea in three directions: addressing proper minibatch parallelization, including reweighted updates for sparse or orthogonal gradients, and improving robustness on non-smooth loss functions, in the process replacing the diagonal Hessian estimation procedure, which may not always be available, with a robust finite-difference approximation. The final algorithm integrates all these components, has linear complexity, and is hyper-parameter free.
[ "rates", "sparse", "parallelization", "stochastic", "adaptive learning rates", "gradients adaptive", "gradients recent work", "successful framework", "stochastic gradient descent", "sgd" ]
conferencePoster-iclr2013-conference
https://openreview.net/pdf?id=7IOAIAx1AiEYC
https://openreview.net/forum?id=7IOAIAx1AiEYC
ICLR.cc/2013/conference
2013
{ "note_id": [ "UUYiUZMOiCjl1", "hhgfZq1Yf5hzr", "_VZcVNP2cvtGj", "_5dVjqxuVf560" ], "note_type": [ "review", "review", "review", "review" ], "note_created": [ 1362388500000, 1362001560000, 1362529800000, 1361565480000 ], "note_signatures": [ [ "anonymous reviewer 7b8e" ], [ "anonymous reviewer 7318" ], [ "Tom Schaul, Yann LeCun" ], [ "anonymous reviewer 0321" ] ], "structured_content_str": [ "{\"title\": \"review of Adaptive learning rates and parallelization for stochastic, sparse,\\n non-smooth gradients\", \"review\": \"This is a paper that builds up on the adaptive learning rate scheme proposed in [1], for choosing learning rate when optimizing a neural network.\\n\\nThe first result (eq. 3) is that of figuring out an optimal learning rate schedule for a given mini-batch size n (a very realistic scenario, when one cannot adapt the size of the mini-batch during training because of computational and architectural constraints).\\n\\nThe second interesting result is that of setting the learning rates in those cases where one has sparse gradients (rectified linear units etc) -- this results in an effective rescaling of the rates by the number of non-zero elements in a given minibatch.\\n\\nThe third nice result is the observation that in a sparse situation the gradient update directions are mostly orthogonal. Taking this intuition to the logical conclusion, the authors thus induce a re-weighing scheme that essentially encourages the gradient updates to be orthogonal to each other (by weighing them proportionally to 1/number of times they interfere with each other). While the authors claim that this can be computationally expensive generally speaking, for problems of realistic sizes (d is in the tens of millions and n is a few dozen examples), this can be quite interesting.\\n\\nThe final interesting result is that of adapting the curvature estimation to the fact that with the advent of rectified linear units we are often faced with optimizing non-smooth loss functions. The authors propose a method that is based on finite differences (with some robustness improvements) and is vaguely similar to what is done in SGD-QN.\\n\\nGenerally this is a very well-written paper that proposes a few sensible and relatively easy to implement ideas for adaptive learning rate schemes. I expect researchers in the field to find these ideas valuable. One disappointing aspect of the paper is the lack of real-world results on things other than simulated (and known) loss functions.\"}", "{\"title\": \"review of Adaptive learning rates and parallelization for stochastic, sparse,\\n non-smooth gradients\", \"review\": \"summary:\\nThe paper proposes a new variant of stochastic gradient descent that is fully automated (no\\nhyper-parameter to tune) and is robust to various scenarios, including mini-batches,\\nsparsity, and non-smooth gradients. It relies on an adaptive learning rate that takes\\ninto account a moving average of the Hessian. The result is a single algorithm that takes about 4x\\nmemory (with respect to the size of the model) and is easy to implement.\\nThe algorithm is tested on purely artificial tasks, as a proof of concept.\\n\\nreview.\\n- The paper relies on some previous algorithm (bbprop) that is not provided here and only\\nexplained briefly on page 5, while first used on page 2. 
It would have been nice to provide\\nmore information about it earlier.\\n\\n- The 'parallelization trick' using mini-batches is good for a single-machine approach, where\\none can use multiple cores, but is thus limited by the number of cores. Also, how would\\nthis 'interfere' with Hogwild type of updates, which also uses efficiently multi-core approaches\\nfor SGD?\\n\\n- Obviously, results on real large datasets would have been welcomed (I do think experiments\\non artificial datasets are very useful as well, but they may hide the fact that we have not\\nfully understood the complexity of real datasets).\"}", "{\"review\": \"We thank the reviewers for their constructive comments. We'll try to clarify a few points they bring up:\", \"parallelization\": \"The batchsize-aware adaptive learning rates (equation 3) are applicable independently of how the minibatches are computed, whether on a multi-core machine, or across multiple machines. They are in fact complementary to the asynchronous updates of Hogwild, in that they remove its need for tuning learning rate ('gamma') and learning rate decay ('beta').\", \"bbprop\": \"The original version of vSGD (presented in [1]) does indeed require the 'bbprop' algorithm as one of its components to estimate element-wise curvature. One of the main points of this paper, however, is to replace it by a less brittle approach, based on finite-differences (section 5).\", \"large_scale_experiments\": \"We conduced a broad range of such experiments in the precursor paper [1] which demonstrated that the performance of the adaptive learning rates does correspond to the best-tuned SGD. Under the assumption that curvature does not change too fast, the original vSGD (using bbprop) and the one presented here (using finite differences) are equivalent, so those results are still valid -- but for more difficult (non-smooth) learning problems the new variant should be much more robust.\\n\\nWe'd also like to point out that an open-source implementation is now available at\", \"http\": \"//github.com/schaul/py-optim/blob/master/PyOptim/algorithms/vsgd.py\"}", "{\"title\": \"review of Adaptive learning rates and parallelization for stochastic, sparse,\\n non-smooth gradients\", \"review\": \"This is a followup paper for reference [1] which describes a parameter free adaptive method to set learning rates for SGD. This submission cannot be read without first reading [1]. It expands the work in several directions: the impact of minibatches, the impact of sparsity and gradient orthonormality, and the use of finite difference techniques to approximate curvature. The proposed methods are justified with simple theoretical considerations under simplifying assumptions and with serious empirical studies. I believe that these results are useful.\\n\\nOn the other hand, an opportunity has been lost to write a more substantial self-contained paper. As it stands, the submission reads like three incremental contributions stappled together.\"}" ] }
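The record above discusses a per-parameter adaptive learning rate whose minibatch-aware form (its "equation 3") is not reproduced here. Following the precursor method the authors cite as [1], one plausible reconstruction keeps moving averages of the gradient and its square plus an element-wise curvature estimate, and enlarges the step when averaging over n samples shrinks the gradient variance. The sketch below is exactly that kind of guess: the curvature input `h`, the time-constant update, and all constants are assumptions, and it is not the open-source vSGD implementation linked in the authors' response.

```python
import numpy as np

def vsgd_like_update(theta, grad, h, state, n=1, eps=1e-12):
    """One adaptive step in the spirit of the method above (assumed form, not eq. 3).

    theta : parameter vector
    grad  : minibatch gradient (average over n samples)
    h     : element-wise positive curvature estimate (e.g. finite differences)
    state : moving averages g_avg, g2_avg and time constants tau; a few extra
            copies of the parameters, in line with the "about 4x memory" remark
    """
    g_avg, g2_avg, tau = state["g_avg"], state["g2_avg"], state["tau"]

    # Update element-wise moving averages of the gradient and its square.
    g_avg += (grad - g_avg) / tau
    g2_avg += (grad * grad - g2_avg) / tau

    # A size-n minibatch reduces gradient variance roughly n-fold, allowing a
    # larger step: this is the assumed minibatch-aware learning rate.
    var = np.maximum(g2_avg - g_avg ** 2, 0.0)
    eta = g_avg ** 2 / (h * (g_avg ** 2 + var / n) + eps)

    # Adapt the memory size: keep it short while estimates are still noisy.
    state["tau"] = np.maximum(tau * (1.0 - g_avg ** 2 / (g2_avg + eps)) + 1.0, 1.0)

    return theta - eta * grad

# Toy usage on the quadratic loss 0.5 * ||theta||^2 (gradient = theta, curvature = 1).
theta = np.array([1.0, -2.0, 3.0])
state = {"g_avg": np.zeros(3), "g2_avg": np.ones(3), "tau": np.full(3, 2.0)}
for _ in range(20):
    theta = vsgd_like_update(theta, grad=theta.copy(), h=np.ones(3), state=state, n=4)
print(theta)
```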
6elK6-b28q62g
Behavior Pattern Recognition using A New Representation Model
[ "Eric qiao", "Peter A. Beling" ]
We study the use of inverse reinforcement learning (IRL) as a tool for recognizing agents' behavior from observations of their sequential decision making while interacting with the environment. We model the problem faced by the agents as a Markov decision process (MDP) and model the observed behavior of the agents in terms of forward planning for the MDP. We use IRL to learn reward functions and then use these reward functions as the basis for clustering or classification models. Experimental studies with GridWorld, a navigation problem, and the secretary problem, an optimal stopping problem, suggest that reward vectors found by IRL can be a good basis for behavior pattern recognition problems. Empirical comparisons of our method with several existing IRL algorithms, and with direct methods that use feature statistics observed in state-action space, suggest it may be superior for behavior recognition problems.
[ "irl", "agents", "basis", "mdp", "reward functions", "behavior pattern recognition", "new representation model", "use", "inverse reinforcement learning" ]
reject
https://openreview.net/pdf?id=6elK6-b28q62g
https://openreview.net/forum?id=6elK6-b28q62g
ICLR.cc/2013/conference
2013
{ "note_id": [ "zkxNBUsiN6B38", "KK9P-lgBP7-mW", "N6tX5S-nXZNbo", "PPs3ZO_pnzZTb", "kA2a1ywTaHAT3" ], "note_type": [ "review", "review", "review", "review", "review" ], "note_created": [ 1363763280000, 1362703740000, 1363762920000, 1362473880000, 1362418320000 ], "note_signatures": [ [ "Eric qiao" ], [ "anonymous reviewer 8f06" ], [ "Eric qiao" ], [ "anonymous reviewer 08b2" ], [ "anonymous reviewer 698b" ] ], "structured_content_str": [ "{\"review\": \"Based on the reviews, a revised version will be updated on arXiv tonight. Thanks.\"}", "{\"title\": \"review of Behavior Pattern Recognition using A New Representation Model\", \"review\": \"Summary:\\nThe paper presents an approach to activity recognition based on inverse reinforcement learning. It proposes an IRL algorithm based on Gaussian Processes. Evaluation is presented for classification and clustering of behavior in a grid-world problem and the secretaries problem.\", \"comments\": \"The problem called here 'behavior pattern recognition' is very actively studied currently under the name 'activity recognition', using both unsupervised and supervised methods, some quite sophisticated. See:\", \"http\": \"//en.wikipedia.org/wiki/Activity_recognition\\nand references therein. You should clarify why you need a new term here, if somehow the problem you propose here is different. Based on its definition, it does not seem to be any different. \\nMoreover, this problem has also been studied in reinforcement learning in the context of learning by demonstration. See the recent work of George Konidaris, eg:\\nG.D. Konidaris, S.R. Kuindersma, R.A. Grupen and A.G. Barto. Robot Learning from Demonstration by Constructing Skill Trees. The International Journal of Robotics Research 31(3), pages 360-375, March 2012. \\nAndrew Thomaz, eg:\\nL. C. Cobo, C.L. Isbell, and A.L. Thomaz. 'Automatic task decomposition and state abstraction from demonstration.' In Proceedings of the International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS), 2012. \\nThese papers use a significantly more advanced setup, in which trajectories also need to be segmented into activities (which is more realistic). Classification or unsupervised learning could used on top of their output as well. \\nThe paper should discuss the proposed approach in a broader context.\\n\\nThe experiments presented in the paper are quite simplistic and need to be improved. Specifically, in the grid-world case, the data is generated exactly according to the paradigm for which the algorithm was designed. Also, states are fully observable. What happens if you have more classes? Partial observability, so the environment is not really an MDP in the observations? Also, the FT and FE competitor methods are quite simplistic, one would expect that better way of encoding the trajectory (e.g using PCA or other forms of dimensionality reduction) would work better. \\n\\nNote that comparing against the other IRL methods for this task is tricky, because they are designed to recover a reward function that can be then used to train an RL agent, not a reward function which can be used to recognize future behavior. These are different goals. Since many reward functions can generate the same behavior, but some will make different behaviors easier to recognize than others, the paper should emphasize which of the algorithmic choices here are specifically designed to help the recognition.\\n\\nFor the secretaries problem, classification results should also be included. 
The description of the problem is very brief and makes it hard to tell how difficult the problem is (Fig 3a seems to suggest it's not that hard). \\n\\nIncluding a more realistic domain, where activities change during a trajectory, would make the paper a lot more convincing.\\n\\nFrom a writing point of view, there are many small grammar mistakes, especially using 'the' and 'a' and the paper requires a careful proofreading. Also, the experimental description should specify all detail necessary, eg value of hyper-parameters for the GP and describe how these have been/can be chosen. Running times would also be useful to include, as well as error bars on the graphs.\", \"pros\": [\"IRL would be useful to use in this setting, and the proposed approach makes sense\"], \"cons\": [\"Related references are omitted or not discussed\", \"Novelty of the proposed approach is low\", \"The experiments are very limited and simplistic\"]}", "{\"review\": \"To Reviewer 698b.\\n---------------------\", \"response\": \"The algorithm on page 5 mentioned how to optimize the hyper-parameters for the GP.\", \"minor\": \"The conclusions are not well grounded in the current work -- what data make the authors think that this method would be even more superior in real data?\"}", "{\"title\": \"review of Behavior Pattern Recognition using A New Representation Model\", \"review\": \"This paper proposes a behavior pattern recognition framework that re-represents the problem of classifying behavior trajectories as a problem of classifying reward functions instead. Since the reward function of the agent that is classified is not known, it is inferred using inverse reinforcement learning (IRL). Comparison of the proposed method to standard trajectory classification methods shows that the former performs much better that the latter on a grid task and to a lesser extent (but still better) on an optimal stopping problem (the secretary problem).\\n\\nThe novelty here is not in the classification or IRL algorithms, but rather in the idea that it is better to classify reward functions than observed state-action sequences. The real test case for this proposal, I think, is a case in which agents differ in their behavioral strategy, but not in the (real) reward function which they were working for. It might be that in that case the proposed method would still excel as the inferred reward function would be different for those with different strategies -- and this would be a very nice demonstration. For instance, if I prefer to go to the goal using the scenic route, and someone else takes the route with less traffic (but a lower speed limit), we might both reach the destination at the same time, thus maximizing some external reward function correctly, but IRL might infer that I assign reward to scenery and the other person to not having to compete with other cars on the road. \\n\\nUnfortunately, such a scenario was not tested in the paper. The gridworld task involved two classes of agents that differed only in their (true) reward function, not their strategies. In that case it seems obvious that classifying based on reward functions would be a good idea (it was still nice to see that the proposed method does very well even with very short trajectories --- I am not saying there was no merit to the experiments shown, just that this was not the strongest test case for the proposed framework). \\n\\nThe secretary problem was first tested on three different strategies that achieve the same goal. This is exactly the interesting scenario. 
Disappointingly, though, these results were not described in detail or shown (last paragraph on page 8 -- I don't see any details about the results of this experiment). Instead, the authors show results for a different experiment in which all agents had the same strategy but differed in the cutoff rule (which is akin to a reward function), as well as an experiment comparing a heuristic strategy to a random one. In both cases these are not the interesting test cases. (As an aside, I also found Figure 3 which describes these results unclear: how was reward defined for these simulations? what are the axes in the different subplots?)\", \"minor\": \"The conclusions are not well grounded in the current work -- what data make the authors think that this method would be even more superior in real data?\"}", "{\"title\": \"review of Behavior Pattern Recognition using A New Representation Model\", \"review\": \"I am not a huge expert in reinforcement learning but nonetheless I have to say this paper is quite confusing to me. I had a hard time understanding the point. Moreover, I think the topic of this paper has nothing to do whatsoever with the interests of this conference, namely representation learning, so I suggest the authors resubmit this work elsewhere.\", \"cons\": [\"not clearly written\", \"not relevant to this conference\"]}" ] }
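The pipeline discussed in this record is simple to state: run IRL on each observed trajectory to recover a reward vector, then cluster or classify agents in reward space rather than on raw state-action statistics. The paper's IRL solver is not reproduced below; `estimate_reward` is a deliberately naive placeholder (visitation counts) kept only so the surrounding pipeline is runnable, and the trajectories are synthetic.

```python
import numpy as np
from sklearn.cluster import KMeans

N_STATES = 25  # e.g. a 5x5 GridWorld

def estimate_reward(trajectory, n_states=N_STATES):
    """Placeholder for the paper's IRL solver: it should return a per-state
    reward vector explaining the trajectory.  Here we fall back to normalized
    state-visitation counts purely so the pipeline sketch runs end to end."""
    r = np.zeros(n_states)
    for state, _action in trajectory:
        r[state] += 1.0
    return r / max(r.sum(), 1.0)

# Synthetic trajectories: lists of (state, action) pairs from two behavior types.
rng = np.random.default_rng(0)
group_a = [[(int(s), 0) for s in rng.integers(0, 12, size=30)] for _ in range(20)]
group_b = [[(int(s), 1) for s in rng.integers(13, 25, size=30)] for _ in range(20)]
trajectories = group_a + group_b

# Behavior pattern recognition = clustering (or classifying) in reward space.
reward_vectors = np.stack([estimate_reward(t) for t in trajectories])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(reward_vectors)
print(labels)
```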
V_-8VUqv8h_H3
The Manifold of Human Emotions
[ "Seungyeon Kim", "Fuxin Li", "Guy Lebanon", "Irfan Essa" ]
Sentiment analysis predicts the presence of positive or negative emotions in a text document. In this paper, we consider higher dimensional extensions of the sentiment concept, which represent a richer set of human emotions. Our approach goes beyond previous work in that our model contains a continuous manifold rather than a finite set of human emotions. We investigate the resulting model, compare it to psychological observations, and explore its predictive capabilities.
[ "human emotions", "manifold", "model", "presence", "positive", "negative emotions", "text document", "higher dimensional extensions", "sentiment concept" ]
conferencePoster-iclr2013-workshop
https://openreview.net/pdf?id=V_-8VUqv8h_H3
https://openreview.net/forum?id=V_-8VUqv8h_H3
ICLR.cc/2013/conference
2013
{ "note_id": [ "DsMNDQOdK3o4y", "C4MuPqjpEwP7S", "zzCNIJyUdvSfw", "ADj5N2hoX0_ox" ], "note_type": [ "comment", "review", "comment", "review" ], "note_created": [ 1362951000000, 1362239340000, 1362951060000, 1362105540000 ], "note_signatures": [ [ "Seungyeon Kim, Fuxin Li, Guy Lebanon, Irfan Essa" ], [ "anonymous reviewer e0d0" ], [ "Seungyeon Kim, Fuxin Li, Guy Lebanon, Irfan Essa" ], [ "anonymous reviewer 9992" ] ], "structured_content_str": [ "{\"reply\": \"1. P(Y|Z) can be computed using Bayes rule on P(Z|Y). We had to remove lots of details due to the space limits. Detailed implementation is on our full paper on ArXiv (http://arxiv.org/abs/1202.1568).\\n\\n2. A lot of references and comparisons are omitted because of the space limits, but we will try to include suggested references and discussions.\"}", "{\"title\": \"review of The Manifold of Human Emotions\", \"review\": \"This paper proposes a new method for sentiment analysis of text\", \"documents_based_on_two_phases\": \"first, learning a continuous vector\\nrepresentation of the document (a projection on the mood manifold) and\\nsecond, learning to map from this representation to the sentiment\\nclasses. The assumption behind this model is that such an intermediate\\nsmooth representation might help the classification, especially in the case where the number of sentiment classes is rather large (32) as it is studied here.\\n\\nThe idea of modeling the relationship between emotions labels (Y) and\\ndocuments (X, encoded using bag-of-words) via an intermediate\\nrepresentation (Z) is appealing and seems to be a good direction to pursue.\\n\\nThe main idea of the present model is to build a kind of two-layers\\nnetwork(X->Z->Y), where each layers has its own architecture and learning\\nprocess and is trained in a (weakly) supervised way. Unfortunately, it is not\\nexactly clear how this training works. On one hand, the layer X->Z is trained via maximum likelihood, setting the supervision on Z via least-square regression for Z->Y (X and Y are known). But on the other hand, it is written that the layer Z->Y is obtained via MDS or kernel PCA. This is a bit puzzling.\\n\\nI also think that the dimension of the manifold (l) should be given.\\n\\nThere is a lack of references (maybe due to the page limit) Still,\\n(Glorot et al., ICML11), (Chen et al., ICML12) or (Socher et al.,\\nEMNLP11) should be discussed since all these papers presents neural\\nnetwork archiecture for sentiment analysis and basically learn an\\nintermediate representation of documents.\", \"pros\": [\"interesting setting with weak supervision\", \"new data\"], \"cons\": [\"many unclear points\", \"lack references\"]}", "{\"reply\": \"1. It is more related to latent variable models than neural network as it doesn\\u2019t have any activation function between layers. Moreover, neural network is learned by back-propagation algorithms, but our model is learnt using maximum likelihood with marginalizing latent variable Z. Linear regression part is result of Dirac\\u2019s delta approximation. Detailed implementation is on our full paper on ArXiv (http://arxiv.org/abs/1202.1568).\\n\\n2. Dimension of the manifold for experiments were 31 due to the fact of using MDS on centroids of 32 classes.\\n\\n3. 
A lot of references are omitted because of the space limits (3 pages!), but we will try to include a few key references.\"}", "{\"title\": \"review of The Manifold of Human Emotions\", \"review\": \"This paper introduces a model for sentiment analysis aimed at capturing blended, non-binary notions of sentiment. The paper uses a novel dataset of >1 million blog posts (livejournal) using 32 emoticons as labels. The model uses a Gaussian latent variable to embed bag of words documents into a vector space shaped by the emoticon labels. Experiments on the blog dataset demonstrate the latent vector representations of documents are better than bag of words for multiclass sentiment classification.\\n\\nI'm a bit confused as to how the inference procedure works. The conditional distribution P(Z|X) is Gaussian as is P(Z|Y), but the graphical structure suggests P(Y|Z) needs to be given and a Gaussian doesn't make sense here as Y is a binary vector. More generally, given the description in the paper I don't understand how to implement the proposed model. A more detailed description of the model itself and the inference procedure could help here.\\n\\nThere is a lot of recent work on representation learning for sentiment. No discussion of related models or comparisons to other work are given. In particular, recursive neural networks (e.g. Socher et al EMNLP 2011) have been used to learn document vector representations for multi-dimensional sentiment. Additionally, Maas et al (ACL 2011) introduce a benchmark dataset and similar latent-variable graphical model for sentiment representations. Overall, I think substantially more discussion of previous work is necessary. The two citations given aren't reflective of much of the recent work on learning text representations or sentiment.\", \"to_summarize\": [\"The proposed dataset sounds interesting and could advance representation learning for multi-dimensional sentiment analysis\", \"The proposed model is very unclear. With the current explanation I am unable to verify its correctness\", \"Practically no discussion of the large amount of previous work on learning representations for text and sentiment\"]}" ] }
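According to the authors' clarifications in this record, the model maps bag-of-words documents X to a continuous mood representation Z, with the latent targets obtained by applying MDS to the centroids of the 32 emoticon classes (hence the 31-dimensional manifold), and emotions Y are then read off from Z. The sketch below substitutes plain least squares for the paper's maximum-likelihood training and nearest-centroid classification for its probabilistic Z-to-Y step, and runs on random toy data, so it illustrates the wiring only.

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
n_docs, vocab_size, n_classes = 300, 50, 8   # small stand-ins for ~1M posts, 32 classes

# Random toy data: bag-of-words counts X and emoticon labels y.
X = rng.poisson(1.0, size=(n_docs, vocab_size)).astype(float)
y = rng.integers(0, n_classes, size=n_docs)

# 1) Embed class centroids with MDS to get latent targets Z (dim = classes - 1).
centroids = np.stack([X[y == c].mean(axis=0) for c in range(n_classes)])
class_embedding = MDS(n_components=n_classes - 1, dissimilarity="euclidean",
                      random_state=0).fit_transform(centroids)
Z_targets = class_embedding[y]

# 2) X -> Z: least-squares regression onto the mood manifold.
W, *_ = np.linalg.lstsq(X, Z_targets, rcond=None)

# 3) Z -> Y: nearest class embedding, standing in for the probabilistic step.
def predict(x_new):
    z = x_new @ W
    return int(np.argmin(((class_embedding - z) ** 2).sum(axis=1)))

# Random data, so agreement between the two printed labels is not meaningful.
print(predict(X[0]), int(y[0]))
```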
KHMdKiX2lbguE
Boltzmann Machines and Denoising Autoencoders for Image Denoising
[ "KyungHyun Cho" ]
Image denoising based on a probabilistic model of local image patches has been employed by various researchers, and recently deep (denoising) autoencoders have been proposed by Burger et al. [2012] and Xie et al. [2012] as a good model for this task. In this paper, we propose that another popular family of models in the field of deep learning, called Boltzmann machines, can perform image denoising as well as, or in certain cases of high noise levels better than, denoising autoencoders. We empirically evaluate the two models on three different sets of images with different types and levels of noise. Throughout the experiments we also examine the effect of the depth of the models. The experiments confirmed our claim and revealed that performance can be improved by adding more hidden layers, especially when the level of noise is high.
[ "boltzmann machines", "autoencoders", "image", "models", "noise", "experiments", "probabilistic model", "local image patches", "various researchers", "deep" ]
conferencePoster-iclr2013-workshop
https://openreview.net/pdf?id=KHMdKiX2lbguE
https://openreview.net/forum?id=KHMdKiX2lbguE
ICLR.cc/2013/conference
2013
{ "note_id": [ "PLgu8d4J3rRz9", "VC6Ay131A-y1w", "ppSEYjkaMGYj5", "CIGoQSPKoZIKs", "tO_8tX3y-7SXz" ], "note_type": [ "review", "review", "review", "review", "review" ], "note_created": [ 1362189720000, 1362494700000, 1362411780000, 1362486600000, 1362361020000 ], "note_signatures": [ [ "anonymous reviewer bf00" ], [ "Kyunghyun Cho" ], [ "Kyunghyun Cho" ], [ "anonymous reviewer d5d4" ], [ "anonymous reviewer 9120" ] ], "structured_content_str": [ "{\"title\": \"review of Boltzmann Machines and Denoising Autoencoders for Image Denoising\", \"review\": \"This paper is an empirical comparison of the different models (Boltzmann Machines and Denoising Autoencoders) on the task of image denoising. Based on the experiments the authors claimed the increasing model depth improves the denoising performances when the level of noise is high.\\n\\nPROS\\n+ Exploring DBMs for images denosing is indeed interesting and important. \\nCONS\\n- There is little novelty in this paper. \\n- The experiments could be not easily reproduced since some important details of the experimental setting are not provided (see below).\\n- The proposed models were not compared with any state-of-the-art denoising method.\\n\\nDetailed comments\", \"page_4\": \"Equation 5 is not a standard routine. You should firstly make an assumption about noise, for example, $\\tilde{v}=v+n,nsim mathcal{N}(mu,,sigma^2)$. Then $P(\\tilde{v}|v)=\\nrac{P(\\tilde{v}|v)P(v)}{P(\\tilde{v})}propto P(\\tilde{v}|v)P(v)$.\", \"page_6\": \"The authors should describe how to construct training set in detail.\"}", "{\"review\": \"Dear reviewer (d5d4),\\n\\nThank you for your thorough review and comments.\\n \\n - 'the paper fails to compare against robust Boltzmann machines (Tang et al.,\\n CVPR 2012)'\\n \\n Thanks for pointing it out, and I agree that the RoBM be tried as well. It \\n will be possible to use the already trained GRBMs to initialize an RoBM to \\n see how much improvement the RoBM can bring. \\n \\n - 'More thorough analysis and better training might ... make the conclusion\\n more convincing.'\\n \\n One of the main claims in this paper was to show that a family of Boltzmann\\n machines is a potential alternative to denoising autoencoders which have\\n recently been proposed and shown to excel in image denoising. Also, another\\n was that it is possible to perform *blind* image denoising where no prior \\n information on noise types and levels was available at the training time. For\\n this, I have only conducted the limited set of experiments that barely\\n confirms these claims.\\n \\n I fully agree that follow-up research/experiments will reveal more insights \\n into the effect of model structures, training procedures and the choice of\\n training sets on the performance of image denoising. \\n \\n - 'How did you tune the hyperparameters?'\\n \\n This was one question to which I was not able to find a clear answer. Since\\n the task I considered was completely *blind*, meaning not even types of test\\n images were not known, I had to resort to using the reconstruction error on\\n the validation image patches, which, I believe, is not a good indicator of\\n the generalization performance in this case. \\n \\n I agree that more investigation is definitely required in this matter of \\n validation in image denoising. \\n \\n - 'whether the authors faithfully implemented Xie et al.\\u2019s method' \\n \\n The training procedure used in this paper is slightly different from the one \\n used by Xie et al. 
The procedure is also different from how Burger et al. \\n trained denoising autoencoders. Comparison to their trained models (for\\n instance, Burger et al. made their learned models parameters available onlin) \\n will be one of the potential next steps in this research.\"}", "{\"review\": \"Dear reviewers (bf00) and (9120),\\n\\nFirst of all, thank you for your thorough reviews. \\n\\nPlease, find my response to your comments below. A revision\\nof the paper that includes the fixes made accordingly will\\nbe available at the arXiv.org tomorrow (Tue, 5 Mar 2013\", \"01\": \"00:00 GMT).\\n\\nTo both reviewers (bf00) and (9120):\\n\\nThank you for pointing out the mistakes in some of the\\nequations. As both of you noticed, there was a problem in\\nEq. (4). There should be as many binary matrices $D_n$ as\\nthere are image patches from each test image. This mistake\\nhappened as I was trying to put the procedure into a more\\ncompact mathematical equation. There was no mistake in the\\nimplementation. I have fixed the equation and its\\naccompanying text description accordingly.\\n\\nAlso, in Eq. (5), the term inside the last expectation\\nshoudl be p(v | h) instead of p(\\tilde{v} | h). Thank you\\nagain for pointing that out. \\n\\n\\nTo reviewer (bf00):\\n\\n - 'The experiments could be not easily reproduced'\\n I have added the detailed configurations used for\\n training each model as an appendix. \\n\\n - 'models were not compared with any state-of-the-art\\n denoising method'\\n The aim of the paper was to propose an alternative deep\\n neural network model that might be used in place of\\n denoising autoencoders which were recently proposed to\\n excel in the task of image denoising. However, I agree\\n that the comparison with other approaches would make the\\n paper more interesting.\\n\\n - 'should describe how to construct training set in detail'\\n I have added how the training set was constructed.\\n\\n - 'high-resolution natural image data sets could be more\\n proper'\\n I fully agreed with you and thank you for the suggestion.\\n I have run the same set of experiment using the training\\n set constructed from the Berkeley Segmentation Benchmark\\n (BSD-500). The results closely resemble those presented\\n already in the paper, and the overall trend did not\\n change. I have appended the new figure (same format as\\n Fig. 2) obtained using the new training set in the\\n appendix.\\n\\n\\nTo reviewer (9120):\\n\\n - 'Proper layer-sizes cross validation should be performed'\\n I fully agree with you. One most important thing that\\n be checked, in my opinion, is the performance of\\n single-layer models having the same number of hidden\\n units as multi-layer models (e.g., GRBM with 640 and 1280\\n hidden units trained on 8x8 patches). I will run the\\n experiment, and if time permits, will add the results in\\n the paper.\\n\\n\\n - 'In eq of hat{v}_i'\\n Thank you for pointing it out. I have mistakenly put\\n p(h|\\tilde{v}), implicitly assuming a case of RBM with\\n binary hidden units, where p(h|\\tilde{v}) coincides with\\n E[h|\\tilde{v}]. 
However, for a general BM, you are\\n correct and I have fixed it accordingly.\"}", "{\"title\": \"review of Boltzmann Machines and Denoising Autoencoders for Image Denoising\", \"review\": \"A brief summary of the paper's contributions, in the context of prior work.\\nThe paper proposed to use Gaussian deep Boltzmann machines (GDBM) for image denoising tasks, and it empirically compared the denoising performance to another state-of-the-art method based on stacked denoising autoencoders (Xie et al.). From empirical evaluations, the author confirms that deep learning models (DBM or DAE) achieve good performance in image denoising. Although DAE performs better than GDBM in many cases, GDBM can be still useful for image denoising since it doesn\\u2019t require prior knowledge on the types or levels of noise.\\n\\n\\nAn assessment of novelty and quality.\\nThe main contribution of the paper is the use of Gaussian DBM for denoising. It also provides comparison against existing models (stacked denoising autoencoders). Although, technical novelty is limited, it is still interesting that GRBM without the knowledge of specific noise (in target tasks) can perform well for image denoising. \\n\\nOne major problem is that the paper fails to compare against a closely related work on robust Boltzmann machines (Tang et al., CVPR 2012), which is specifically designed for denoising tasks. \\n\\nConclusions drawn from empirical evaluation seem fairly reasonable, but not very surprising. Also, the results look somewhat random. More thorough analysis and better training might clean up the results and make the conclusion more convincing.\", \"other_comments\": \"How did you tune the hyperparameters (l2 regularization, learning rate, number of hidden nodes, etc.) of the model? The trained model is sensitive to these hyperparameters, so it should have been tuned to some validation task.\\n\\n\\nA list of pros and cons (reasons to accept/reject).\", \"pros\": [\"Empirical evaluation of two deep models on image denoising tasks seems to confirm the usefulness of deep learning methods for image denoising.\", \"It\\u2019s very interesting that models trained from natural images (CIFAR-10) work well for unrelated images.\"], \"cons\": [\"The main contribution of the paper is the use of GRBM/DBM for denoising. However, it\\u2019s not clear whether GRBM/DBMs are better than DAE(4).\", \"There is no comparison against robust Boltzmann machines (Tang et al., CVPR 2012).\", \"It would have been nice to make the results comparable to other published work (e.g., Xie et al.). The results in the paper raise questions about whether the authors faithfully implemented Xie et al.\\u2019s method.\"]}", "{\"title\": \"review of Boltzmann Machines and Denoising Autoencoders for Image Denoising\", \"review\": \"The paper conducts an empirical performance comparison, on the task of image denoising, where the denoising of large images is based on combining densoing of small patches. In this context, the study compares usign, as small patch denoisers, deep denoising autoencoders (DAE) versus deep Boltzmann machines with a Gaussian visible layer (GDBM, which correspond to GRBM for a single hidden layer). Compared to recent work on deep DAE for image denoising shown to be competitive with state-of-the-art methods (Burger et al. CVPR'2012; Xie et al. NIPS'2012) this work rather considers *blind* denoising tasks (test noise kind and level not the same as that used during training). 
For the DBM part, the work builds on the author's authors' GDBM (presented at NIPS 2011 workshop on deep learning), and performs denoising as the expectation of visibles given inferred expected first layer hidden obtained through varitional approximation.\\n\\nThe paper essentially draws the following observations \\na) GRBM / GDBM can be equally successful at image denoising as deep DAEs, \\nb) increased depth seems to help denoising, particularly at higher noise levels. \\nc) interestingly a GRBM (single layer) appears often competitive compared to a GDBN with more layers (while deeper DAEs more systematically improve over single layer DAE).\", \"pros\": [\"I find it is a worthy empirical comparison study to make.\", \"it reasonably supports observation a), which is not too surprising (also there's no clear winner).\", \"the observation I find most interesting, and worthy of further *digging* is c) as it could be, as suggested by the authors, a concrete effect of the limitations of the variational approximation in the GDBN.\"], \"cons\": [\"empirical performance comparison of similar models, but does not yield much insight regarding wherefrom differences may arise (no other sensitivity analysis except final denoising perofrmance)\", \"while I would a priori be inclined to believe in b), I find the methodology lacking here. It seems a single fixed hiddden layer size has been considered, the same for all layers, so that deeper networks had necessarily more parameters. Proper layer-sizes cross validation should be performed before we can hope to draw a scientific conclusion with respect to the benefit of depth.\", \"mathematical notation is often a little sloppy or buggy:\"], \"eq_4\": \"if D is n x d as claimed Dx will be n x 1, so it cannot correspond to n 'patches' as claimed (unless your patches are but 1 pixel).\", \"eq_5\": \"I belive last p(\\tilde{v}|h) should be p(v|h)\", \"next_eq\": \"p(v | h=mu) is an abuse since h are binary.\\nIn eq of hat{v}_i : p(h|\\tilde{v}) is problematic, since there's no bound value for h. Shouldn't it rather be E(h|\\tilde{v}) ?\"}" ] }
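The denoising procedure at issue in this record reconstructs a noisy patch by first inferring the expected hidden activations given the noisy visibles and then taking the expectation of the visibles given those hidden means, with full images handled by averaging overlapping patch reconstructions. Assuming the standard Gaussian-Bernoulli RBM energy with unit visible variance (the trained models' actual settings are only listed in the paper's appendix, not here), a single-layer version of that procedure can be sketched as:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def grbm_denoise_patch(v_noisy, W, b_hid, c_vis):
    """Mean-field denoising with a Gaussian-Bernoulli RBM (unit visible variance):
    h = sigmoid(W v + b), then the reconstruction is E[v | h] = W^T h + c."""
    h_mean = sigmoid(W @ v_noisy + b_hid)
    return W.T @ h_mean + c_vis

def denoise_image(img, W, b_hid, c_vis, patch=8):
    """Denoise every overlapping patch and average the overlapping estimates."""
    height, width = img.shape
    out = np.zeros_like(img)
    counts = np.zeros_like(img)
    for i in range(height - patch + 1):
        for j in range(width - patch + 1):
            p = img[i:i + patch, j:j + patch].reshape(-1)
            rec = grbm_denoise_patch(p, W, b_hid, c_vis).reshape(patch, patch)
            out[i:i + patch, j:j + patch] += rec
            counts[i:i + patch, j:j + patch] += 1.0
    return out / counts

# Toy usage with random, untrained parameters, only to show the shapes involved.
rng = np.random.default_rng(0)
n_vis, n_hid = 64, 256                      # 8x8 patches, hypothetical hidden size
W = rng.normal(0.0, 0.01, (n_hid, n_vis))
b_hid, c_vis = np.zeros(n_hid), np.zeros(n_vis)
noisy = rng.normal(0.0, 1.0, (32, 32))
print(denoise_image(noisy, W, b_hid, c_vis).shape)
```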
G0OapcfeK3g_R
Block Coordinate Descent for Sparse NMF
[ "Vamsi Potluru", "Sergey M. Plis", "Jonathan Le Roux", "Barak A. Pearlmutter", "Vince D. Calhoun", "Thomas P. Hayes" ]
Nonnegative matrix factorization (NMF) has become a ubiquitous tool for data analysis. An important variant is the sparse NMF problem which arises when we explicitly require the learnt features to be sparse. A natural measure of sparsity is the L$_0$ norm, however its optimization is NP-hard. Mixed norms, such as L$_1$/L$_2$ measure, have been shown to model sparsity robustly, based on intuitive attributes that such measures need to satisfy. This is in contrast to computationally cheaper alternatives such as the plain L$_1$ norm. However, present algorithms designed for optimizing the mixed norm L$_1$/L$_2$ are slow and other formulations for sparse NMF have been proposed such as those based on L$_1$ and L$_0$ norms. Our proposed algorithm allows us to solve the mixed norm sparsity constraints while not sacrificing computation time. We present experimental evidence on real-world datasets that shows our new algorithm performs an order of magnitude faster compared to the current state-of-the-art solvers optimizing the mixed norm and is suitable for large-scale datasets.
[ "sparsity", "norm", "datasets", "block coordinate descent", "nmf", "ubiquitous tool", "data analysis" ]
conferencePoster-iclr2013-conference
https://openreview.net/pdf?id=G0OapcfeK3g_R
https://openreview.net/forum?id=G0OapcfeK3g_R
ICLR.cc/2013/conference
2013
{ "note_id": [ "WYMDnhGXd0L_5", "9pQNdTOGrb9Pw", "QOxbO7qFg2Och", "gWF1WlYIRPpoT", "Y8F18yu7HQ6aJ", "YlFHNQiVHDYVP", "cc18-e0C8uSHG", "OEMFOvtudWEJh", "KKA-Ef3zTjKbl" ], "note_type": [ "review", "review", "comment", "review", "review", "review", "review", "review", "review" ], "note_created": [ 1363287900000, 1360229520000, 1364235300000, 1362215700000, 1361826300000, 1363996080000, 1363661460000, 1362274980000, 1362186300000 ], "note_signatures": [ [ "Vamsi Potluru" ], [ "Paul Shearer" ], [ "Vamsi Potluru" ], [ "anonymous reviewer 1d08" ], [ "Vamsi Potluru" ], [ "anonymous reviewer d723" ], [ "Vamsi Potluru" ], [ "anonymous reviewer 202b" ], [ "anonymous reviewer d723" ] ], "structured_content_str": [ "{\"review\": \"Thanks to all the reviewers for their detailed and insightful comments and suggestions.\\n\\nWe are working on incorporating most of them in to our paper and should have the updated version this weekend.\"}", "{\"review\": \"The main convergence result in the paper, Theorem 3, does not prove what it purports to prove. Specifically the proof of Theorem 3 refers to a completely different optimization problem than the one the authors claim to be solving on page 5 and throughout the paper.\\n\\nIn the proof the authors replace the nonconvex constraint ||W_j||_2 = 1 on page 5 with the convex relaxation ||W_j||_2 <= 1. This relaxation appears to be standard, but it actually allows W_j to become arbitrarily nonsparse, for one may decrease L2 norm of a given W_j (while keeping ||W_j||_1 = k) simply by averaging a given W_j with a constant vector. Allowing arbitrary nonsparsity defeats the point of the proposed model, which is to maintain the sparsity of the W_j.\\n\\nTo keep the L1/L2 ratio bounded and thus maintain sparsity, the inequality should go in the other direction: ||W_j||_2 >= 1. But this is a nonconvex set so Nesterov's theorems do not apply. Theorem 3 for this problem must be proven by a different route (see for example Attouch 2011, http://www.optimization-online.org/DB_FILE/2010/12/2864.pdf), or one could forget proof and just say the algorithm seems to work fine empirically.\"}", "{\"reply\": \"Thanks again for your detailed comments. We will incorporate them into our paper.\"}", "{\"title\": \"review of Block Coordinate Descent for Sparse NMF\", \"review\": \"Summary:\\n\\nThe paper presents a new optimization algorithm for solving NMF problems with the Euclidean norm as fitting cost and subject to sparsity constraints. The sparsity is imposed explicitly by adding an equality constraint to the optimization problem, imposing the sparsity measure proposed in [10] (referred as L1/L2 measure) of the columns of the matrix factors to be equal to a pre-defined constant. The contribution of this paper is to propose a more efficient optimization procedure for this problem. This is obtained mainly due to two variations on the original method introduced in [10]: (i) a block-coordinate descent strategy (ii) a fast algorithm for minimizing the subproblems involved in the obtained block coordinate scheme. Experimental evaluations show that the proposed algorithm runs substantially faster than previous works proposed in [7] and [10]. 
The paper is well written and the problem is clearly presented.\", \"pros\": \"- The paper presents an algorithm to solve an optimization problem\\nthat is significantly faster than available alternatives.\", \"cons\": [\"it is not clear why this particular formulation is better than other similar alternatives that can be efficiently optimized\", \"the proposed approach seems limited to work with the L2 norm as fitting cost.\", \"the convergence results for the block coordinate scheme is not\", \"applicable to the proposed algorithm\"], \"general_comment\": \"1.\\n\\nThe measure used for sparsifying the NMF is an L1/L2 measure proposed in [10] (based on the relationship between the L1 and L2 norm of a given vector). The authors list interesting properties of this measure to justify its use and it seems a good option.\\n\\nI understand that it is not the purpose of this paper to study or compare different regularizers. However, I believe that the authors should provide clear examples where this precise formulation (with the equality constraint) is better. Maybe even empirical evaluation (or a reference to a paper performing this study). Having a hard constrain in the sparsity level for every data code (or dictionary atom) seems too restrictive.\\n\\nThis is a very relevant issue, since explicitly imposing the sparsity constraint leads to a harder optimization problem with slower optimization algorithms (as explained by the authors). An important modeling advantage is required to justify the increase in complexity.\", \"in_the_work\": \"Berry, M. W., et al. 'Algorithms and applications for approximate nonnegative matrix factorization.' Computational Statistics & Data Analysis 52.1 (2007): 155-173.\\n\\nthe authors adopt the sparsity measure form [10] but include it on a Lagrangian formulation. This implicit way of imposing sparsity can be combined with other fitting terms (e.g. beta divergences) and it is easier to optimize.\", \"this_was_done_with_a_very_similar_sparsity_measure_in_the_work\": \"V, Tuomas. 'Monaural sound source separation by nonnegative matrix factorization with temporal continuity and sparseness criteria.' Audio, Speech, and Language Processing, IEEE Transactions on 15.3 (2007): 1066-1074.\\n\\nThe author proposes to add to the cost function a sparsity regularization term also of the form L1/L2 and was later used for audio source separation in:\\n\\nW. Felix, J. Feliu, and B. Schuller. 'Supervised and semi-supervised suppression of background music in monaural speech recordings.' Acoustics, Speech and Signal Processing (ICASSP), 2012 IEEE International Conference on. IEEE, 2012.\\n\\n2.\\n\\nThe strategy proposed in this paper alternatively fixes one matrix and minimizes over the other in a block coordinate fashion. In contrasts with [10], in which only a descent direction is searched. Maybe here there is another reason for the speed-up?\\n\\nWhen the minimization is performed on a matrix factor that is subject to the sparsity constraint the authors employ a block coordinate descent strategy, referred to as sequential-pass. The authors empirically demonstrate that this strategy leads to a significant improvement in time.\\n\\nThe minimization over each block (or columns) leads to a linear maximization problem subject to constraining the L1 and L2 norm to be constant, referred as sparse-opt. The authors propose an algorithm for exactly solving this subproblem. This problem also appears in [10] but in a slightly different way. 
In [10] the author proposes a heuristic method for projecting a vector to meet the sparsity constraints (sort of a proximal projection but to a non-convex set).\\n\\nIn Theorem 3, the authors present a convergence result for a relaxed version of the sequential-pass. Specifically, they relax the constraint on the L2 norm to be an inequality (instead of an equality). In this new setting, imposing the L1 norm to be constant no longer implies that the sparsity measure is constant. The quotient L1/L2 should be used instead, but this is no longer coincides with the sparse-opt problem.\", \"other_minor_comments\": [\"In Section 2.2 and later in Section 6, the authors refer to the properties that a good sparsity measure should have, according to [12]. I think that it would help the clarity of the presentation to briefly list these properties in Section 2.2 instead of defining them within the text of Section 6.\", \"In Section 3.1, the equation for the Lagrangian of the problem (5) should also include the term corresponding to the non-negativity constraint on y. This does not affect the derivation, since the multipliers for that constraint would be zero when the y_i are not, thus the obtained values of lambda, mu and obj would remain the same.\"]}", "{\"review\": \"Thanks a lot for pointing this out. You are right about the issue. We\\nare currently working on fixing the proof, as we hope that in our\\nparticular case the objective function will force the L2 equality\\nconstraint to be active at the optimum. The algorithm does still work\\nfine in practice, and we have never encountered an occurrence of\\ndivergence in our experiments. We will take out the proof if we\\ncannot fix it by the review deadline.\"}", "{\"review\": [\"Dear authors,\", \"the revision of your paper is appreciated; the three major issues from my review have been resolved.\", \"I agree that explicit constraints may be harder to optimize, but the argument that then (non-expert) users can get the representation they want without fiddling parameters is a very good one and does motivate this line of research. I don't think a radiologist would want to spend too much time analyzing a brain scan using non-intuitive knobs. It would be nice if you could add some fMRI or sMRI images to enhance Figure 1.\", \"Here is a short list of things that may help you improve your paper (I emphasize that these are no must-haves):\", \"Page 5, replace 'i' with 'i = 1' in the sums of the Lagrangian's derivatives (this would be consistent with the other sums).\", \"In the first line after the derivatives, add gamma to the Lagrange parameters.\", \"In the same paragraph, mention that the termination criterion from the for loop of Sparse-opt is equivalent to selecting the one that maximizes b' * y (you may want to cite [5] there).\", \"In Algorithm 2 (Sparse-opt), p^star should be initialized just before the for loop (for the reason given in my review).\", \"In line 3 of Algorithm 2, replace the two-element set (with ceil(k^2) and m) with an ordinary for-loop (with the new termination criterion, the order in which {ceil(k^2), ..., m} is traversed is important).\", \"Page 7, Section 5.1: Tidy up the third bullet point (it's somehow intermixed with the van Hateren data set which you don't seem to use anyways).\", \"Same section, fourth bullet point: use \\tt or \\u000for the URL to SPM5 (this would be consistent with the style you used for the other URLs). 
Add a point at the end of the text of that bullet point.\", \"Page 8, Section 5.2, last paragraph: You should add one sentence on how the running times behave when Bi-Sparse NMF from the Appendix is used, as there a Sparse-opt is carried out for high-dimensional vectors (dimension there equals the number of samples for the rows of H).\", \"Page 10: Figure 6 should be moved to be on the same page as Section 5.2, where it is referenced.\", \"Page 11, Section 7: Remove the 'heuristic' in the final line of the first paragraph.\", \"Page 12, Reference [15] (Hsieh and Dhillon): The page here still reads 'xx'.\", \"In Figures 4, 5 and 6 the y axes from the upper rows interfere with those of the lower rows.\", \"There are also some stray white spaces throughout the paper that should be fixed.\"]}", "{\"review\": \"Anonymous d723:\\n\\n1. Thanks for pointing out the bug in the projection operator algorithm Sparse-opt. We re-ran all the algorithms on\\nthe datasets based on the suggested bugfix and generated new figures for all the datasets. \\n2. We highlight the efficiency of our algorithm (O(mlogm)) compared to the worst-case scenario of O(m^2) for \\nHoyer's projection algorithm. Our algorithm can be further improved to have linear time complexity by using a \\npartitioning algorithm similar to the one in Quicksort.\\n3. We have fixed most of the issues such as references to parallel updates, increasing figure sizes, modifying citations, and \\nadding line numberings.\", \"anonymous_1d08\": \"1. Sparsity on the features can be set as user-defined intervals. This is illustrated on the ORL face dataset where\\nwe are able to learn sets of local and global features. In practice, this enables the user to finely tune the \\nfeatures based on domain knowledge. For instance, in fMRI datasets, one could model the features for the \\nbrain signals and those of the artifacts distinctly based on different sparsity intervals.\\n\\n2. Implicit regularization may lead to easier optimization problems but can be harder to interpret from a user point of view.\\nThe regularization parameter maps to sparsity values of the features but it is hard to know what this mapping should be before\\nthe algorithm is run.\\n\\n3. We have fixed the Lagrangian formulation to include the nonnegativity constraints and added a brief list of desirable \\nproperties to section 2.2.\", \"anonymous_202b\": \"Sparse PCA and dictionary learning are slightly different formulations than the one considered here. \\nAlso, SPAMS does not consider the exact formulation of the problem we are tackling in this paper. \\nWe are solving an explicitly constrained sparsity problem and this relates to the question posed by reviewer 1d08.\\nSo, a direct comparison of running-times for algorithms solving different problem formulations would not be fair. \\nHopefully, the cost of running time of our algorithm pays off for applications where explicitly modeling the user requirements \\nis of primary importance.\\n\\n\\n-----------------\\n\\nWe have removed the convergence proof from the present draft based on the comments from the reviewers\\nand Paul Shearer. 
However, we are looking into fixing the proof for the final version.\\nAlso, we are looking into other examples where one would like to explicitly constrain the sparsity of\\nthe factorization.\\n\\nThanks again to all the reviewers for the constructive suggestions and insightful questions.\\n\\nIf the arxiv version is not updated by view time, please find a copy at:\", \"http\": \"//www.cs.unm.edu/~ismav/papers/ncnmf.pdf\"}", "{\"title\": \"review of Block Coordinate Descent for Sparse NMF\", \"review\": \"This paper considers a dictionary learning algorithm for positive data. The sparse NMF approach imposes sparsity on the atoms, and positivity on both atoms and decomposition coefficients.\\n\\nThe formulation is standard, i.e., applying a contraint on the L1-norm and L2-norm of the atoms. The (limited) novelty comes in the optimization formulation: (a) with respect to the atoms, block coordinate descent is used with exact updates which are based on an exact projection into a set of constraints (this projection appears to be novel, though the derivation is standard), (b) with respect to the decomposition coefficients, multiplicative updates are used.\\n\\nRunning-time comparisons are done, showing that the new formulation outperforms some existing approaches (from 2004 and 2007).\", \"pros\": \"-Clever use of the structure of the problem for algorithm design\", \"cons\": \"-The algorithm is not compared to the state of the art (there has been some progress in sparse PCA and dictionary learning since 2007). In particular, the SPAMS toolbox of [19] allows sparse dictionary learning with positivity constraints. A comparison with this toolbox would help to assess the significance of the improvements.\\n-Limited novelty.\"}", "{\"title\": \"review of Block Coordinate Descent for Sparse NMF\", \"review\": \"This paper proposes new algorithms to minimize the Non-negative Matrix Factorization (NMF) reconstruction error in Frobenius norm subject to additional sparseness constraints (NMFSC) as originally proposed by [R1]. The original method from [R1] to minimize the reconstruction error is a projected gradient descent. While in [R1] a geometrically inspired method is used to compute the projection onto the sparseness constraints, this paper proposes to use Lagrange multipliers instead. To solve the NMFSC problem, the authors propose to update the basis vectors one at a time (therefore their method is called Sequential Sparse NMF or SSNMF), while in ordinary NMF/NMFSC the entire matrix with the basis vectors is updated at once. Experiments are reported that show that SSNMF is one order of magnitude faster compared to the algorithm of [R1].\\n\\nThe paper may only propose more efficient algorithms to solve a known optimization problem instead of proposing new learnable representations, but the approach is interesting and the results are promising. There are however some major issues with the paper:\\n\\n(1) The sparseness projection of [R1] is essentially a Euclidean projection onto the intersection of a scaled probabilistic simplex (L1 sphere intersected with positive cone) and the scaled unit sphere (in L2 norm). The method of [R1] to compute this projection is an alternating projection algorithm (similar to the Dykstra algorithm for convex sets). The method was proven correct by [R2], and additionally it was shown that the projection is unique almost everywhere. Therefore, the method of [R1] and Algorithm 2 of the paper (Sparse-opt) should almost always compute the same result. 
In the paper, however, the sparseness projection of [R1] is denoted the 'projection-heuristic' while Sparse-opt is called 'exact', and when the projection of [R1] is used in the SSNMF algorithm instead of Sparse-opt the reconstruction error is no more monotonically decreasing as optimization proceeds. As both projection algorithms should compute the same, the plot should be identical for them when using the same starting points. Section 5.2 of the paper should be enhanced to verify whether both algorithms actually compute the same result and to find the bug that causes this contradiction.\\n\\n(2) The proposed Algorithm 2 can be considered a (non-trivial) extension of the projection onto a scaled probabilistic simplex as described by [R3] and is a valuable contribution. In the paper, there is however a bug in the execution (which may explain the discrepancies described in Issue (1)): There are no multipliers that enforce the entries of the projection to be non-negative, as would be required by Problem (5) in the paper. Analogously, in Algorithm 2 there is no check in the loop of Line 2 to guarantee the values for lambda and mu produce a feasible (that is non-negative) solution. I implemented the algorithm in Matlab and compared it to the sparseness projection of [R1] (which is freely available on the author's homepage). In the algorithm as given in the paper, p_star always equals m after line 3 and no correct solution to Problem (5) is found in general. If I add the check for a feasible solution, both Sparse-opt and the sparseness projection of [R1] compute numerically equal results. I first suspected there was a typo in the manuscript, but that still would not explain the contradictory results from Section 5.2 of the paper.\\n\\nOn the positive side, I did check the expressions for lambda, mu and obj as given in Algorithm 2, and found them correct. Further, the algorithm is empirically faster than that of [R1], and its run-time is guaranteed theoretically to be at most quasilinear.\\n\\nBased on the bugfix, I realized that the method from [R4] could be adapted to Sparse-opt to further enhance its run-time efficiency: Set p_star to m before the for loop of line 2 (in case all elements of the projection will be non-zero). Then, after computation of lambda and mu (obj does not need to be computed anymore with this modification), check if a_p < -mu(p) holds. If it does, set p_star to p - 1 and break the for loop. Line 3 of the algorithm should then be omitted. This modification fixes the algorithm, and additionally obj is not needed, and for lambda and mu simple scalars are sufficient to store at most two values of each.\\n\\n(3) As noted by Paul Shearer and confirmed by the first author of the paper (see public comments), the proof of Theorem 3 is flawed as the arguments there would only apply if the sparseness constraints would induce a convex set (which they don't). I wouldn't have any objections if Theorem 3 and its proof were withdrawn and removed from the manuscript.\\n\\nMoreover, I verified Algorithm 3 from the paper and found no obvious bugs. I implemented all algorithms and ran them on the ORL face data set and found that SSNMF computes a sparse representation. I did not check what happens without the bugfix for Algorithm 2, though. 
The authors should definitely fix the major issues and repeat the experiments before publication (judging from the run-time given in Figure 3 and Figure 4 this shouldn't take too long).\", \"there_are_some_minor_issues_too\": [\"It should be briefly discussed whether SSNMF could benefit from a multi-threaded implementation as NMF/NMFSC do (in the experiments, the number of threads was set to one).\", \"Figures should be enlarged and improved such that a difference between the plots is also noticeable when printed in black and white on letter size paper.\", \"The references should be polished to achieve a consistent style (remove the URLs and ISSNs, don't use two different styles for JMLR ([7] and [10]) and NIPS ([6] and [17]), fix the page of [11], add volume and issue to [15] and [23], add the journal version of [21] unless that citation is withdrawn with Theorem 3, etc.).\", \"Always cite using numbers in the main text ('\\cite{}') instead of using only the author names ('\\citeauthor{}') (e.g. Hoyer, Kim and Park, etc.), because now [9], [10] and [13], [14] could be confused.\", \"The termination criteria should be described more elaborately for Algorithms 1, 3, and 4.\", \"Page 2, just after Expression (1): This is only a convex combination if the rows of H are normed (wrt. L1), otherwise it's a conical combination.\", \"Page 2, just after Expression (2): We use *subscripts* to denote... (missing s). Please also define what H_j^T would mean (is it (H_j)^T or (H^T)_j or something else?).\", \"It would be nice to add line numbers to all algorithms (some have ones, some don't).\", \"In Algorithm 3, Line 7: This should probably read G_j^T, as i is not defined here?\", \"Mention the number of images for the sMRI data set in Section 5.1, and use '\\url{}' or a footnote for the URL there.\", \"Cite [7] in the third bullet point in Section 2.2.\"], \"references\": \"[R1] Hoyer. Non-negative Matrix Factorization with Sparseness Constraints. JMLR, 2004, vol. 5, pp. 1457-1469.\\n[R2] Theis et al. First results on uniqueness of sparse non-negative matrix factorization. EUSIPCO, 2005, vol. 3, pp. 1672-1675.\\n[R3] Duchi et al. Efficient Projections onto the l1-Ball for Learning in High Dimensions. ICML, 2008, pp. 272-279.\\n[R4] Chen & Ye. Projection Onto A Simplex. arXiv:1101.6081v2, 2011.\"}" ] }
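The exchange above centers on the Sparse-opt subproblem: maximize b^T y subject to y >= 0, sum(y) = k1 and ||y||_2 = 1, where candidate supports are prefixes of b sorted in decreasing order (hence the {ceil(k^2), ..., m} range mentioned by reviewer d723). The sketch below is a straightforward NumPy rendering of that idea with an explicit nonnegativity check, which is the gist of the reviewer's bugfix; it is not the authors' Algorithm 2 or their reference code, the function name and the brute-force loop are mine, and it omits the early-termination and running-sum refinements that give the O(m log m) bound discussed in the responses. It assumes 1 <= k1 <= sqrt(m) so the constraint set is nonempty.

```python
import numpy as np

def sparse_opt(b, k1):
    """Maximize b^T y subject to y >= 0, sum(y) = k1, ||y||_2 = 1 (illustrative sketch).

    On a fixed support of size p (the p largest entries of b), the maximizer is
    y = k1/p + t * (b - mean(b)) with t >= 0 chosen so that ||y||_2 = 1; candidates
    with negative entries are skipped, and the best feasible objective wins.
    """
    b = np.asarray(b, dtype=float)
    m = b.size
    order = np.argsort(-b)                  # indices of b sorted in decreasing order
    bs = b[order]
    p_min = max(int(np.ceil(k1 ** 2)), 1)   # need p >= k1^2 so both norm constraints fit
    best_obj, best_y = -np.inf, None
    for p in range(p_min, m + 1):
        pref = bs[:p]
        resid = pref - pref.mean()
        r2 = resid @ resid
        slack = 1.0 - (k1 ** 2) / p         # squared L2 budget left after fixing the L1 sum
        if r2 > 0:
            t = np.sqrt(slack / r2)
        elif slack > 1e-12:                 # constant prefix only works when p == k1^2
            continue
        else:
            t = 0.0
        y_pref = k1 / p + t * resid
        if y_pref.min() < 0:                # reviewer's point: enforce nonnegativity here
            continue
        obj = pref @ y_pref
        if obj > best_obj:
            best_obj, best_y = obj, np.zeros(m)
            best_y[order[:p]] = y_pref
    return best_y, best_obj
```

For example, sparse_opt(np.random.randn(100), 5.0) returns a nonnegative vector whose entries sum to 5 and whose Euclidean norm is 1, with its nonzeros confined to the largest entries of the input.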
aQZtOGDyp-Ozh
Learning Stable Group Invariant Representations with Convolutional Networks
[ "Joan Bruna", "Arthur Szlam", "Yann LeCun" ]
Transformation groups, such as translations or rotations, effectively express part of the variability observed in many recognition problems. The group structure enables the construction of invariant signal representations with appealing mathematical properties, where convolutions, together with pooling operators, bring stability to additive and geometric perturbations of the input. Whereas physical transformation groups are ubiquitous in image and audio applications, they do not account for all the variability of complex signal classes. We show that the invariance properties built by deep convolutional networks can be cast as a form of stable group invariance. The network wiring architecture determines the invariance group, while the trainable filter coefficients characterize the group action. We give explanatory examples which illustrate how the network architecture controls the resulting invariance group. We also explore the principle by which additional convolutional layers induce a group factorization enabling more abstract, powerful invariant representations.
[ "variability", "invariance group", "convolutional networks", "translations", "rotations", "express part", "many recognition problems", "group structure" ]
conferencePoster-iclr2013-workshop
https://openreview.net/pdf?id=aQZtOGDyp-Ozh
https://openreview.net/forum?id=aQZtOGDyp-Ozh
ICLR.cc/2013/conference
2013
{ "note_id": [ "s1Kr1S64z0s8a", "uLsKzjPT0lx8V", "7XaieIunN4X1I" ], "note_type": [ "review", "review", "review" ], "note_created": [ 1362379800000, 1361928660000, 1363658220000 ], "note_signatures": [ [ "anonymous reviewer 3316" ], [ "anonymous reviewer bf60" ], [ "Joan Bruna" ] ], "structured_content_str": [ "{\"title\": \"review of Learning Stable Group Invariant Representations with Convolutional\\n Networks\", \"review\": \"This short paper presents a discussion on the nature and the type of invariances that are represented and learned by convolutional neural networks. It claims that the invariance a layer in a convolutional neural network can be expressed with a Lie group, and that the invariance of a deep convolutional neural network can be expressed with a product of groups.\\n\\nThis is a discussion paper that is difficult to understand without being familiar with group theory. It would be easier to read if there were even toy examples that illustrate the concepts presented in this work. In its current form the paper is incomplete; to be useful, it needs to use these ideas to somehow improve the training or generalization of convolutional neural networks. On a related note, it is hard to understand the significance of the results. So the invariance of a deep convolutional neural network can be expressed with a semi-direct product of some groups; this is nice, but what does it lead to, how can it be used? \\n\\nTo summarize, paper has intriguing ideas, but they are not sufficiently developed, and their significance is not clearly explained.\"}", "{\"title\": \"review of Learning Stable Group Invariant Representations with Convolutional\\n Networks\", \"review\": \"I fully admit that I don't know enough about group theory to evaluate this submission. However, I do know about convolutional networks, so it is troubling that I can't understand it.\\n\\nSince this is only a workshop paper, we're not going to look for a new reviewer.\\n\\nWhen you do eventually pursue conference publication, I would suggest that you consider the audience and adapt the presentation somewhat, so that people who are familiar with convolutional networks but not with group theory will be able to get an idea of what the paper is about, and can read about the appropriate subjects to be able to understand it better.\\n\\nI would also suggest providing a high level summary of the paper that makes it clear what you consider your original contributions to be. I had a hard time telling what was original content and what was just describing what convolutional networks are in group theory notation.\"}", "{\"review\": \"I would like to thank the reviewers for their time and constructive comments.\\nIndeed, the paper, in its current form, explores the connection between deep convolutional networks and group invariance; but it lacks practical examples to motivate why this connection might be useful or interesting.\\nI completely agree in that the paper is difficult to read and could be made much more accessible. Together with the practical aspects mentioned in the last section, this will be my priority. \\nThank you again.\"}" ] }
6s2YsOZPYcb8N
Cutting Recursive Autoencoder Trees
[ "Christian Scheible", "Hinrich Schuetze" ]
Deep Learning models enjoy considerable success in Natural Language Processing. While deep architectures produce useful representations that lead to improvements in various tasks, they are often difficult to interpret. This makes the analysis of learned structures particularly difficult. We therefore have to rely on empirical tests to see whether a particular structure makes sense. In this paper, we present an analysis of a well-received model that produces structural representations of text: the Semi-Supervised Recursive Autoencoder. We show that for certain tasks, the structure of the autoencoder may be significantly reduced and we evaluate the produced structures through human judgment.
[ "recursive autoencoder trees", "difficult", "analysis", "models", "considerable success", "natural language processing", "deep architectures", "useful representations", "improvements", "various tasks" ]
conferencePoster-iclr2013-conference
https://openreview.net/pdf?id=6s2YsOZPYcb8N
https://openreview.net/forum?id=6s2YsOZPYcb8N
ICLR.cc/2013/conference
2013
{ "note_id": [ "KB-5ppfbu7pwL", "SPfmPG0ry9nrB", "XHzDeHdtlbXIc", "fvJTwf6BDQvYu", "Od6cRb72yhb2P", "9IkTIwySTQw0C", "vDY7MvZACzMTc" ], "note_type": [ "review", "review", "review", "review", "review", "review", "review" ], "note_created": [ 1362170160000, 1362361260000, 1362455040000, 1362019200000, 1363702380000, 1362043620000, 1362181620000 ], "note_signatures": [ [ "anonymous reviewer 5a71" ], [ "anonymous reviewer 2611" ], [ "Arun Tejasvi Chaganty" ], [ "anonymous reviewer 5b0f" ], [ "Christian Scheible" ], [ "Sida Wang" ], [ "Sam Bowman" ] ], "structured_content_str": [ "{\"title\": \"review of Cutting Recursive Autoencoder Trees\", \"review\": \"The paper considers the compositional model of Socher et al. (EMNLP 2011) for predicting sentence opinion polarity. The authors define several model simplification types (e.g., reducing the maximal number of levels) and study how these changes affect sentiment prediction performance. They also study how well the induced structures agree with human judgement, i.e. how linguistically plausible (both from syntactic and semantic point of view) the structures are.\\n\\nI find this work quite interesting and the (main?) result somewhat surprising. What it basically shows is that the compositional part does not seem to benefit the sentence-level sentiment performance. In other words, a bag-of-words model with distributed word representations performs as well (or better). An additional, though somewhat small-scale, annotation study show that the model is not particularly accurate in representing opinion shifters (e.g., 'not' does not seem to reverse polarity reliably). \\n\\nThough some of the choices of model simplification seem relatively arbitrary (e.g., why choosing a single subtree, rather than, e.g., dropping several subtrees within some budget?) and the human evaluation is somewhat small scale (and, consequently, not entirely convincing), I found the above observation interesting and important.\\n\\nIt would also be interesting to see if the compositional model appears to be more important when an actual syntactic tree (as in Socher et al (NIPS 2011) for paraphrase detection) is used instead of automatically inducing the structure. \\n\\nOne point which might be a little worrying is that the same parameters are used across different learning architectures, though one may expect that different regularizations and training regimes might be needed. However, the full model is estimated with the parameters chosen by the model designers on the same datasets, so it should not affect the above conclusion.\", \"pros\": \"-- It provides interesting analysis of the influential model of Socher et al (2011)\\n-- Both analysis of linguistic plausibility are provided and analysis of the effect of model components on sentiment prediction performance. 
Though the original publication (Socher et al., EMNLP 2011) contained a BOW baseline, it was not exactly comparable; the flat model studied here seems a more natural baseline.\", \"cons\": \"-- Semantic and syntactic coherence analysis may be too small scale to be seriously considered (2 human experts on a couple dozen examples).\"}", "{\"title\": \"review of Cutting Recursive Autoencoder Trees\", \"review\": \"This research analyses the Semi-Supervised Recursive Autoencoder (RAE) of Socher et al., as applied to the NLP task of sentiment classification from sentences of movie reviews.\\n \\nA first qualitative analysis, conducted with the help of human annotators, reveals that the syntactic and semantic role of reversers ('not') is not modeled well in many cases.\\n\\nThen a systematic quantitative analysis is conducted, using the representation of a sentence as the average of the representation output at each node of the tree to train a classifier of sentiment, and analysing what is lost or gained by using only specific subsets of the tree nodes. These results clearly indicate that intermediate nodes bring no additional value for classification performance compared to using only the word embeddings learned at the leaf nodes. \\nThe full depth of the tree appears to extract no more useful information than the leaf nodes only.\", \"pros\": \"I believe that this paper's analysis is a significant contribution with an important message. It is well conducted and properly questions and sheds light on the meaning and usefulness of the tree *structure* learned by RAEs, showing that drastic structure simplifications yield the same state-of-the-art performance on the considered classification task. It has the potential to start a healthy controversy that will surely seed interest in further investigation of this important point.\", \"cons\": \"The message would carry much more weight if a similar analysis of RAEs could be conducted also on several other (possibly more challenging) NLP tasks than movie sentiment classification and pointed towards similar conclusions.\"}", "{\"review\": \"The paper presents a very interesting error analysis of recursive autoencoder\\ntrees. However, I wish the following aspects of the evaluation were\\naddressed.\\n\\na) In the qualitative analysis (Section 5), only 10 samples out of a corpus of\\nover 10,000 were studied. This is too small to make any statistically\\nsignificant statements.\\n\\nb) When describing the behaviour on sentences with reversing constructions, it\\nis not clear how the RAE trees actually predicted the sentence; were the three\\ncorrect instances those with reversed sentiment, suggesting that the RAE trees\\nalways reverse the sentiment when a reverser appeared?\\n\\nc) Looking at the results from the quantitative analysis, the fact that the RAE\\ntrees predict with a full 77.5% accuracy despite random feature embeddings\\nseems to be a strong signal that the parse structure is playing a very\\nimportant role. This conflicts with the qualitative analysis that\\ncompositionality is not well modelled by RAEs. I feel more space should be\\ndedicated to discussing this result. 
If the real compositionality modelled by\\nthe RAE trees is for intensifying constructors, it should be possible to\\nevaluate intensifying constructions by comparing the softmax classification\\nweights for a sentiment.\"}", "{\"title\": \"review of Cutting Recursive Autoencoder Trees\", \"review\": \"This paper analyzes recursive autoencoders for a binary sentiment analysis task.\", \"the_authors_include_two_types_of_analyses\": \"looking at example trees for syntactic and semantic structure and analyzing performance when the induced tree structures are cut at various levels.\\nMore in depth analysis of these new models is definitely an interesting task. Unfortunately, the presented analyses and conclusions are incomplete or flawed.\", \"tree_cutting_analysis\": \"This experiment explores the interesting question of how important the word vectors and tree structures are for the RAE.\\nThe authors incorrectly conclude that 'the strength of the RAE lies in the embeddings, not in the induced tree structure'.\\nThis conclusion is reached by comparing the following two models (among others):\\n1) A word vector average model with no tree structures that uses about 50x10,000 parameters (50 dimensional word vectors and a vocabulary of around 10,000 words) and reaches 77.67% accuracy.\\n2) A RAE model with random word vectors that uses 50 x 100 parameters and gets 77.49% accuracy.\\n\\nAn accuracy difference of 0.18% on a test set of ~1000 trees means that ~2 sentences are classified differently and is not statistically significant. So the results of both models are the same.\\nThat means that the RAE trees achieved the same performance with 1/100 of the parameters of the word vectors. So, the tree structures seem to work pretty well, even on top of random word vectors.\\n\\nA good comparison would be between models with the same number of parameters. Both models could easily be increased or decreased in size.\\n\\nOne possible take away message could have been that the benefits of RAE tree structures and word embeddings are equal but performance does not increase when both are combined in a task that only has a single label for a full sentence.\", \"but_even_that_one_is_difficult\": \"All columns of the main results table (cutting trees) have the same top performance when it comes to statistical significance, so it would have also been good to look at another dataset.\\nAnother problem is that the RAE induced vectors are only used by averaging all vectors in the tree. \\n\\nMore important analyses into the model could explore what the higher node vectors are capturing by themselves instead of only in an average with all lower vectors.\", \"tree_structure_analysis\": \"The first analysis is about the induced tree structures and finds that that they do not follow traditional parsing trees. \\nThis was already pointed out in the original RAE paper and they show several examples of phrases that are cut off.\\nAn interesting comparison here would have been to apply the algorithm on correct parse trees and analyze if it makes a difference in performance.\\n\\n\\nThe second analysis is about sentiment reversal, such as 'not bad'. \\nUnfortunately, the given binary examples are hard to interpret. \\nAre phrases like 'not bad' positive in the original training data? It's not clear to me that 'not bad' is a very positive phrase.\\nDo the probabilities change in the right direction? When does it work and when does it not work? 
Is the sentiment of the negated phrase wrong or is the negation pushing in the wrong direction?\\nIn order to understand what the model should learn, it would have been interesting to see if the effects are even in the training dataset.\\nAnother interesting analysis would be to construct some simple examples where the reversal is much clearer like 'really not good'.\\n\\n\\nThe paper is well written.\", \"only_one_typo\": \"E_cE and E_eC both used.\"}", "{\"review\": \"Thanks everyone for your comments! I would like to address some of the points made across various comments.\\n\\nI would like to point out to reviewer 'Anonymous 5b0f' that, in the experiment 'noembed', while the embeddings are not used in the classifier, they are still learned during RAE training. Thus, to train the RAE, we do indeed need 50x100 + 50x10,000 parameters, thus making RAE training more complicated than using embeddings only. Training the RAE without any embeddings produces results similar to 'noembed' line 1.\\n\\nRegarding the tree structures, we found that they do not influence the results too much. We achieve around 74% accuracy by simply enforcing iterative combinations from left to right using a one-sided recursion rule.\\n\\nI agree with the point that a binary classification task is less complicated than, for example, a structured-prediction task and thus is too simple to show an improvement with a structural model. I find the result interesting nevertheless; structural understanding should help in sentiment analysis -- at least from a linguistic point of view. However, the RAE model does not seem to capture these properties very well. Socher et al. presented a matrix-vector-based approach at EMNLP 2012 which addresses this problem and is more suitable for modeling compositionality.\\n\\nIt is true that the human evaluation is rather small-scale. We intended this analysis to illustrate the point. Regarding the point about one of the examples, I see that 'not bad' is in itself not too positive, but I (and our annotators) would think that 'not bad at all' is positive.\"}", "{\"review\": \"I've also done some (unpublished) analysis using random and degenerate tree structures and found that it did not matter very much under the RAE framework. I just have a short comment for the results table.\\n\\nGiven that most of the different schemes eventually got us roughly identical results near 77.6% (including the original RAE), and knowing that Naive Bayes/Logistic Regression get an accuracy of around 78% with bag-of-words features and one can get over 79% with bag-of-bigrams features [Wang and Manning, Baselines and Bigrams: Simple, Good Sentiment and Topic Classification, ACL 2012]. \\n\\nIt seems to me that one conclusion from these results is that many schemes will perform similarly once enough information is preserved in the training features. If around 80% accuracy is what a fairly general purpose machine learning algorithm can possibly be expected to do on this dataset without outside information, then one does not have to be very clever with a correct discriminative method to do just slightly worse than Naive Bayes/Logistic Regression.\\n\\nYour results do suggest that the particular structure does not matter very much here, neither does the embedding. 
But I think to really determine if the structure is doing anything, one should repeat this analysis in a place where the model with the structure is way better than the generic-linear-model-with-moderately-informative-features-benchmark, preferably without using extra knowledge.\"}", "{\"review\": \"I was very impressed by some of these results\\u2014especially those for the noembed models\\u2014and this does seem to provide evidence that the high performance of RAEs on sentence-level binary sentiment classification need not reflect a breakthrough due to the use of tree structures.\", \"there_were_a_couple_of_points_that_i_would_like_to_see_brought_up\": \"There appears to have been follow-up work by some of the same authors on RAE models for other related tasks, and it seems somewhat unfair to claim that 'the trees and the embeddings model the same phenomena,' when using one particularly uncomplicated domain of phenomena (binary sentiment) as a case study.\\n\\nLess critically, I would like to see some discussion of (and further investigation into) the extremely poor performances seen with the sub and win models. If I understand correctly that the best-class baseline should achieve at least 50% accuracy, achieving a result substantially worse than that seems to reflect a robust and potentially interesting result about the role of strongly positive and strongly negative words.\"}" ] }
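Much of the discussion above compares a 'flat' average of word embeddings against an average taken over all nodes of an RAE-induced tree. The schematic below makes that comparison concrete; the composition function, normalization and the way the merge order is supplied are simplified stand-ins inspired by Socher et al. (2011), not the authors' implementation, and all names here are hypothetical.

```python
import numpy as np

def compose(c1, c2, W, b):
    # one RAE composition step: parent = tanh(W [c1; c2] + b), length-normalized
    p = np.tanh(W @ np.concatenate([c1, c2]) + b)
    return p / np.linalg.norm(p)

def tree_averaged_features(word_vecs, merges, W, b):
    """Average over leaf and internal node vectors (the 'full tree' sentence feature).
    `merges` lists index pairs into the growing node list, one pair per composition."""
    nodes = [np.asarray(v, dtype=float) for v in word_vecs]
    for i, j in merges:
        nodes.append(compose(nodes[i], nodes[j], W, b))
    return np.mean(nodes, axis=0)

def flat_features(word_vecs):
    """Bag-of-words baseline: average the word embeddings only, no tree at all."""
    return np.mean(np.asarray(word_vecs, dtype=float), axis=0)

# toy usage: three 50-dimensional word vectors and a right-branching merge order
rng = np.random.default_rng(0)
d = 50
words = [rng.standard_normal(d) for _ in range(3)]
W, b = 0.1 * rng.standard_normal((d, 2 * d)), np.zeros(d)
phi_tree = tree_averaged_features(words, merges=[(1, 2), (0, 3)], W=W, b=b)
phi_flat = flat_features(words)
```

The point made in the thread is that a classifier trained on phi_flat and one trained on phi_tree end up within statistical noise of each other on the binary sentiment task, which is what makes the contribution of the tree structure hard to pin down.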
ttxM6DQKghdOi
Discrete Restricted Boltzmann Machines
[ "Guido F. Montufar", "Jason Morton" ]
In this paper we describe discrete restricted Boltzmann machines: graphical probability models with bipartite interactions between discrete visible and hidden variables. These models generalize standard binary restricted Boltzmann machines and discrete naive Bayes models. For a given number of visible variables and cardinalities of their state spaces, we bound the number of hidden variables, depending on the cardinalities of their state spaces, for which the model is a universal approximator of probability distributions. More generally, we describe tractable exponential subfamilies and use them to bound the maximal and expected Kullback-Leibler approximation errors of these models from above. We discuss inference functions, mixtures of product distributions with shared parameters, and patterns of strong modes of probability distributions represented by discrete restricted Boltzmann machines in terms of configurations of projected products of simplices in normal fans of products of simplices. Finally, we use tropicalization and coding theory to study the geometry of these models, and show that in many cases they have the expected dimension but in some cases they do not. Keywords: expected dimension, tropical statistical model, distributed representation, q-ary variable, Kullback-Leibler divergence, hierarchical model, mixture model, Hadamard product, universal approximation, covering code
[ "boltzmann machines", "discrete", "models", "hidden variables", "number", "cardinalities", "state spaces", "probability distributions", "products", "simplices" ]
conferenceOral-iclr2013-conference
https://openreview.net/pdf?id=ttxM6DQKghdOi
https://openreview.net/forum?id=ttxM6DQKghdOi
ICLR.cc/2013/conference
2013
{ "note_id": [ "uc6XK8UgDGKmi", "AAvOd8oYsZAh8", "_YRe0x39e7YBa", "86Fqwo3AqRw0s", "gE0uE2A98H59Y" ], "note_type": [ "review", "review", "review", "review", "review" ], "note_created": [ 1363572060000, 1362487980000, 1363534860000, 1362471060000, 1360957080000 ], "note_signatures": [ [ "Guido F. Montufar, Jason Morton" ], [ "anonymous reviewer fce0" ], [ "Aaron Courville" ], [ "anonymous reviewer 1922" ], [ "anonymous reviewer e437" ] ], "structured_content_str": [ "{\"review\": \"We appreciate the comments of all three reviewers. We posted a revised version of the paper to the arxiv (scheduled to be announced March 18 2013).\\n\\nWhile reviewer 1922 found the paper ``comprehensive'' and ``clearly written'', reviewers e437 and fce0 were very concerned with the presentation of the paper, describing it as ``clearly not written for a machine learning audience'' and ``As it is, the paper does not cater to a machine learning crowd '' and recommended ``this paper should be submitted to a journal'' (e437) and ``I advise the authors to either: - submit it to an algebraic geometry venue - give as many intuitions as possible to help the reader get a full grasp on the results presented. '' (fce0). \\n\\nHaving these recommendations in mind we recognized how certain parts of the original paper might have been too technical to be presented in this venue. We decided to revise the paper focusing on the results that could be most interesting for ICLR, providing a more intuitive picture of the main results, and to treat the purely mathematical problems elsewhere. \\n\\nWe significantly shortened the paper from 20 to 11.5 pages + references. We reorganized the entire paper in order to improve the readability and reduce the number of definitions and concepts used throughout. In the revision we focus on the main results which do not require much mathematical background. Following the recommendation ``it is unreasonable to put all the proof in the supplementary material where they are unlikely to receive the necessary attention'' we included the proofs in the main part of the paper. \\n\\nWe appreciate the positive comments of reviewer 1922, which served as orientation for which of the results could be most interesting to present here in detail. Further, we thank reviewer 1922 for the literature suggestions regarding RBMs with interactions within layers and training, but in the re-organized paper we elected not to treat these topics.\"}", "{\"title\": \"review of Discrete Restricted Boltzmann Machines\", \"review\": \"This paper reviews properties of the Naive Bayes models and Binary RBMs before moving on to introducing discrete RBMs for which they extend universal approximation and other properties.\\n\\nI think such a review and extensions are extremely interesting for the more theoretical fields such as algebraical geometry. As it is, the paper does not cater to a machine learning crowd as it's mostly a sequence of mathematical definitions and theorems statements. I advise the authors to either:\\n- submit it to an algebraic geometry venue\\n- give as many intuitions as possible to help the reader get a full grasp on the results presented.\\n\\nFor the latter point, I advise against using sentences such as 'In algebraic geometrical terms this is a Hadamard product of a collection of secant varieties of the Segre embedding of the product of a collection of projective spaces'. 
Though it sounds incredibly intelligent, I didn't get anything from it, despite my fair knowledge of RBMs.\\n\\nThis work of explaining the results is done fairly well in the Results section, especially for the universal approximation property and the approximation error. This is a good target for the review part of the paper.\"}", "{\"review\": \"To the reviewers of this paper,\\n\\nThere appear to be some disagreement of the utility of the contributions of this paper to a machine learning audience. \\n\\nPlease read over the comments of the other reviewers and submit comment as you see fit.\"}", "{\"title\": \"review of Discrete Restricted Boltzmann Machines\", \"review\": \"This paper presents a comprehensive theoretical discussion on the approximation properties of discrete restricted Boltzmann machines. The paper is clearly written. It provides a contextual introduction to the theoretical results by reviewing approximation results for Naive Bayes models and binary restricted Boltzmann machines. Section 4 of the paper lists the theoretical contributions, while proofs are are delayed to the appendix.\\n\\nNotably, the first result gives conditions, based on the number of hidden and visible units together with their cardinalities, for the joint RBM to be a universal approximator of distributions over the visible units. The theorem provides an extension to previous results for binary RBMs. The second result shows that discrete RBMs can represent distributions with a number of strong modes that is exponential in the number of hidden units, but not necessarily exponential in the number of parameters. The third result shows that discrete RBMs can approximate any mixture of product distributions, with disjoint supports, arbitrarily well.\\n\\nProposition 10 is a nice result showing that a discrete RBM is a Hadamard product of mixtures of product distributions. These decompositions often help with the design of inference algorithms. Lemma 25 provides useful connections between RBMs and mixtures. Subsequently theorem 27 discusses the relation to exponential families.\\nTheorem 29 provides a very nice approximation bound for the KL divergence between the RBM and a distribution in the set of all distributions over the discrete state space, and so on. The paper also presents a geometry analysis but I did not follow all the appendix details about these.\\n\\nFinally the appendices discuss interactions within layers and training. With regard to the first issue, I think the authors should consult\\n\\nH. J. Kappen. Deterministic learning rules for Boltzmann machines. Neural Networks, 8(4):537-548, 1995\\n\\nwhich discusses these lateral connections and approximation properties. With regard to training, I recommend the following expositions to the authors. The last one considers a different aspect of the theory of RBMS, namely statistical efficiency of the estimators:\\n\\nMarlin, Benjamin, Kevin Swersky, Bo Chen, and Nando de Freitas. 'Inductive principles for restricted boltzmann machine learning.' In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pp. 509-516. 2010.\\n\\nTieleman, Tijmen, and Geoffrey Hinton. 'Using fast weights to improve persistent contrastive divergence.' In Proceedings of the 26th Annual International Conference on Machine Learning, pp. 1033-1040. ACM, 2009.\\n\\nMarlin, Benjamin, and Nando de Freitas. 'Asymptotic Efficiency of Deterministic Estimators for Discrete Energy-Based Models'. UAI 2011. 
\\n\\nThe above provide a more clear picture of stochastic maximum likelihood as well as deterministic estimators.\", \"minor\": \"Why does your paper end with a b?\\n\\nIn remark 5. It might be easier to simply use x throughout instead of v.\"}", "{\"title\": \"review of Discrete Restricted Boltzmann Machines\", \"review\": \"The paper provides a theoretical analysis of Restricted Boltzmann Machines with multivalued discrete units, with the emphasis on representation capacity of such models.\\n\\nDiscrete RBMs are a special case of exponential family harmoniums introduced by Welling et al. [1] and have been known under the name of multinomial or softmax RBMs. The parameter updates given in the paper, which are its only not purely theoretical contribution, are not novel and have been known for some time. Though the authors claim that their analysis can serve as a starting point for developing novel machine learning algorithms, I am unable to see how that applies to any of the results in the paper. Thus the only contributions of the paper are theoretical.\\n\\nUnfortunately, those theoretical contributions do not seem particularly interesting, at least from the machine learning perspective, appearing to be direct generalizations of the corresponding results for binary RBMs. The biggest problem with the paper, however, is presentation. The paper is clearly not written for a machine learning audience. The presentation is extremely technical and even the 'non-technical' outline in Section 4 is difficult to follow. Given that the only novel contribution of the paper is the results proved in it, it is unreasonable to put all the proof in the supplementary material where they are unlikely to receive the necessary attention. The fact that the proofs will not fit in the paper due to the ICLR page limit, simply highlights the fact that this paper should be submitted to a journal.\\n\\n[1] Welling, M., Rosen-Zvi, M., & Hinton, G. (2005). Exponential family harmoniums with an application to information retrieval. Advances in Neural Information Processing Systems, 17, 1481-1488.\"}" ] }
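Reviewer 1922 singles out Proposition 10, the decomposition of a discrete RBM as a Hadamard product of mixtures of product distributions. The one-line calculation behind that kind of statement is sketched below in improvised notation (theta for the visible-only parameters, phi_j for the parameters attached to hidden unit j); the paper should be consulted for the precise statement and conditions.

```latex
% Marginalizing the hiddens of a bipartite (restricted) model factorizes over hidden units:
\begin{align}
p(v) \;=\; \frac{1}{Z}\sum_{h}\exp\!\Big(\theta(v) + \sum_{j=1}^{n}\phi_j(v,h_j)\Big)
      \;=\; \frac{1}{Z}\, e^{\theta(v)} \prod_{j=1}^{n} \sum_{h_j}\exp\big(\phi_j(v,h_j)\big).
\end{align}
```

Each factor \sum_{h_j} \exp(\phi_j(v, h_j)) is, up to normalization, a mixture of product distributions over the visible variables (one mixture component per state of h_j), so the model distribution is a renormalized entrywise (Hadamard) product of such mixtures, which is the structure the reviewer highlights as useful for inference.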
jbLdjjxPd-b2l
Natural Gradient Revisited
[ "Razvan Pascanu", "Yoshua Bengio" ]
The aim of this paper is twofold. First, we intend to show that Hessian-Free optimization (Martens, 2010) and Krylov Subspace Descent (Vinyals and Povey, 2012) can be described as implementations of Natural Gradient Descent due to their use of the extended Gauss-Newton approximation of the Hessian. Secondly, we re-derive Natural Gradient from basic principles, contrasting the difference between the two versions of the algorithm that are in the literature.
[ "natural gradient", "aim", "first", "optimization", "martens", "subspace descent", "vinyals", "povey", "implementations" ]
conferencePoster-iclr2013-workshop
https://openreview.net/pdf?id=jbLdjjxPd-b2l
https://openreview.net/forum?id=jbLdjjxPd-b2l
ICLR.cc/2013/conference
2013
{ "note_id": [ "37JmPPz9dT39G", "uEQsuu1xiBueM", "iPpSPn9bTwn4Y", "LuEnLatTnvu1A", "aaN5bD_cRqbLk", "XXo-vXWa-ZvQL", "ttBP0QO8pKtvq", "_MfuTMZ4u7mWN", "j5Y_3gJAHK3nP", "wiYbiqRc-GqXO", "26sD6qgwF8Vob", "0mPCmj67CX0Ti" ], "note_type": [ "comment", "review", "review", "comment", "review", "review", "review", "review", "comment", "review", "review", "review" ], "note_created": [ 1363216920000, 1362372600000, 1363291260000, 1363216800000, 1363216740000, 1363288920000, 1361998920000, 1364251020000, 1363217040000, 1362084780000, 1362404760000, 1364262660000 ], "note_signatures": [ [ "Razvan Pascanu, Yoshua Bengio" ], [ "anonymous reviewer 6f71" ], [ "anonymous reviewer 6f71" ], [ "Razvan Pascanu" ], [ "Razvan Pascanu" ], [ "Razvan Pascanu, Yoshua Bengio" ], [ "anonymous reviewer 6a77" ], [ "anonymous reviewer 6a77" ], [ "Razvan Pascanu, Yoshua Bengio" ], [ "Razvan Pascanu, Yoshua Bengio" ], [ "anonymous reviewer 1939" ], [ "anonymous reviewer 1939" ] ], "structured_content_str": [ "{\"reply\": [\"We've made drastic changes to the paper, which should be visible starting Thu, 14 Mar 2013 00:00:00 GMT. We made the paper available also at http://www-etud.iro.umontreal.ca/~pascanur/papers/ICLR_natural_gradient.pdf\", \"Regarding the title, we have changed it to 'Revisiting Natural Gradient for Deep Networks', to better reflect the scope of the paper. We would like to thank you for the pointers on natural gradient for reinforcement learning and stochastic search (they have been incorporated in the new version).\", \"We have provided more details of our derivation in section 2. We've added a plot describing path taken by different learning algorithms in parameter space (natural gradient, gradient descent, Newton's method, Le Roux's natural gradient). The new plot can be found at the beginning of section 4. If you have another suggestion of drawing that would help to illustrate what is going on, we would be pleased to know about it.\", \"As you suggested, we have fixed the statement about the confusion between the two version of natural gradients in the past literature.\", \"We have slightly rephrased our arguments in section 6 to better point out our intuitions. Indeed the approximations involved in the algorithms have an important role in its behaviour and we clarified this.\", \"However we were trying to say something different. We rephrased our argument as follows : robustness comes from the fact that we move in the direction of low variance for (d p(y|x))/(d theta) (the Fisher Information matrix is the uncentered covariance of these gradients). These are not the gradients of the error we minimize, but we argue that directions of high variance for these gradients, are directions of high variance for the gradients of the error function as well. Our reasoning is as follows. If moving in a direction `delta` would cause a large change in the gradient dL/dtheta, where L is the cost, this means that L(theta+delta) has to be 'very' different for different inputs.\", \"But since L(theta) is just the evaluation of p(y|x) for particular values of\", \"y for given x, this means that if L varies with x so does p(y|x).\", \"This means `delta` has to be a direction of high variance for the metric. This is true even if you move with infinitesimal speed, as it is more about picking the direction of low variance. This formulation is based on the the same argument you provided yourself regarding our early-overfitting experiment. 
Note that large variations in p(y|x) should be reflected in large curvature of the KL, as it indicates that p changes quickly. We originally formulated the argument around this large changes of p. We agree however that our original argument could have been clearer and more complete, and we hope it is clearer in the new version.\", \"Regarding the use of unlabeled data, we added the proposed citation.\", \"We have provided both pseudo code in the appendix, and we have made the code available at [email protected]:pascanur/natgrad.git\", \"Regarding the early-overfitting experiment, we agree with the reviewer that natural gradient reduces variance overall. In terms of relative variance it seems that it does not make a big difference. We however emphasize that reducing overall variance is important, as it makes learning overall less sensitive to the order of training examples (which in some sense is related to the original problem, in the sense that it also reduces the importance of the early examples to the beheviour of the trained model). We agree that the original focus of the section was sligthly off, and we changed it to addressing the sensitivity of the model to the examples it sees during training. Thanks again for the comment.\", \"We did use a grid search for the other experiments, and used for each\", \"algorithm the point on the grid that had the smallest validation error (a detail that we explicitly say in the paper).\", \"We are however in the process of improving those results by extending our grid.\"]}", "{\"title\": \"review of Natural Gradient Revisited\", \"review\": \"Summary\\n\\nThe paper reviews the concept of natural gradient, re-derives it in the context of neural network training, compares a number of natural gradient-based algorithms and discusses their differences. The paper's aims are highly relevant to the state of the field, and it contains numerous valuable insights. Precisely because of its topic's importance, however, I deplore its lack of maturity, especially in terms of experimental results and literature overview. \\n\\n\\nComments\\n\\n-- The title raises the expectation of a review-style paper with a broad literature overview on the topic, but that aspect is underdeveloped. A paper such as this would be a perfect opportunity to relate natural gradient-related work in neural networks to closely related approaches in reinforcement learning [1,2] and stochastic search [3].\\n\\n-- The discussion in section 2 is correct and useful, but would benefit enormously from an illustrative figure that clarifies the relation between parameter space and distribution manifold, and how gradient directions differ in both. The last sentence (Lagrange method) is also breezing over a number of details that would benefit from a more explicit treatment.\\n\\n-- There is a recurring claim that gradient-covariances are 'usually confused' with Fisher matrices. While there are indeed a few authors who did fall victim to this, it is not a belief held by many researchers working on natural gradients, please reformulate.\\n\\n-- The information-geometric manifold is generally highly curved, which means that results that hold for infinitesimal step-sizes do not generally apply to realistic gradient algorithms with large finite steps. Indeed, [4] introduces an information-geometric 'flow' and contrasts it with its finite-step approximations. 
It is important to distinguish the effect of the natural gradient itself from the artifacts of finite-step approximations, indeed the asymptotic behavior can differ, see [5]. A number of arguments in section 6 could be revised in this light.\\n\\n-- The idea of using more data to estimate the Fisher information matrix (because if does not need to be labeled), compared to the data necessary for the steepest gradient itself, is promising for semi-supervised neural network training. It was previously was presented in [3], in a slightly different context with infinitely many unlabeled samples.\\n\\n-- The new variants of natural gradient descent should be given in pseudocode in the appendix, and if possible even with a reference open-source implementation in the Theano framework.\\n\\n-- The experiment presented in Figure 2 is very interesting, although I disagree with the conclusions that are derived from it: the variance is qualitatively the same for both algorithms, just rescaled by roughly a factor 4. So, relatively speaking, the influence of early samples is still equally strong, only the generic variability of the natural gradient is reduced: plausibly by the effect that the Fisher-preconditioning reduces step-sizes in directions of high variance.\\n\\n-- The other experiments, which focus on test-set performance, have a major flaw: it appears each algorithm variant was run exactly once on each dataset, which makes it very difficult to judge whether the results are significant. Also, the effect of hyper-parameter-tuning on those results is left vague.\\n\\n\\n\\nMinor points/typos\\n-- Generally, structure the text such that equations are presented before they are referred to, this makes for a more linear reading flow.\\n-- variable n is undefined\\n-- clarify which spaces the variables x, z, t, theta live in.\\n-- 'three most typical'\\n-- 'different parametrizations of the model'\\n-- 'similar derivations'\\n-- 'plateaus' \\n-- axes of figures could be homogenized.\\n\\nReferences\\n[1] 'Natural policy gradient', Kakade, NIPS 2002.\\n[2] 'Natural Actor-Critic', Peters and Schaal, Neurocomputing 2008.\\n[3] 'Stochastic Search using the Natural Gradient', Sun et al, ICML 2009.\\n[4] 'Information-Geometric Optimization Algorithms: A Unifying Picture via Invariance Principles', Arnold et al, Arxiv 2011.\\n[5] 'Natural Evolution Strategies Converge on Sphere Functions', Schaul, GECCO 2012.\"}", "{\"review\": \"I read the updated version of the paper. I has indeed been improved substantially, and my concerns were addressed. It should clearly be accepted in its current form.\"}", "{\"reply\": [\"We've made drastic changes to the paper, which should be visible starting Thu, 14 Mar 2013 00:00:00 GMT. We made the paper available also at http://www-etud.iro.umontreal.ca/~pascanur/papers/ICLR_natural_gradient.pdf\", \"Regarding the differences between equation (1) and equation (7), it comes from moving from p(z) to the conditional p(y|x). This is emphasized in the text introducing equation (7), explaining in more details how one goes from (1) to (7).\", \"Regarding the final arguments and overall presentation of the arguments in the paper, we have reworked the overall writeup of the paper in a way that you will hopefully find satisfactory.\"]}", "{\"review\": \"We would like to thank all the reviewers for their feedback and insights. 
We had submitted a new version of the paper (it should appear on arxiv on Thu, 14 Mar 2013 00:00:00 GMT, though it can be retrieved now from\", \"http\": \"//www-etud.iro.umontreal.ca/~pascanur/papers/ICLR_natural_gradient.pdf\\n\\n We kindly ask the reviewers to look at. The new paper \\ncontains drastic changes that we believe will improve the \\nquality of the paper. In a few bullet points the changes are: \\n\\n* The title of the paper was changed to reflect our focus on natural\\ngradient for deep neural networks\\n* The wording and structure of the paper was slightly changed to better\\nreflect the final conclusions\\n* We improved notation, providing more details where they were missing\\n* Additional plots were added as empirical proof to some of our hypotheses\\n* We've added both the pseudo-code as well as link to a Theano-based\\nimplementation of the algorithm\"}", "{\"review\": \"The revised arxiv paper is available now, and we replied to the reviewers comments.\"}", "{\"title\": \"review of Natural Gradient Revisited\", \"review\": [\"GENERAL COMMENTS\", \"The paper promises to establish the relation between Amari's natural gradient and many methods that are called Natural Gradient or can be related to Natural Gradient because they use Gauss-Newton approximations of the Hessian. The problem is that I find the paper misleading. In particular the G of equation (1) is not the same as the G of equation (7). The author certainly points out that the crux of the matter is to understand which distribution is used to approximate the Fisher information matrix, but the final argument is a mess. This should be done a lot more rigorously (and a lot less informally.) As the paper stands, it only increases the level of confusion.\", \"SPECIFIC COMMENTS\", \"(ichi Amari, 1997) - > (Amari, 1997)\", \"differ -> defer\", \"Due to this surjection: A surjection is something else!\", \"Equation (1): please make clear that the expectation is an expectation on z distributed according p_theta (not the ground truth nor the empirical distribution). Equation (7) then appears to be a mix of both.\", \"becomes the conditional p_\\theta(t|x) where q(x) represents: where is q in p_\\theta(t|x)\"]}", "{\"review\": [\"Clearly, the revised paper is much better than the initial paper to the extent that it should be considered a different paper that shares its title with the initial paper. The ICLR committee will have to make a policy decision about this.\", \"The revised paper is poorly summarized by it abstract because it does not show things in the same order as the abstract. The paper contains the following:\", \"A derivation of natural gradient that does not depend on information geometry. This derivation is in fact well known (and therefore not new.)\", \"A clear discussion of which distribution should be used to compute the natural gradient Riemannian tensor (equation 8). This is not new, but this is explained nicely and clearly.\", \"An illustration of what happens when one mixes these distributions. This is not surprising, but nicely illustrates the point that many so-called 'natural gradient' algorithms are not the same as Amari's natural gradient.\", \"A more specific discussion of the difference between LeRoux 'natural gradient' and the real natural gradient with useful intuitions. 
This is a good clarification.\", \"A more specific discussion of how many second order algorithms using the Gauss-Newton approximation are related to some so-called natural gradient algorithms which are not the true natural gradient. Things get confusing because the authors seem committed to calling all these algorithms 'natural gradient' despite their own evidence.\", \"In conclusion, although novelty is limited, the paper disambiguates some of the confusion surrounding natural gradient. I simply wish the authors took their own hint and simply proposed banning the words 'natural gradient' to describe things that are not Amari's natural gradient but are simply inspired by it.\"]}", "{\"reply\": \"We've made drastic changes to the paper, which should be visible starting Thu, 14 Mar 2013 00:00:00 GMT. We made the paper available also at http://www-etud.iro.umontreal.ca/~pascanur/papers/ICLR_natural_gradient.pdf\\n\\n* Regarding the relationship between Hessian-Free and natural\\ngradient, it stems from the fact that algebraic manipulations of the \\nextended Gauss-Newton approximation of the Hessian result in the natural gradient metric. Due to space limit (the paper is quite lengthly) we do not provide all intermediate steps in this algebraic manipulation, but we do provide all the crucial ones. Both natural gradient and Hessian Free have the same form (the gradient is multiplied by the inverse of a matrix before being subtracted from theta, potentially times some scalar learning rate). \\nTherefore showing that both methods use the same matrix is sufficient \\nto show that HF can be interpreted as natural gradient. \\n\\n* The degeneracy of theta were meant to suggest only that we are dealing with a lower dimensional manifold. We completely agree however that the text was confusing and it was completely re-written to avoid that potential confusion. In the re-write we've removed this detail as it is not crucial for the paper. \\n\\n* The relations at the end of page 2 do hold in general, as the expectation is taken over z (a detail that we specify now). We are not using a fully Bayesian framework, i.e. theta is not a random variable in the text. \\n\\n* Equation 15 was corrected. When computing the Hessian, we compute the derivatives with respect to `r`.\"}", "{\"review\": \"Thank you for your comments. We will soon push a revision to fix all the grammar and language mistakes you pointed out.\\n\\nRegarding equation (1) and equation (7), mathbf{G} represents the Fisher Information Matrix form of the metric resulting when you consider respectively p(x) vs p(y|x). Equation (1) is introduced in section 1, which presents the generic case of a family of distributions p_{\\theta}(x). From section 3 onwards we adapt these equations specifically to neural networks, where, from a probabilistic point of view, we are dealing with conditional probabilities p(y|x).\\n\\nCould you please be more specific regarding the elements of the paper that you found confusing?\\nWe would like to reformulate the conclusion to make our contributions clearer. 
The novel points we are trying to make are:\\n\\n(1) Hessian Free optimization and Krylov Subspace Descent, as long as they use the Gauss-Newton approximation of the Hessian, can be understood as Natural Gradient, because the Gauss-Newton matrix matches the metric of Natural Gradient (and the rest of the pipeline is the same).\\n(2) Possibly due to the regularization effect discussed in (6), we hypothesize and support with empirical results that Natural Gradient helps dealing with the early overfitting problem introduced by Erhan et al. This early overfitting problem might be a serious issue when trying to scale neural networks to large models with very large datasets.\\n(3) We make the observation that since the targets get integrated out when computing the metric of Natural Gradient, one can use unlabeled data to improve the accuracy of this metric that dictates the speed with which we move in parameter space.\\n(4) Natural Gradient introduced by Nicolas Le Roux et al has a fundamental difference with Amari's. It is not just a different justification, but a different algorithm that might behave differently in practice.\\n(5) Natural Gradient is different from a second order method because while one uses second order information, it is not the second order information of the error function, but of the KL divergence (which is quite different). For e.g. it is always positive definite by construction, while the curvature is not. Also, when considering the curvature of the KL, is not the curvature of the same surface throughout learning. At each step we have a different KL divergence and hence a different surface, while for second order methods the error surface stays constant through out learning. The second distinction is that Natural Gradient is naturally suited for online learning, provided that we have sufficient statistics to estimate the KL divergence (the metric). Theoretically, second order methods are meant to be batch methods (because the Hessian is supposed over the whole dataset) where the Natural Gradient metric only depends on the model.\\n(6) The standard understanding of Natural Gradient is that by imposing the KL divergence between p_{theta}(y|x) and p_{theta+delta}(y|x) to be constant it ensures that some amount of progress is done at every step and hence it converges faster. We add that it also ensures that you do not move too far in some direction (which would make the KL change quickly), hence acting as a regularizer.\\n\\nRegarding the paper not being formal enough we often find that a dry mathematical treatment of the problem does not help improving the understanding or eliminating confusions. We believe that we were formal enough when showing the equivalence between the generalized Gauss-Newton and Amari's metric. Point (6) of our conclusion is a hypothesis which we validate empirically and we do not have a formal treatment for it.\"}", "{\"title\": \"review of Natural Gradient Revisited\", \"review\": \"This paper attempts to reconcile several definitions of the natural gradient, and to connect the Gauss-Newton approximation of the Hessian used in Hessian free optimization to the metric used in natural gradient descent. Understanding the geometry of objective functions, and the geometry of the space they live in, is crucial for model training, and is arguably the greatest bottleneck in training deep or otherwise complex models. 
However, this paper makes a confused presentation of the underlying ideas, and does not succeed in clearly tying them together.\", \"more_specific_comments\": \"In the second (and third) paragraph of section 2, the natural gradient is discussed as if it stems from degeneracies in theta, where multiple theta values correspond to the same distribution p. This is inaccurate. Degeneracies in theta have nothing to do with the natural gradient. This may stem from a misinterpretation of the role of symmetries in natural gradient derivations? Symmetries are frequently used in the derivation of the natural gradient, in that the metric is frequently chosen such that it is invariant to symmetries in the parameter space. However, the metric being invariant to symmetries does not mean that p is similarly invariant, and there are natural gradient applications where symmetries aren't used at all. (You might find The Natural Gradient by Analogy to Signal Whitening, Sohl-Dickstein, http://arxiv.org/abs/1205.1828 a more straightforward introduction to the natural gradient.)\\n\\nAt the end of page 2, between equations 2 and 3, you introduce relations which certainly don't hold in general. At the least you should give the assumptions you're using. (also, notationally, it's not clear what you're taking the expectation over -- z? theta?)\\n\\nEquation 15 doesn't make sense. As written, the matrices are the wrong shape. Should the inner second derivative be in terms of r instead of theta?\\n\\nThe text has minor English difficulties, and could benefit from a grammar and word choice editing pass. I stopped marking these pretty early on, but here are some specific suggested edits:\\n'two-folded' -> 'two-fold'\\n'framework of natural gradient' -> 'framework of the natural gradient'\\n'gradient protects about' -> 'gradient protects against'\\n'worrysome' -> 'worrisome'\\n'even though is called the same' -> 'despite the shared name'\\n'differ' -> 'defer'\\n'get map' -> 'get mapped'\"}", "{\"review\": \"As the previous reviewer states, there are very large improvements in the paper. Clarity and mathematical precision are both greatly increased, and reading it now gives useful insight into the relationship between different perspectives and definitions of the natural gradient, and Hessian based methods. Note, I did not check the math in Section 7 upon this rereading.\\n\\nIt's misleading to suggest that the author's derivation in terms of minimizing the objective on a fixed-KL divergence shell around the current location (approximated as a fixed value of the second order expansion of the Fisher information) is novel. This is something that Amari also did (see for instance the proof of Theorem 1 on page 4 in Amari, S.-I. (1998). Natural Gradient Works Efficiently in Learning. Neural Computation, 10(2), 251\\u2013276. doi:10.1162/089976698300017746. This claim should be removed.\\n\\nIt could still use an editing pass, and especially improvements in the figure captions, but these are nits as opposed to show-stoppers (see specific comments below). This is a much nicer paper. My only significant remaining concerns are in terms of the Lagrange-multiplier derivation, and in terms of precedent setting. It would be that it's a dangerous precedent to set (and promises to make much more work for future reviewers!) to base acceptance decisions on rewritten manuscripts that differ significantly from the version initially submitted. So -- totally an editorial decision.\\n\\np. 
2, footnote 2 -- 3rd expression should still start with sum_z\\n\\n'emphesis' -> emphasize'\\n\\n'to speed up' -> 'to speed up computations'\\n\\n'train error' -> 'training error'\\n\\nFigure 2 -- label panes (a) and (b) and reference as such. 'KL, different training minibatch' appears to be missing from Figure. In latex, use ` for open quote and ' for close quote. capitalize kl. So, for instance, `KL, unlabeled'\\n\\nFigure 3 -- Caption has significant differences from figure\\n\\nin most places where it occurs, should refer to 'the natural gradient' rather than 'natural gradient'\\n\\n'equation (24) from section 3' -- there is no equation 24 in section 3. Equation and Section should be capitalized.\"}" ] }
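The exchange above centres on two technical points: the Fisher metric is the uncentered covariance of d log p(y|x)/d theta with y drawn from the model (so unlabeled inputs suffice to estimate it, point (3) of the authors' summary), and a natural-gradient step is obtained by a damped linear solve against that metric. The sketch below illustrates this recipe on a toy softmax classifier in NumPy; it is not the authors' Theano implementation (linked in the replies above), and the model, sizes, damping constant and learning rate are placeholders chosen only for illustration.

```python
# Minimal natural-gradient step for a softmax classifier (toy sketch).
import numpy as np

rng = np.random.RandomState(0)
n, d, k = 256, 20, 5                      # examples, input dim, classes
X = rng.randn(n, d)
y = rng.randint(k, size=n)                # labels (used only for the loss gradient)
W = 0.01 * rng.randn(d, k)                # parameters theta

def probs(W, X):
    logits = X.dot(W)
    logits -= logits.max(axis=1, keepdims=True)
    e = np.exp(logits)
    return e / e.sum(axis=1, keepdims=True)

def loss_grad(W, X, y):
    """Gradient of the mean negative log-likelihood w.r.t. W, flattened."""
    P = probs(W, X)
    P[np.arange(len(y)), y] -= 1.0
    return (X.T.dot(P) / len(y)).ravel()

def fisher(W, X):
    """Fisher metric: E_x E_{y~p(y|x)} [g g^T], with g = d log p(y|x)/dW."""
    P = probs(W, X)
    F = np.zeros((W.size, W.size))
    for i in range(len(X)):
        yi = rng.choice(k, p=P[i])        # target sampled from the model, not the data
        onehot = np.zeros(k)
        onehot[yi] = 1.0
        g = np.outer(X[i], onehot - P[i]).ravel()
        F += np.outer(g, g)
    return F / len(X)

damping = 1e-3                            # keeps the metric well conditioned
g = loss_grad(W, X, y)
F = fisher(W, X)                          # X here could just as well be unlabeled data
step = np.linalg.solve(F + damping * np.eye(W.size), g)
W = W - 0.1 * step.reshape(W.shape)       # natural-gradient update
```

Because the targets are integrated out in `fisher`, the metric could be estimated on a separate pool of unlabeled inputs, which is exactly the semi-supervised use discussed in the replies.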
2LzIDWSabfLe9
Herded Gibbs Sampling
[ "Luke Bornn", "Yutian Chen", "Nando de Freitas", "Maya Baya", "Jing Fang", "Max Welling" ]
The Gibbs sampler is one of the most popular algorithms for inference in statistical models. In this paper, we introduce a herding variant of this algorithm, called herded Gibbs, that is entirely deterministic. We prove that herded Gibbs has an $O(1/T)$ convergence rate for models with independent variables and for fully connected probabilistic graphical models. Herded Gibbs is shown to outperform Gibbs in the tasks of image denoising with MRFs and named entity recognition with CRFs. However, the convergence of herded Gibbs for sparsely connected probabilistic graphical models is still an open problem.
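As a point of reference for the abstract, the core herding mechanism on a single known Bernoulli probability can be written as a short deterministic recursion whose running average matches the target at an O(1/T) rate; herded Gibbs applies this idea to every full conditional of the model. The snippet below is a minimal sketch (threshold and initialisation conventions vary across write-ups), not the authors' released code.

```python
# Herding a single Bernoulli probability pi deterministically (toy sketch).
pi = 0.3          # target probability, assumed known
w = pi            # herding weight; any start value in a bounded range works
samples = []
for t in range(1, 10001):
    x = 1 if w > 0.5 else 0      # deterministic "draw"
    w += pi - x                  # weight update keeps the running moment on target
    samples.append(x)
    if t in (10, 100, 1000, 10000):
        err = abs(sum(samples) / t - pi)
        print(f"T={t:5d}  |empirical mean - pi| = {err:.5f}")
```

Since the weight stays bounded, the empirical mean deviates from pi by at most O(1/T), which is the rate quoted in the abstract for the independent and fully connected cases.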
[ "gibbs", "probabilistic graphical models", "gibbs sampler", "popular algorithms", "inference", "statistical models", "herding variant", "algorithm", "deterministic", "convergence rate" ]
conferenceOral-iclr2013-conference
https://openreview.net/pdf?id=2LzIDWSabfLe9
https://openreview.net/forum?id=2LzIDWSabfLe9
ICLR.cc/2013/conference
2013
{ "note_id": [ "_ia0VPOP0SVPj", "-wDkwa3mkYwTa", "55Sf5h7-bs1wC", "kk_CoX43Cfks-", "OOw6hkBUq_fEr", "cAhZAfXPZ6Sfw", "wy2cwQ8QPVybX", "PHnWHNpf5bHUO", "rafTmpD60FrZR" ], "note_type": [ "review", "review", "review", "review", "review", "review", "review", "review", "review" ], "note_created": [ 1362189120000, 1362382860000, 1363408140000, 1363761180000, 1362793920000, 1362189120000, 1362377040000, 1363212720000, 1362497280000 ], "note_signatures": [ [ "anonymous reviewer 600b" ], [ "anonymous reviewer b2c5" ], [ "Luke Bornn, Yutian Chen, Nando de Freitas, Mareija Eskelin, Jing Fang, Max Welling" ], [ "Maya Baya" ], [ "Maya Baya" ], [ "anonymous reviewer 600b" ], [ "anonymous reviewer cf4e" ], [ "Art Owen" ], [ "anonymous reviewer 2d06" ] ], "structured_content_str": [ "{\"title\": \"review of Herded Gibbs Sampling\", \"review\": \"Herding is a relatively recent idea [23]: create a dynamical system that evolves a vector, which when time-averaged will match desired expectations. Originally it was designed as a novel means to generalize from observed data with measured moments. In this work, the conditional distributions of a Gibbs sampler are matched, with the hope of sampling from arbitrary target distributions.\\n\\nAs reviewed by the paper itself, this work joins only a small number of recent papers that try to simulate arbitrary target distributions using a deterministic dynamical system. Compared to [19] this work potentially works better in some situations: O(1/T) convergence can happen, whereas [19] seems to emulate a conventional Gibbs sampler with O(1/T^2) convergence. However, the current work seems to be more costly in memory and less-generally applicable than Gibbs sampling, because it needs to track weights for all possible conditional distributions (all possible neighbourhood settings for each variable) in some cases. The comparison to [7] is less clear, as that is motivated by O(1/T) QMC rates, but I don't know if/how it would compare to the current work. (No comparison is given.)\\n\\nOne of the features of Markov chain Monte Carlo methods, such as Gibbs sampling, is that represents _joint_ distributions, through examples. Unlike variational approximation methods, no simple form of the distribution is assumed, but Monte Carlo sampling may be a less efficient way to get marginal distributions. For example, Kuss and Rasmussen http://www.jmlr.org/papers/volume6/kuss05a/kuss05a.pdf demonstrated that EP gives exceedingly accurate posterior marginals with Gaussian process classifiers, even though its joint approximation, a Gaussian, is obviously wrong. The experiment in section 4.1 suggests that the herded Gibbs procedure is prepared to move through low probability joint settings more often than it 'should', but gets better marginals as a result. The experiment section 4.2 also depends only on low-dimensional marginals (as many applications do). The experiment in section 4.3 involves an optimization task, and I'm not sure how herded Gibbs was applied (also with annealing? The most probable sample chosen? ...).\\n\\nThis is an interesting, novel paper, that appears technically sound. The most time-consuming research contributions are the proofs in the appendices, which seem plausible, but I have not carefully checked them. As discussed in the conclusion, there is a gap between the applicability of this theory and the applicability of the methods. 
But there is plenty in this paper to suggest that herded sampling for generic target distributions is an interesting direction.\\n\\nAs requested, a list of pros and cons:\", \"pros\": [\"a novel approach to sampling from high-dimensional distributions, an area of large interest.\", \"Good combination of toy experiments, up to fairly realistic, but harder to understand, demonstration.\", \"Raises many open questions: could have impact within community.\", \"Has the potential to be both general and fast to converge: in long term could have impact outside community.\"], \"cons\": [\"Should possibly compare to Owen's work on QMC and MCMC. Although there may be no interesting comparison to be made.\", \"The most interesting example (NER, section 4.3) is slightly hard to understand. An extra sentence or two could help greatly to state how the sampler's output is used.\", \"Code could be provided.\"], \"very_minor\": \"paragraph 3 of section 5 should be rewritten. It's wordy: 'We should mention...We have indeed studied this', and uses jargon that's explained parenthetically in the final sentence but not in the first two.\"}", "{\"title\": \"review of Herded Gibbs Sampling\", \"review\": \"This paper shows how Herding, a deterministic moment-matching algorithm, can be used to sample from un-normalized probabilities, by applying Herding to the full-conditional distributions. The paper presents (1) theoretical proof of O(1/T) convergence in the case of empty and fully-connected graphical models, as well as (2) empirical evidence, showing that Herded Gibbs sampling outperforms both Gibbs and mean-field for 2D structured MRFs and chain structured CRFs. This improved performance however comes at the price of memory, which is exponential in the maximum in-degree of the graph, thus making the method best suited to sparsely connected graphical models.\\n\\nWhile the application of Herding to sample from joint-distributions through its conditionals may not appear exciting at first glance, I believe this represents a novel research direction with potentially high impact. A 1/T convergence rate would be a boon in many domains of application, which tend to overly rely on Gibbs sampling, an old and often brittle sampling algorithm. The algorithm's exponential memory requirements are somewhat troubling. However, I believe this can be overlooked given the early state of research and the fact that sparse graphical models represent a realistic (and immediate) domain of application.\\n\\nThe paper is well written and clear. I unfortunately cannot comment on the correctness of the convergence proofs (which appear in the Appendix), as those proved to be too time-consuming for me to make a professional judgement on. 
Hopefully the open review process of ICLR will help weed out any potential issues therein.\", \"pros\": [\"A novel sampling algorithm with faster convergence rate than MCMC methods.\", \"Another milestone for Herding: sampling for un-normalized probabilities\", \"(with tractable conditionals).\", \"Combination of theoretical proofs (when available) and empirical evidence.\", \"Experiments are thorough and span common domains of application: image denoising through MRFs and Named Entity Recognition through chain-CRFs.\"], \"cons\": [\"Convergence proofs hold for less than practicle graph structures.\", \"Exponential memory requirements of the algorithm make Herded Gibbs sampling impractical for lage families of graphical models, including Boltzmann Machines.\"]}", "{\"review\": \"Taking the reviewers' comments into consideration, and after many useful email exchanges with experts in the field including Prof Art Owen, we have prepared a newer version of the report. If it is not on Arxiv by the time you read this, you can find it at\", \"http\": \"//www.cs.ubc.ca/~nando/papers/herding_ICLR.pdf\", \"reviewer_anonymous_600b\": \"We have made the code available and expanded our description of the CRF for NER section. An empirical comparison with the work of Art Owen and colleagues was not possible given the short time window of this week. However, we engaged in many discussions with Art and he not only added his comments here in open review, but also provided many useful comments via email. One difference between herding and his approach is that herding is greedy (that is, the random sequence does not need to be constructed beforehand). Art also pointed us out to the very interesting work of James Propp and colleagues on Rotor-Router models. Please see our comments in the last paragraph of the Conclusions and Future Work section of the new version of the paper. Prof Propp has also begun to look at the problem of establishing connections between herding and his work.\", \"reviewer_anonymous_cf4e\": \"For marginals, the convergence rate of herded Gibbs is also O(1/T) because marginal probabilities are linear functions of the joint distribution. However, in practice, we observe very rapid convergence results for the marginals, so we might be able to strengthen these results in the future.\", \"reviewer_anonymous_2d06\": \"We have added more detail to the CRF section and made the code available so as to ensure that our results are reproducible.\\n\\nWe thank all reviewers for excellent comments. This openreview discussion has been extremely useful and engaging.\\n\\nMany thanks,\\nThe herded Gibbs team\"}", "{\"review\": \"The updated Herded Gibbs report is now available on arxiv at the following url:\", \"http\": \"//arxiv.org/abs/1301.4168v2\\n\\nThe herded Gibbs team.\"}", "{\"review\": \"Dear reviewers,\\n\\nThank you for the encouraging reviews and useful feedback. We will soon address your questions and comments.\\n\\nTo this end, we would like to begin by announcing that the code is available online in both matlab and python, at:\", \"http\": \"//www.mareija.ca/research/code/\\n\\nThis code contains both the image denoising experiments and the two node example, however, we have omitted the NER experiment because the code is highly dependent on the Stanford NER software. 
Nonetheless, upon request, we would be happy to share this more complex code too.\\n\\nA comprehensive reply and a newer version of the arxiv paper addressing your concerns will appear soon.\\n\\nIn the meantime, we look forward to further comments.\\n\\nThe herded Gibbs team.\"}", "{\"title\": \"review of Herded Gibbs Sampling\", \"review\": \"Herding is a relatively recent idea [23]: create a dynamical system that evolves a vector, which when time-averaged will match desired expectations. Originally it was designed as a novel means to generalize from observed data with measured moments. In this work, the conditional distributions of a Gibbs sampler are matched, with the hope of sampling from arbitrary target distributions.\\n\\nAs reviewed by the paper itself, this work joins only a small number of recent papers that try to simulate arbitrary target distributions using a deterministic dynamical system. Compared to [19] this work potentially works better in some situations: O(1/T) convergence can happen, whereas [19] seems to emulate a conventional Gibbs sampler with O(1/T^2) convergence. However, the current work seems to be more costly in memory and less-generally applicable than Gibbs sampling, because it needs to track weights for all possible conditional distributions (all possible neighbourhood settings for each variable) in some cases. The comparison to [7] is less clear, as that is motivated by O(1/T) QMC rates, but I don't know if/how it would compare to the current work. (No comparison is given.)\\n\\nOne of the features of Markov chain Monte Carlo methods, such as Gibbs sampling, is that represents _joint_ distributions, through examples. Unlike variational approximation methods, no simple form of the distribution is assumed, but Monte Carlo sampling may be a less efficient way to get marginal distributions. For example, Kuss and Rasmussen http://www.jmlr.org/papers/volume6/kuss05a/kuss05a.pdf demonstrated that EP gives exceedingly accurate posterior marginals with Gaussian process classifiers, even though its joint approximation, a Gaussian, is obviously wrong. The experiment in section 4.1 suggests that the herded Gibbs procedure is prepared to move through low probability joint settings more often than it 'should', but gets better marginals as a result. The experiment section 4.2 also depends only on low-dimensional marginals (as many applications do). The experiment in section 4.3 involves an optimization task, and I'm not sure how herded Gibbs was applied (also with annealing? The most probable sample chosen? ...).\\n\\nThis is an interesting, novel paper, that appears technically sound. The most time-consuming research contributions are the proofs in the appendices, which seem plausible, but I have not carefully checked them. As discussed in the conclusion, there is a gap between the applicability of this theory and the applicability of the methods. 
But there is plenty in this paper to suggest that herded sampling for generic target distributions is an interesting direction.\\n\\nAs requested, a list of pros and cons:\", \"pros\": [\"a novel approach to sampling from high-dimensional distributions, an area of large interest.\", \"Good combination of toy experiments, up to fairly realistic, but harder to understand, demonstration.\", \"Raises many open questions: could have impact within community.\", \"Has the potential to be both general and fast to converge: in long term could have impact outside community.\"], \"cons\": [\"Should possibly compare to Owen's work on QMC and MCMC. Although there may be no interesting comparison to be made.\", \"The most interesting example (NER, section 4.3) is slightly hard to understand. An extra sentence or two could help greatly to state how the sampler's output is used.\", \"Code could be provided.\"], \"very_minor\": \"paragraph 3 of section 5 should be rewritten. It's wordy: 'We should mention...We have indeed studied this', and uses jargon that's explained parenthetically in the final sentence but not in the first two.\"}", "{\"title\": \"review of Herded Gibbs Sampling\", \"review\": \"Herding has an advantage over standard Monte Carlo method, in that\\nit estimates some statistics quickly, while Monte Carlo methods\\nestimate all statistics but more slowly.\\n\\nThe paper presents a very interesting but impractical attempt to\\ngeneralize Herding to Gibbs sampling by having a 'herding chain' for\\neach configuration of the Markov blanket of the variables. In\\naddition to the exponential memory complexity, it seems like the\\nmethod should have an exponentially large constant hidden in the\\nO(1/T) convergence rate: Given that there are many herding chains,\\neach herding parameter would be updated extremely infrequently, which\\nwould result in an exponential slowdown of the Herding effect and thus\\nincrease the constant in O(1/T). And indeed, lambda from theorem 2\\nhas a 2^N factor.\\n\\nThe theorem is interesting in that it shows eventual O(1/T)\", \"convergence_in_full_distribution\": \"that is, the empirical joint\\ndistribution eventually converges to the full joint distribution.\\nHowever, in practice we care about estimating marginals and not\\njoints. Is it possible to show fast convergence on every subset of\\nthe marginals, or even on the singleton variables? Can it be done\\nwith a favourable constant? Can such a result be derived from the\\ntheorems presented in the paper? Results about marginals would \\nbe of more practical interest.\\n\\nThe experiments show that the idea works in principle, which is good.\\n\\nIn its current form, the paper presents a reasonable idea but is\\nincomplete, since the idea is too impractical. It would be great if\\nthe paper explored a practical implementation of Gibbs herding, even\\nan is approximate one. For example, would it be possible to represent\\nw_{X_{Ni}} with a big linear function A X_{Ni} for all X and to herd\\nA, instead of slowly herding the various W_{X_{Ni}}? Would it work?\\nWould it do something sensible on the experiments? Can it be proved\\nto work in a special case?\\n\\nIn conclusion, the paper is very interesting and should be accepted. Its weakness is the general impracticality of the method.\"}", "{\"review\": \"Nando asked me for some comments and then he thought I should share them on openreview. So here they are.\\n\\nThere are a few other efforts at replacing the IID numbers which drive MCMC. 
It would be interesting to explore the connections among them. Here is a sample:\\n\\nJim Propp and others have been working on rotor-routers for quite a while. Here is one link:\", \"http\": \"//arxiv.org/abs/1303.2423\\nthat is similar to herding. The idea there is to make a followup sample of values that fill in holes left after a first sampling.\\n\\nNot quite as close to this work but still related is the array-RQMC work of Pierre L'Ecuyer and others. See for instance:\\nwww.iro.umontreal.ca/~lecuyer/myftp/papers/mcqmc08-array.pdf\", \"there_is_some_very_recent_work_by_dick_rudolf_and_zhu\": \"\"}", "{\"title\": \"review of Herded Gibbs Sampling\", \"review\": [\"The paper presents a deterministic 'sampling' algorithm for unnormalized distributions on discrete variables, similar to Gibbs sampling, which operates by matching the statistics of the conditional distribution of each node given its Markov blanket. Proofs are provided for the independent and fully-connected cases, with an impressive improvement in asymptotic convergence rate to O(1/T) over O(1/sqrt(T)) available from Monte Carlo methods in the fully-connected case. Experimental results demonstrate herded Gibbs outperforming traditional Gibbs sampling in the sparsely connected case, a regime unfortunately not addressed by the provided proofs. The algorithm's Achilles heel is its prohibitive worst-case memory complexity, scaling exponentially with the maximal node degree of the network.\", \"The paper is compelling for its demonstration that a conceptually simple deterministic procedure can (in some cases at least) greatly outperform Gibbs sampling, one of the traditional workhorses of Monte Carlo inference, both asymptotically and empirically. Though the procedure in its current form is of little use in large networks of even moderate edge density, the ubiquity of application domains involving very sparse interaction graphs makes this already an important contribution. The proofs appear to be reasonable upon cursory examination, but I have not as yet verified them in detail.\", \"PROS\", \"A lucidly explained idea that gives rise to somewhat surprising theoretical results.\", \"Proofs of convergence as well as experimental interrogations.\", \"A step towards practical herding algorithms for dense unnormalized models, and an important milestone for the literature on herding in general.\", \"CONS\", \"An (acknowledged) disconnect between theory and practice -- available proofs apply only in cases that are uninteresting or impractical.\", \"Experiments in 4.3 make mention of NER with skip-chain CRFs, where Viterbi is not tractable, but resorts to experiments with chain CRFs instead. An additional experiment utilizing skip-chain CRFs (a more challenging inference task, not amenable to Viterbi) would have been more compelling, though I realize space is at a premium.\"], \"minor_concerns\": [\"The precise dimensionality of the image denoising problem is, as far as I can tell, never specified. This would be nice to know.\", \"More details as to how the herded Gibbs procedure maps onto the point estimate provided as output on the NER task would be helpful -- presumably the single highest-probability sample is used?\"]}" ] }
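To make the preceding discussion concrete, in particular the per-Markov-blanket weight bookkeeping whose worst-case memory cost the reviewers flag, here is a toy herded Gibbs sweep on a small fully connected binary MRF, with brute-force marginals for comparison. This is a sketch under our own conventions (lazy random weight initialisation, fixed sequential sweeps), not the MATLAB/Python code released by the authors in the thread above.

```python
# Herded Gibbs on a tiny fully connected binary MRF:
# p(x) proportional to exp(sum_i b_i x_i + sum_{i<j} J_ij x_i x_j), x_i in {0,1}.
# One herding weight is kept per (variable, Markov-blanket configuration),
# which is exactly the bookkeeping behind the exponential memory cost.
import itertools
import numpy as np

rng = np.random.RandomState(1)
n = 4
J = rng.randn(n, n) * 0.5
J = np.triu(J, 1) + np.triu(J, 1).T        # symmetric couplings, zero diagonal
b = rng.randn(n) * 0.2

def cond_p1(i, x):
    """p(x_i = 1 | x_{-i}) for the pairwise binary MRF above."""
    a = b[i] + J[i].dot(x)                 # J[i, i] is zero, so x_i drops out
    return 1.0 / (1.0 + np.exp(-a))

# Exact marginals by brute force, for reference.
states = np.array(list(itertools.product([0, 1], repeat=n)))
logp = states.dot(b) + 0.5 * np.einsum('si,ij,sj->s', states, J, states)
p = np.exp(logp - logp.max())
p /= p.sum()
exact = p.dot(states)

x = np.zeros(n, dtype=int)
weights = {}                               # (variable, blanket configuration) -> weight
counts = np.zeros(n)
T = 20000
for t in range(T):
    for i in range(n):
        nb = tuple(np.delete(x, i))        # full Markov blanket (graph is complete)
        pi = cond_p1(i, x)
        w = weights.get((i, nb), rng.uniform())  # lazy init, one weight per blanket state
        w += pi
        x[i] = 1 if w > 1.0 else 0         # deterministic assignment
        weights[(i, nb)] = w - x[i]
    counts += x
print("herded Gibbs marginals:", counts / T)
print("exact marginals:       ", exact)
```

Replacing the deterministic assignment with a random draw from `cond_p1` recovers ordinary Gibbs sampling, which is the baseline the paper compares against.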
SSnY462CYz1Cu
Knowledge Matters: Importance of Prior Information for Optimization
[ "Çağlar Gülçehre", "Yoshua Bengio" ]
We explore the effect of introducing prior information into the intermediate level of neural networks for a learning task that all of the state-of-the-art machine learning algorithms we tested failed to learn. We motivate our work by the hypothesis that humans learn such intermediate concepts from other individuals via a form of supervision or guidance using a curriculum. The experiments we have conducted provide positive evidence in favor of this hypothesis. In our experiments, a two-tiered MLP architecture is trained on a dataset of 64x64 binary input images, each image containing three sprites. The final task is to decide whether all the sprites are the same or one of them is different. Sprites are pentomino tetris shapes, placed at different locations in the image using scaling and rotation transformations. The first part of the two-tiered MLP is pre-trained with intermediate-level targets being the presence of sprites at each location, while the second part takes the output of the first part as input and predicts the final task's target binary event. The two-tiered MLP architecture, with a few tens of thousands of examples, was able to learn the task perfectly, whereas all other algorithms (including unsupervised pre-training, but also traditional algorithms like SVMs, decision trees and boosting) perform no better than chance. We hypothesize that the optimization difficulty involved when the intermediate pre-training is not performed is due to the {\em composition} of two highly non-linear tasks. Our findings are also consistent with hypotheses on cultural learning inspired by the observation of optimization difficulties with deep learning, presumably because of effective local minima.
[ "sprites", "prior information", "importance", "hypothesis", "experiments", "mlp architecture", "image", "final task", "first part", "knowledge matters" ]
conferenceOral-iclr2013-conference
https://openreview.net/pdf?id=SSnY462CYz1Cu
https://openreview.net/forum?id=SSnY462CYz1Cu
ICLR.cc/2013/conference
2013
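Before the discussion below, a structural sketch of the two-tiered network described in the abstract and in the authors' replies: a P1NN with weights shared across the 64 non-overlapping 8x8 patches of the image, emitting an 11-way softmax per patch (10 sprite classes plus "no object"), followed by a fully connected P2NN on the standardized, concatenated patch outputs that predicts the binary same/different target. The hidden-layer sizes and the exact standardization recipe below are placeholders, not the settings used in the paper, and no training loop is shown.

```python
# Forward-pass sketch of the two-tier MLP (P1NN + P2NN); sizes are illustrative.
import numpy as np

rng = np.random.RandomState(0)

def softmax(a):
    a = a - a.max(axis=-1, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=-1, keepdims=True)

n_patches, patch_dim, n_classes = 64, 8 * 8, 11
h1, h2 = 128, 64                          # placeholder hidden sizes

# P1NN parameters, shared by every patch (translation equivariance over patches).
W1, b1 = 0.1 * rng.randn(patch_dim, h1), np.zeros(h1)
V1, c1 = 0.1 * rng.randn(h1, n_classes), np.zeros(n_classes)
# P2NN parameters, fully connected on the concatenated patch outputs.
W2, b2 = 0.1 * rng.randn(n_patches * n_classes, h2), np.zeros(h2)
v2, c2 = 0.1 * rng.randn(h2), 0.0

def forward(image):
    """image: (64, 64) binary array -> probability that one sprite differs."""
    patches = image.reshape(8, 8, 8, 8).transpose(0, 2, 1, 3).reshape(64, 64)
    hidden = np.tanh(patches.dot(W1) + b1)
    patch_probs = softmax(hidden.dot(V1) + c1)             # (64 patches, 11 classes)
    # Standardized softmax outputs, as mentioned in the replies (exact recipe may differ).
    z = (patch_probs - patch_probs.mean(0)) / (patch_probs.std(0) + 1e-6)
    h = np.tanh(z.ravel().dot(W2) + b2)
    return 1.0 / (1.0 + np.exp(-(h.dot(v2) + c2)))

# With intermediate supervision, P1NN would first be trained on per-patch sprite
# labels and P2NN on the final binary target; here we only show the data flow.
print(forward(rng.randint(0, 2, size=(64, 64)).astype(float)))
```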
{ "note_id": [ "wX4ew9_0vK2CA", "TiDHTEGclh1ro", "nMIynqm1yCndY", "PJcXvClTX8vdE", "L8RreQWdPS3jz", "D5ft5XCZd1cZw", "lLgil9MwiZ3Vu", "OblAf-quHwf1V", "6s7Ys8Q5JbfHZ", "MF7RMafDRkF_A" ], "note_type": [ "comment", "review", "review", "comment", "review", "review", "review", "comment", "review", "comment" ], "note_created": [ 1363246260000, 1362381600000, 1362784440000, 1363246140000, 1363278840000, 1361980800000, 1363246680000, 1363246380000, 1362262800000, 1363246320000 ], "note_signatures": [ [ "Çağlar Gülçehre" ], [ "anonymous reviewer 858d" ], [ "David Reichert" ], [ "Çağlar Gülçehre" ], [ "Çağlar Gülçehre" ], [ "anonymous reviewer ed64" ], [ "Çağlar Gülçehre" ], [ "Çağlar Gülçehre" ], [ "anonymous reviewer dfef" ], [ "Çağlar Gülçehre" ] ], "structured_content_str": [ "{\"reply\": \"> The exposition on curriculum learning could be condensed. (minor) The demonstrative problem (sprite counting) is a visual perception problem and therefore carries with it the biases of our own perception and inferred strategies. Maybe the overall argument might be bolstered by the addition of a more abstract example?\\n\\nYes that's right. We have performed an experiment in which all the possible bit configurations of a single patch were enumerated (there are 80 of them plus the special case where no object is present). With this input representation, there is no vision prior knowledge that can help a human learner. The task is strangely easier but still difficult: we achieved 25% test error up to now, i.e., better than chance (i.e. 50%) but far from the less than 1% error of the IKGNN. In future work, we will measure how well humans learn that task.\\n\\n> why so many image regions? Why use an 8x8 grid? Won't 3 regions suffice to make the point? Or is this related to the complexity of the problem. A related question: how are the results effected by the number of these regions? Maybe some reduced tests at the extremes would be interesting, i.e. with only 3 regions, and 32 (you have 64 already)?\\n\\nThe current architecture is trained on 64=8x8 patches, with sprites centered inside the patches. We have also tried to train the IKGNN on 16x16 patches, corresponding to 4x4=16 patches in a same-size image, and we allowed the objects to be randomly translated inside the patch but IKGNN couldn't learn the task. Probably because of the translation, the P1NN required a convolutional architecture.\\n\\nWe have also conducted experiments with the tetromino dataset (which is not in that paper) that has 16x16 images and objects are placed in 4x4 patches with less variations and 7 sprite categories. An ordinary MLP with 3 tanh hidden layers was able to learn this task after a very long training.\\n\\nWe trained Structured MLP on the 3 patches(8x8) that has sprite in it for 120 training epochs with 100k\\ntraining examples. The best result we could get with that setting is 37 percent error on training\\nset and SMLP was still doing chance on the test set.\\n\\nIn a nutshell, reducing the number of regions and centering the objects inside each sprite in those regions implies reducing the complexity of the problem and yes if you reduce the complexity of the problem, you reduce the complexity of the task and you start seeing models that can learn it, with ordinary MLPs learning the task after a very long training on a large training set. \\n\\n> In the networks that solve the task, are the weights that are learned symmetric over the image regions? i.e. are these weights identical (maybe up to some scaling and sign flip). 
Is there anything you have determined about the structure of the learned second layer of the IKGNN?\\n\\nIn the first layer on each patch, we trained exactly the same (first level) MLP on each patch, while the second level MLP is trained on the standardized softmax probabilites of the first level. Hence the weights are shared across patches in the first level. The first level of IKGNN (P1NN) has translation equivariance, but the second level (P2NN) is fully-connected and does not have any prior knowledge of symmetries.\\n\\n> Furthermore, what about including a 'weight sharing' constraint in the general MLP model (the one that does not solve the problem, but has the same structure as the one that does)? Would including this constraint change the solution? (the constraint is already in the P1NN, but what about adding it into the P2NN?) Another way to ask this is: Is enforcing translation invariance in the network sufficient to achieve good performance, or do we need to specifically train for the sprite discrimination?\\n\\nIndeed, it would be possible to use a convolutional architecture (with pooling, because the output is for the whole image) for the second level as well. We have not tried that yet but we agree that it would indeed be an interesting possibility and we certainly plan to try it out. Up to now, though, we have found that enforcing translation equivariance (in the lower level) was important but not sufficient to solve the problem. Indeed, the poor result obtained by the structured MLP demonstrates that.\\n\\n> Do we know if humans can solve this problem 'in a glance?': flashing the image for a small amount of time ~100-200msecs. Either with or without a mask? It seems that the networks you have derived are solving such a problem 'in a glance.'\\n\\nWe didn't conduct any trials for measuring response times and learning speed of human subjects on this dataset. However, we agree such a study would be an important follow-up to this paper.\\n\\n\\n> Is there an argument to be made that the sequential nature of language allows humans to solve this task? Even the way you formulate the problem suggests this sequential process: 'are all of the sprites in the image the same?': in other words 'find the sprites, then decide if they are the same' When I imagine solving this problem myself, I imagine performing a more sequential process: look at one sprite, then the next, (is it the same?, if it is): look at the next sprite (is it the same?). I know that we can consider this problem to be a concrete example of a more abstract learning problem, but it's not clear if humans can solve such problems without sequential processing. Anyway, this is not a criticism, per se, just food for thought.\\n\\nYes we agree that the essence of the tasks requires a sequential processing and you can also find this sequential processing in our IKGNN architecture as well (and in deep architectures in general). P1NN looks at each patch and identifies the type of objects inside that patch and P2NN decides if the objects identified by P1NN has a different object. What is less clear is whether humans solve such problems by re-using the same 'hardware' (as in a recurrent net) or by composing different computations (e.g., associated with different areas in the brain).\\n\\nThere are few studies that investigates the sequential learning in non-human primates which you might find interesting [3].\\n\\n[3] Conway, Christopher M., and Morten H. Christiansen. 'Sequential learning in non-human\\nprimates.' 
Trends in cognitive Sciences 5, no. 12 (2001): 539-546.\"}", "{\"title\": \"review of Knowledge Matters: Importance of Prior Information for Optimization\", \"review\": \"The paper by Gulcehre & Bengio entitled 'Knowledge Matters: Importance of Prior Information for Optimization' presents an empirical study which compares a two-tiered MLP architecture against traditional algorithms including SVM, decision trees and boosting. Images used for this task are 64x64 pixel images containing tetris-like sprite shapes. The proposed task consists in trying to figure out whether all the sprites in the image are from the same category or not (invariant to 2D transformations).\\n\\nThe main result from this study is that intermediate guidance (aka building by hand an architecture which 'exploits intermediate level concepts' by dividing the problem in two stages (a classification stage followed by a XOR stage) solves the problem for which a 'naive' neural net (as well as classical machine learning algorithms) fail.\", \"pros\": \"The proposed task is relatively interesting as it offers an alternative to traditional pattern matching tasks used in computer vision. The experiments seem well conducted. The fact that a neural network and other universal approximators do not seem to even get close to learning the task with ~80K training examples is relatively surprising.\", \"cons\": \"The work by Fleuret et al (Comparing machines and humans on a visual categorization test. PNAS 2011) needs to be discussed. This paper focuses on a single task which appears to be a special case from the longer list of 'reasoning' tasks proposed by Fleuret et al.\\n\\nIn addition, the proposed study reports a null result, which is of course always a little problematic (the fact that the authors did not manage to train a classical NN to solve the problem does not mean it is impossible). At the same time, the authors have explored reasonably well the space of hyper parameters and seem to have done their best in getting the NN to succeed.\", \"minor_points\": \"The structure of the paper is relatively confusing. Sections 1.1 and 2 provide a review of some published work by the authors and does not appear to be needed for understanding the paper. In my view the paper could be shortened or at least most of the opinions/speculations in the introduction should be moved to the discussion section.\"}", "{\"review\": \"I would like to add some further comments for the purpose of constructive discussion.\\n\\nThe authors try to provide further insights into why and when deep learning works, and to broaden the focus of the kind of questions usually asked in this community, in particular by making connections to biological cognition and learning. I think this a good motivation. There are some issues that I would like to address though.\\n\\nAt the core of this work is the result that algorithms can fail at solving the given classification task unless 'intermediate learning cues' are supplied. The authors cover many different algorithms to make this point. However, I think it would have been helpful to provide more empirical or theoretical analysis into *why* these algorithms fail, and what makes the task difficult. In particular, at what point does the complexity come in? Is the difficulty of the task qualitative or quantitative? The task would be qualitatively the same with just three patches and three categories of objects, or perhaps even just three multinomial units as input. 
I would be curious to see at least an empirical analysis into this question, by varying the complexity of the task, not just the types of algorithms and their parameters.\\n\\nAs for answering the question of what makes the task difficult, the crux appears to be that the task implicitly requires invariant object recognition: to solve the second stage task (are all objects of the same category?), the algorithm essentially has to solve the problem of invariant object recognition first (what makes a category?). As the authors have shown, given the knowledge about object categories, the second stage task becomes easy to solve. It is interesting that the weak supervision signal provided in stage two alone is not enough to guide the algorithm to discover the object categories first, but I'm not sure that it is that surprising.\\n\\nOnce the problem of invariant recognition has been identified, I don't think it is that 'surprising' either that unsupervised learning did not help at all. No matter how much data and how clever the algorithm, there is simply no way for an unsupervised algorithm to discover that a given tetris object and its rotated version are in some sense the same thing. This knowledge is however necessary to solve the subsequent same/different task across categories. An algorithm can only learn invariant object recognition given some additional information, either with explicit supervision or with more structure in the data and some in-built inductive biases (some form of semi-supervised learning).\\n\\nIn this light, it is not clear to me how the work relates to specifically 'cultural learning'. The authors do not model knowledge exchange between agents as such, and it is not clear why the task at hand would be one where cultural learning is particularly relevant. The general issue of what knowledge or inductive biases are needed to learn useful representations, in particular for invariant object recognition, is indeed very interesting, and I think seldom addressed in deep learning beyond building in translation invariance. For the example of invariant object recognition, learning from temporal sequences and building in biases about 'temporal coherence' or 'slowness' (F\\u00f6ldi\\u00e1k 91, Wiskott & Sejnowski 02) have been suggested as solutions. This has indeed been explored in deep learning at least in one case (Mobahi et al, 09), and might be more appropriate to address the task at hand (with sequential images). I think that if the authors believe that cultural learning is an important ingredient to deep learning or an interesting issue on its own, they perhaps need to find a more relevant task and then show that it can solved with a model that really utilizes cultural learning specifically, not just general supervision.\\n\\nLastly, an issue I am confused by: if the second stage task (given the correct intermediate results from the first stage) corresponds to an 'XOR-like' problem, how come a single perceptron in the second stage can solve it?\"}", "{\"reply\": \"> It is surprising that structured MLP does chance even on training set. On the other hand with 11 output units per parch this is perhaps not so surprising as the network has to fit everything into minimal representation. However one would expect to get better training set resuts with larger sizes. 
You should put such results into Table 1 and go to even larger sizes, like 100.\\n\\nWe conducted experiments with the structured MLP(SMLP) using 11, 50 and 100 hidden units per patch in the final layer of the locally connected part, yielding to chance performance on both the training and the test set. The revision will have a table listing the results we obtained with different number of hidden units.\\n\\n\\n> To continue on this, if you trained sparse coding with high sparsity on each patch you should get 1 in N representation for each instance (with 11x4x3 or more units). It would be good to see what the P2NN would do with such representation. I think this is the primary missing piece of this work.\\n\\nThat's a very nice suggestion and indeed it was already in our list of experiments to investigate. We conducted several experiments by using a one-hot representation for each patch and we put the results on these datasets in the revision.\\n\\n> It is not quite fair to compare to humans as humans have prior knowledge, specifically of rotations, probably learned from seeing objects rotate.\\n\\nHumans are probably doing mental rotation (see [1]) instead of having rotation invariance, which indeed exploits one form of prior knowledge (learned or innate) or another (see [2]). We have modified the statement accordingly. We have also performed an experiment (reported in the revision) in which all the possible bit configurations of a single patch were enumerated (there are 80 of them plus the special case where no object is present). With this input representation, there is no vision prior knowledge that can help a human learner. The task is strangely easier but still difficult: we achieved 25% test error up to now, i.e., better than chance (i.e. 50%) but far from the less than 1% error of the IKGNN. In future work, we will measure how well humans learn that task.\\n\\n\\n> I don't think 'Local descent hypothesis' is quite true. We don't just do local approximate descent. First we do one shot learning in hippocampus. Second, we do search for explanations and solutions and we do planning (both unconsciously and consciously). Sure having more agents helps it's a little like running a genetic algorithm - an algorithm that overcomes local minima.\\n\\nOne-shot learning is not incompatible with local approximate descent. For example, allocating new parameters to an example to learn by heart is moving in the descent direction from the point of view of functional gradient descent. Searching for explanations and planning belong to the realm of inference. We have inference in many graphical models while training itself still proceeds by local approximate descent. And you are right that having multiple agents sharing knowledge is like running a genetic algorithm and helps overcome some of the local minima issues.\\n\\n\\n> At the end of page 6 you say P1NN had 2048 units and P2NN 1024 but this is reversed in 3.2.2. Typo?\\n\\nThanks for pointing to that typo. The numbers in 3.2.2 are correct.\\n\\n[1] K\\u00f6hler, C., Hoffmann, K. P., Dehnhardt, G., & Mauck, B. (2005). Mental Rotation and Rotational\\nInvariance in the Rhesus Monkey<i>(Macaca mulatta)</i>. Brain, Behavior and Evolution, 66(3),\\n158-166.\\n\\n[2] Corballis, Michael C. 'Mental rotation and the right hemisphere.' 
Brain and Language 57.1\\n(1997): 100-121.\"}", "{\"review\": \"Replies for the reviewers' comments are prepared by the both authors of the paper: Yoshua Bengio and Caglar Gulcehre.\"}", "{\"title\": \"review of Knowledge Matters: Importance of Prior Information for Optimization\", \"review\": \"The paper give an example of a task that neural net solves perfectly when intermediate labels are provided but that is not solved at all by several machine learning algorithms including neural net when the intermediate labels are not provided. I consider the result important.\", \"comments\": \"It is surprising that structured MLP does chance even on training set. On the other hand with 11 output units per parch this is perhaps not so surprising as the network has to fit everything into minimal representation. However one would expect to get better training set resuts with larger sizes. You should put such results into Table 1 and go to even larger sizes, like 100. \\n\\nTo continue on this, if you trained sparse coding with high sparsity on each patch you should get 1 in N representation for each instance (with 11x4x3 or more units). It would be good to see what the P2NN would do with such representation. I think this is the primary missing piece of this work. \\n\\nIt is not quite fair to compare to humans as humans have prior knowledge, specifically of rotations, probably learned from seeing objects rotate.\\n\\nI don't think 'Local descent hypothesis' is quite true. We don't just do local approximate descent. First we do one shot learning in hippocampus. Second, we do search for explanations and solutions and we do planning (both unconsciously and consciously). Sure having more agents helps - it's a little like running a genetic algorithm - an algorithm that overcomes local minima.\\n\\nAt the end of page 6 you say P1NN had 2048 units and P2NN 1024 but this is reversed in 3.2.2. Typo?\"}", "{\"review\": \"We have uploaded the revision of the paper to arxiv. The revision will be announced by Arxiv soon.\"}", "{\"reply\": \"> However, I think it would have been helpful to provide more empirical or theoretical analysis into *why* these algorithms fail, and what makes the task difficult. In particular, at what point does the complexity come in? Is the difficulty of the task qualitative or quantitative? The task would be qualitatively the same with just three patches and three categories of objects, or perhaps even just three multinomial units as input. I would be curious to see at least an empirical analysis into this question, by varying the complexity of the task, not just the types of algorithms and their parameters.\\n\\nWe have done more experiments to explore the effect of the difficulty of the task. In particular, we considered three settings aimed at making the task gradually easier: (1) map each possible patch-level input vector into an integer (one out of 81=(1 for no-object + 10 x 4 x 2) and a corresponding one-hot 80-bit input vector (and feed the concatenation of these 64 vectors as input to a classifier), (2) map each possible patch-level input vector into a disentangled representation with 3 one-hot vectors (10 bits + 4 bits + 2 bits) in which the class can be read directly (and one could imagine as the best possible outcome of unsupervised pre-training), and (3) only retain the actual object categories (with only the first 10 bits per patch, for the 10 classes). 
We found that (2) and (3) can be learned perfectly while (1) can be partially learned (down to about 30% error with 80k training examples). So it looks like part of the problem (as we had surmised) is to separate class information from the factors, while somehow the image-like encoding is actually harder to learn from (probably an ill-conditioning problem) than the one-hot encoding per patch.\\n\\n> As for answering the question of what makes the task difficult, the crux appears to be that the task implicitly requires invariant object recognition: to solve the second stage task (are all objects of the same category?), the algorithm essentially has to solve the problem of invariant object recognition first (what makes a category?). As the authors have shown, given the knowledge about object categories, the second stage task becomes easy to solve. It is interesting that the weak supervision signal provided in stage two alone is not enough to guide the algorithm to discover the object categories first, but I'm not sure that it is that surprising.\\n\\nVisually much more complex tasks are being rather successfully handled with deep convolutional nets, as in the recent work by Kryzhevski & Hinton at NIPS 2012. It is therefore surprising that such a simplified task would make most learning algorithms fail. We believe it boils down to an optimization issue (the difficulty of training the lower layers well, in spite of correct supervised learning gradients being computed through the upper layers) and our experiments are consistent with that hypothesis.\\n\\nThe experiments described above with disentangled inputs suggest that if unsupervised learning was doing an optimal job, it should be possible to solve the problem.\\n\\n> In this light, it is not clear to me how the work relates to specifically 'cultural learning'. The authors do not model knowledge exchange between agents as such, and it is not clear why the task at hand would be one where cultural learning is particularly relevant. The general issue of what knowledge or inductive biases are needed to learn useful representations, in particular for invariant object recognition, is indeed very interesting, and I think seldom addressed in deep learning beyond building in translation invariance. For the example of invariant object recognition, learning from temporal sequences and building in biases about 'temporal coherence' or 'slowness' (F\\u00f6ldi\\u00e1k 91, Wiskott & Sejnowski 02) have been suggested as solutions. This has indeed been explored in deep learning at least in one case (Mobahi et al, 09), and might be more appropriate to address the task at hand (with sequential images). I think that if the authors believe that cultural learning is an important ingredient to deep learning or an interesting issue on its own, they perhaps need to find a more relevant task and then show that it can solved with a model that really utilizes cultural learning specifically, not just general supervision\\n\\nThe main difficulty of this task stems from the composition of two distinct tasks, the first task is the invariant object recognition and second task is learning the logical relation between the objects in the image. Each task can be solved fairly easily on its own, otherwise IKGNN couldn't learn this task. 
But we claim that combination of these two tasks raises an optimization difficulty that the machine learning algorithms that we have tried failed to overcome.\\n\\nWe are aware that slow features might be useful for solving this task and we plan to investigate that as well. We also believe that as such, temporal coherence would be a much more plausible explanation as to how humans learn such visual tasks, since humans learn to see quite well with little or no verbal cues from parents or teachers (and of course, all the other animals that have very good vision do not have a culture or one nearly as developed as that of humans). On the other hand, we believe that this kind of two-level abstraction learning problem illustrates a more general training difficulty that humans may face when trying to learn higher level abstractions (precisely of the kind that we need teachers for).\\n\\nUnfortunately there is not yet much work combining cultural learning and deep learning. This paper is meant to lay the motivational grounds for such work, by showing simple examples where we might need cultural learning and where ordinary supervised learning (without intermediate concepts guidance) or even unsupervised pre-training face a very difficult training challenge. The other connection is that these experiments are consistent with aspects of the cultural learning hypotheses laid down in Bengio 2012: if learning more abstract concepts (that require a deeper architecture that captures distinct abstractions, as in our task) is a serious optimization challenge, this challenge could also be an issue for brains, making it all the more important to explain how humans manage to deal with such problems (presumably thanks to the guidance of other humans, e.g., by providing hints about intermediate abstractions).\", \"we_wanted_to_show_that_there_are_problems_that_are_inherently_hard_for_current_machine_learning_algorithms_and_motivate_cultural_learning\": \"distributed and parallelized learning of such higher level concepts might be more efficient for solving this kind of tasks.\\n\\n> Lastly, an issue I am confused by: if the second stage task (given the correct intermediate results from the first stage) corresponds to an 'XOR-like' problem, how come a single perceptron in the second stage can solve it?\\n\\nIn the second stage THERE ARE HIDDEN UNITS. It is not a simple perceptron but a simple MLP. We have used a RELU MLP with 2048 hidden units and a sigmoid output trained with a crossentropy training objective.\"}", "{\"title\": \"review of Knowledge Matters: Importance of Prior Information for Optimization\", \"review\": \"In this paper, the authors provide an exposition of curriculum learning and cultural evolution as solutions to the effective local minimum problem. 
The authors provide a detailed set of simulations that support a curriculum theory of learning, which rely on a supervisory training signal of intermediate task variables that are relevant for the task.\", \"pros\": \"This work is important to probe the limitations of current algorithms, especially as the deep learning field continues to have success.\\n\\nA great thing about this paper is that it got me thinking about new classes of algorithms that might effectively solve the mid-level optimization and more effective strategies for training deep networks for practical tasks.\\n\\nThe simulations are well described and compelling.\", \"cons\": \"The exposition on curriculum learning could be condensed.\\n(minor) The demonstrative problem (sprite counting) is a visual perception problem and therefore carries with it the biases of our own perception and inferred strategies. Maybe the overall argument might be bolstered by the addition of a more abstract example?\", \"here_are_some_questions\": \"why so many image regions? Why use an 8x8 grid?\\nWon't 3 regions suffice to make the point? Or is this related to the complexity of the problem.\", \"a_related_question\": \"how are the results effected by the number of these regions? Maybe some reduced tests at the extremes would be interesting, i.e. with only 3 regions, and 32 (you have 64 already)?\\n\\nIn the networks that solve the task, are the weights that are learned symmetric over the image regions? i.e. are these weights identical (maybe up to some scaling and sign flip). Is there anything you have determined about the structure of the learned second layer of the IKGNN?\\n\\nFurthermore, what about including a 'weight sharing' constraint in the general MLP model (the one that does not solve the problem, but has the same structure as the one that does)? Would including this constraint change the solution? (the constraint is already in the P1NN, but what about adding it into the P2NN?)\", \"another_way_to_ask_this_is\": \"Is enforcing translation invariance in the network sufficient to achieve good performance, or do we need to specifically train for the sprite discrimination?\", \"a_technical_point_about_the_assumption_of_human_performance_on_this_task\": \"Do we know if humans can solve this problem 'in a glance?': flashing the image for a small amount of time ~100-200msecs. Either with or without a mask?\\nIt seems that the networks you have derived are solving such a problem 'in a glance.'\", \"a_more_meta_comment\": \"Is there an argument to be made that the sequential nature of language allows humans to solve this task?\", \"even_the_way_you_formulate_the_problem_suggests_this_sequential_process\": \"'are all of the sprites in the image the same?': in other words 'find the sprites, then decide if they are the same'\\nWhen I imagine solving this problem myself, I imagine performing a more sequential process: look at one sprite, then the next, (is it the same?, if it is): look at the next sprite (is it the same?).\\nI know that we can consider this problem to be a concrete example of a more abstract learning problem, but it's not clear if humans can solve such problems without sequential processing.\\nAnyway, this is not a criticism, per se, just food for thought.\"}", "{\"reply\": \"> The work by Fleuret et al (Comparing machines and humans on a visual categorization test. PNAS 2011) needs to be discussed. 
This paper focuses on a single task which appears to be a special case from the longer list of 'reasoning' tasks proposed by Fleuret et al.\\n\\nYes we agree that some of the tasks in the Fleuret et al. paper are similar to our task. We cited this paper in the new revision. Thanks for pointing it out.\\n\\nThe biggest difference between the Fleuret et al paper and our approach is that we purposely did not use any preprocessing, in order to make the task *difficult* and show the limitations of a vast range of learning algorithms. This highlights differences between the goals of those papers, of course.\\n\\n> In addition, the proposed study reports a null result, which is of course always a little problematic (the fact that the authors did not manage to train a classical NN to solve the problem does not mean it is impossible). At the same time, the authors have explored reasonably well the space of hyper parameters and seem to have done their best in getting the NN to succeed.\\n\\nWe agree with that statement. Nonetheless, negative results (especially when they are confirmed by other labs) can have a powerful impact of research, by highlighting the limitations of current algorithms and thus directing research fruitfully towards addressing important challenges. It is unfortunately more difficult to publish negative results in our community, in part because computer scientists do not have as much as other scientists (such as biologists) the culture of replicating experiments and publishing these validations.\\n\\n> Minor points: The structure of the paper is relatively confusing. Sections 1.1 and 2 provide a review of some published work by the authors and does not appear to be needed for understanding the paper. In my view the paper could be shortened or at least most of the opinions/speculations in the introduction should be moved to the discussion section.\\n\\nWe disagree. The main motivation for these experiments was to empirically validate some aspects of the hypotheses discussed in Bengio 2012 on local minima and cultural evolution. If learning more abstract concepts (that require a deeper architecture) is a serious optimization challenge, this challenge could also be an issue for brains, making it all the more important to explain how humans manage to deal with such problems (presumably thanks to the guidance of other humans, e.g., by providing hints about intermediate abstractions).\"}" ] }
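Editor's note: the exchange above turns on the exact input encodings used in the authors' follow-up experiments (one-hot patch configurations versus a disentangled category/rotation/scale code). As a reading aid only, here is a minimal sketch of the symbolic, disentangled variant described in the reply — 64 patches, each empty or holding one sprite drawn from 10 categories, 4 rotations and 2 scales, with the label indicating whether all sprites share a category. The choice of three sprites per image, the interpretation of the two-valued factor as scale, and the 50/50 label balance are assumptions made for illustration, not the authors' actual data generator.

```python
import numpy as np

rng = np.random.default_rng(0)
N_PATCHES, N_CLASSES, N_ROT, N_SCALE = 64, 10, 4, 2   # 8x8 grid; 10*4*2 = 80 sprite configs + "empty"

def one_hot(i, n):
    v = np.zeros(n)
    v[i] = 1.0
    return v

def sample_image(n_sprites=3):
    """Return the disentangled per-patch encoding (10+4+2 one-hot bits per patch,
    all zeros for empty patches) and the label 'do all sprites share a category?'."""
    if rng.random() < 0.5:                                   # positive example: one shared category
        cats = np.full(n_sprites, rng.integers(N_CLASSES))
    else:                                                    # negative example: all categories distinct
        cats = rng.choice(N_CLASSES, size=n_sprites, replace=False)
    spots = rng.choice(N_PATCHES, size=n_sprites, replace=False)
    x = np.zeros((N_PATCHES, N_CLASSES + N_ROT + N_SCALE))
    for p, c in zip(spots, cats):
        x[p] = np.concatenate([one_hot(c, N_CLASSES),
                               one_hot(rng.integers(N_ROT), N_ROT),
                               one_hot(rng.integers(N_SCALE), N_SCALE)])
    return x.ravel(), int(len(np.unique(cats)) == 1)

X, y = zip(*(sample_image() for _ in range(1000)))
X, y = np.array(X), np.array(y)                              # 1000 x 1024 inputs, binary labels
```

Feeding X to a generic classifier, versus keeping only the first 10 bits of each patch, reproduces in spirit settings (2) and (3) discussed in the authors' reply.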
BBIbj9w8Lvj8F
Efficient Learning of Domain-invariant Image Representations
[ "Judy Hoffman", "Erik Rodner", "Jeff Donahue", "Kate Saenko", "Trevor Darrell" ]
We present an algorithm that learns representations which explicitly compensate for domain mismatch and which can be efficiently realized as linear classifiers. Specifically, we form a linear transformation that maps features from the target (test) domain to the source (training) domain as part of training the classifier. We optimize both the transformation and classifier parameters jointly, and introduce an efficient cost function based on misclassification loss. Our method combines several features previously unavailable in a single algorithm: multi-class adaptation through representation learning, ability to map across heterogeneous feature spaces, and scalability to large datasets. We present experiments on several image datasets that demonstrate improved accuracy and computational advantages compared to previous approaches.
[ "image representations", "domain", "efficient learning", "algorithm", "representations", "domain mismatch", "linear classifiers", "linear transformation", "features", "target" ]
conferenceOral-iclr2013-conference
https://openreview.net/pdf?id=BBIbj9w8Lvj8F
https://openreview.net/forum?id=BBIbj9w8Lvj8F
ICLR.cc/2013/conference
2013
{ "note_id": [ "t-wFtMYSdpR8v", "y-XNy_0Refysb", "u3MkubcB_YIB0", "tWNADGgy0XWy2", "JnShnsXduOpVA", "Ua0HJI2r-Waro", "FPpzPM-IHKPkZ" ], "note_type": [ "comment", "review", "review", "comment", "review", "review", "comment" ], "note_created": [ 1362994020000, 1362214260000, 1362393540000, 1362971700000, 1362367200000, 1362783240000, 1362971820000 ], "note_signatures": [ [ "Judy Hoffman, Erik Rodner, Jeff Donahue, Trevor Darrell, Kate Saenko" ], [ "anonymous reviewer 9aa4" ], [ "anonymous reviewer feb2" ], [ "Judy Hoffman, Erik Rodner, Jeff Donahue, Trevor Darrell, Kate Saenko" ], [ "Judy Hoffman, Erik Rodner, Jeff Donahue, Trevor Darrell, Kate Saenko" ], [ "anonymous reviewer 36a3" ], [ "Judy Hoffman, Erik Rodner, Jeff Donahue, Trevor Darrell, Kate Saenko" ] ], "structured_content_str": [ "{\"reply\": \"Please see the comment below (from March 3rd).\\n\\nWe have updated the paper to incorporate your comments.\"}", "{\"title\": \"review of Efficient Learning of Domain-invariant Image Representations\", \"review\": \"This paper focuses on multi-task learning across domains, where both the data generating distribution and the output labels can change between source and target domains. It presents a SVM-based model which jointly learns 1) affine hyperplanes that separate the classes in a common domain consisting of the source and the target projected to the source; and 2) a linear transformation mapping points from the target domain into the source domain.\\n\\nPositive points\\n\\n1) The method is dead simple and seems technically sound. To the best of my knowledge it's novel, but I'm not as familiar with the SVM literature - I am hoping that another reviewer comes from the SVM community and can better assess its novelty.\\n2) The paper is well written and understandable\\n3) The experiments seem thorough: several datasets and tasks are considered, the model is compared to various baselines. The model is shown to outperform contemporary domain adaption methods, generalize to novel test categories at test time (which many other methods cannot do) and can scale to large datasets.\\n\\nNegative points\", \"i_have_one_major_criticism\": \"the paper doesn't seem really focused on representation learning - it's more a paper about a method for multi-task learning across domains which learns a (shallow, linear) mapping from source to target. I agree - it's a representation but there's no real analysis or focus on the representation itself - e.g. what is being captured by the representation. The method is totally valid, but I just get the sense that it's a paper that could fit well with CVPR or ICCV (i.e. a good vision paper) where the title says 'represention learning', and a few sentences highlight the 'representation' that's being learned, however the method nor the paper's focus is really on learning interesting representations. On one hand I question its suitability for ICLR and it's appeal to the community (compared to CVPR/ICCV, etc.) but on the other hand, I think it's great to encourage diversity in the papers/authors at the conference and having a more 'visiony'-feeling paper is not a bad thing.\\n\\nComments\\n--------\\n\\nCan you state up front what is meant by the asymmetry of the transform (e.g. when it's first mentioned)? Later on in the paper it becomes clear that it has to do with the source and target having different feature dimensions but it wasn't obvious to me at the beginning of the paper. 
\\n\\nJust before Eq (4) and (5) it says that 'we begin by rewriting Eq 1-3 with soft constraints (slack)'. But where are the slack variables in Eq 4?\"}", "{\"title\": \"review of Efficient Learning of Domain-invariant Image Representations\", \"review\": \"This paper proposes to make domain adaptation and multi-task learning easier by jointly learning the task-specific max-margin classifiers and a linear mapping from a new target space to the source space; the loss function encourages the mapped features to lie on the correct side of the hyperplanes learned for each task of the hyperplanes of the max-margin classifiers. Experiments show that the mapping performs as well or better as existing domain adaptation methods, but can scale to larger problems while many earlier approaches are too costly.\\n\\nOverall the paper is clear, well-crafted, and the context and previous work are well presented. The idea is appealing in its simplicity, and works well.\", \"pros\": \"the idea is intuitive and well justified; it is appealing that the method is flexible and can tackle cases where labels are missing for some categories.\\nThe paper is clear and well-written.\\nExperimental results are convincing enough; while the results are not outperforming the state of the art (results are within the standard error of previously published performance), the authors' argument that their method is better suited to cases where domains are more different seems reasonable and backed by their experimental results.\", \"cons\": \"this method would work only in cases where a simple general linear rotation of features would do a good job placing features in a favorable space.\\nThe method also gives a privileged role to the source space, while methods that map features to a common latent space have more symmetry; the authors argue that it is hard to guess the optimal dimension of the latent space -- but their method simply constrains it to the size of the source space, so there is no guarantee that this would be any more optimal.\"}", "{\"reply\": \"Thank you for your review. In this paper we present a method that learns an asymmetric linear mapping between the source and target feature spaces. In general, the feature transformation learning can be kernelized (the optimization framework can be formulated as a standard QP). However, for this work we focus on the linear case because of it's scalability to a large number of data points. We show that using the linear framework we perform as well or better than other methods which learn a non-linear mapping.\\n\\nWe learn a transformation between the target and source points which can be expressed by the matrix W in our paper. In this paper, we use this matrix to compute the dot product in the source domain between theta_k and the transformed target points (Wx^t_i). However, if we think of W (an asymmetric matrix) as begin decomposed as W = A'B, then the dot product function can be interpreted as theta_k'A'Bx^t_i. In other words it could be interpreted as the dot product in some common latent space between source points transformed by A and target points transformed by B. We propose learning the W matrix rather than A,B directly so that we do not have to specify the dimension of the latent space.\"}", "{\"review\": \"Thank you for your feedback. 
We argue that the task of adapting representations across domains is one that is common to all representation learning challenges, including those based on deep architectures, metric learning methods, and max-margin transform learning. Our insight into this problem is to use the source classifier to inform the representation learned for the target data. Specifically, we jointly learn a source domain classifier and a representation for the target domain, such that the target points can be well classified in the source domain.\\n\\nWe present a specific algorithm using an SVM classifier and testing on visual domains, however the principles of our method are applicable to both a range of methods for learning and classification (beyond SVM) as well as a range of applications (beyond vision).\\n\\nIn addition, thank you for your comments section. We will clarify what is meant by asymmetric transform and modify the wording around equations (4-5) to reflect the math shown, which has soft constraints and no slack variables.\"}", "{\"title\": \"review of Efficient Learning of Domain-invariant Image Representations\", \"review\": \"The paper presents a new method for learning domain invariant image\\nrepresentations. The proposed approach simultaneously learns a linear\\nmapping of the target features into the source domain and the\\nparameters of a multi-class linear SVM classifier. Experimental\\nevaluations show that the proposed approach performs similarly or\\nbetter than previous art. The new algorithm presents computational\\nadvantages with respect to previous approaches.\\n\\nThe paper is well written and clearly presented. It addresses an\\ninteresting problem proposing that has received attention in recent\\nyears. The proposed method is considerably simpler than competitive\\napproaches with similar (or better) performance (in the setting of the\\nreported experiments). The method is not very novel but manages to\\nimprove some drawbacks of previous approaches.\", \"pros\": \"- the proposed framework is fairly simple and the provided\\nimplementation details makes it easy to reproduce\\n- experimental evaluation is presented, comparing the proposed method\\nwith several competing approaches. The amount of empirical evidence\\nseems sufficient to back up the claims.\", \"cons\": \"- Being this method general, I think that it would have been very good\\nto include an example with more distinct source and target feature\\nspaces (e.g. text categorization), or even better different\\nmodalities.\", \"comments\": \"In the work [15], the authors propose a metric that measures the\\nadaptability between a pair of source and target domains. In this\\nsetting if several possible source domains are available, it selects\\nthe best one. How could this be considered in your setting?\\n\\nIn the first experimental setting (standard domain adaptation\\nproblem), I understand that the idea the experiment is to show how the\\nlabeled data in the source domain can help to better classify the data\\nin the target domain. It is not clear to me how the SVM trained with\\ntraining data, SVM_t, of the target domain. Is this done only with the\\nlimited set of labeled data in the target domain? What is the case for\\nthe SVM_s?\\n\\nLooking to the last experimental setting, I suppose that the SVM_s\\n(trained using source training data) also includes the transformed\\ndata from the target domain. 
Otherwise, I don't understand how the\\nperformance can increase by increasing the number of labeled target\\nexamples.\"}", "{\"reply\": \"Thank you for your feedback. We would like to start by clarifying a few points from your comments section. First, our first experiment (standard domain adaptation setting) SVM_t is the classifier learned from being trained with only the limited available data from the target domain. So, for example when we're looking at the shift between amazon to webcam (a->w) we have a lot of training data from amazon and a very small amount of the webcam dataset. SVM_t for this example would be an SVM trained on just the small amount of data from webcam. Note that in the new category experiment setting it is not possible to train SVM_t because there are some categories that have no labeled examples in the target. Second, for our last experiment, SVM_s does not (and should not) change as the number of points in the target is increased. SVM_s is an SVM classifier trained using only source data. In the figure it is represented by the dotted cyan line, which remains constant (at around 42%) as the number of labeled target examples grows. As a third point, if we did have a metric to determine the adaptability of a (source,target) domain pair then we could simply choose to use the source data which is most adaptable to our target data. However, [15] provides a metric to determine a 'distance' between the source and target subspace, not necessarily an adaptability metric. The two might be correlated depending on the adaptation algorithm you use. Namely, if a (source,target) pair are 'close' you might assume they are easily adaptable. But, with our method we learn a transformation between the two spaces, so it's possible for a (source,target) pair to initially be very different according to the metric from [15], but be very adaptable. For example: in [15] the metric said that Caltech was most similar to Amazon, followed by Webcam, followed by Dslr. However, if you look at Table 1 you see that we received higher accuracy when adapting between dslr->caltech then from webcam->caltech. So even though webcam was initially more similar to caltech than dslr to caltech, we find that dslr is more 'adaptable' to caltech.\\n\\nFinally, the idea of using more definite domains or even different modalities is very interesting to us and is something we are considering for future work. We do feel that the experiments we present do justify our claims that our algorithm performs comparable or better than state of the art techniques and is simultaneously applicable to a larger variety of possible adaptation scenarios.\"}" ] }
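Editor's note: as a companion to the discussion above (the asymmetric map W, the W = A'B latent-space reading, and scoring target points via theta_k' W x_t), here is a minimal sketch of the kind of joint transform-plus-classifier optimization the paper describes — not the authors' implementation. The one-vs-all hinge reduction, the plain squared-norm regularizers, the near-identity initialization and all hyperparameters are assumptions chosen for brevity; the paper's actual multi-class formulation and solver differ.

```python
import numpy as np

def joint_transform_and_classifiers(Xs, ys, Xt, yt, n_classes,
                                    lam_w=1.0, lam_theta=1.0, lr=1e-3, n_iter=2000, seed=0):
    """Sketch of the joint objective: one-vs-all hinge losses on source points and on
    target points mapped into the source space by W, plus squared-norm regularizers,
    minimized by plain subgradient descent.
    Xs: n_s x d_s, Xt: n_t x d_t; ys, yt: integer label arrays."""
    rng = np.random.default_rng(seed)
    d_s, d_t = Xs.shape[1], Xt.shape[1]
    theta = 0.01 * rng.standard_normal((n_classes, d_s))      # one hyperplane per class
    W = np.zeros((d_s, d_t))
    k = min(d_s, d_t)
    W[:k, :k] = np.eye(k)                                     # start W near an identity map
    Ys = np.where(np.arange(n_classes)[None, :] == ys[:, None], 1.0, -1.0)   # n_s x K
    Yt = np.where(np.arange(n_classes)[None, :] == yt[:, None], 1.0, -1.0)   # n_t x K
    for _ in range(n_iter):
        Zt = Xt @ W.T                                         # mapped target points, n_t x d_s
        active_s = ((Ys * (Xs @ theta.T)) < 1).astype(float)  # hinge terms currently violated
        active_t = ((Yt * (Zt @ theta.T)) < 1).astype(float)
        g_theta = (lam_theta * theta
                   - (active_s * Ys).T @ Xs / len(Xs)
                   - (active_t * Yt).T @ Zt / len(Xt))
        g_W = lam_w * W - theta.T @ ((active_t * Yt).T @ Xt) / len(Xt)
        theta -= lr * g_theta
        W -= lr * g_W
    return theta, W
```

Because W is d_s x d_t, the sketch also covers the heterogeneous case raised in the first review, where source and target feature dimensions differ.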
OVyHViMbHRm8c
Visual Objects Classification with Sliding Spatial Pyramid Matching
[ "Hao Wooi Lim", "Yong Haur Tay" ]
We present a method for visual object classification using only a single feature, transformed color SIFT, with a variant of Spatial Pyramid Matching (SPM) that we call Sliding Spatial Pyramid Matching (SSPM), trained with an ensemble of linear regression models (provided by LINEAR) to obtain a state-of-the-art result of 83.46% on Caltech-101. SSPM is a special version of SPM where, instead of dividing an image into K regions, a subwindow of fixed size is slid around the image with a fixed step size. For each subwindow, a histogram of visual words is generated. To obtain the visual vocabulary, instead of performing K-means clustering, we randomly pick N exemplars from the training set and encode them with a soft non-linear mapping method. We then train 15 models, each with a different visual word size, with linear regression. All 15 models are then averaged together to form a single strong model.
[ "visual objects classification", "spatial pyramid", "spatial pyramid matching", "spm", "sspm", "linear regression", "image", "subwindow", "models", "visual object classification" ]
conferencePoster-iclr2013-workshop
https://openreview.net/pdf?id=OVyHViMbHRm8c
https://openreview.net/forum?id=OVyHViMbHRm8c
ICLR.cc/2013/conference
2013
{ "note_id": [ "-MIjMM8a1GMYx", "mqGM7L9xJ-7Oz" ], "note_type": [ "review", "review" ], "note_created": [ 1362430380000, 1362272400000 ], "note_signatures": [ [ "anonymous reviewer 9ba5" ], [ "anonymous reviewer 9dc6" ] ], "structured_content_str": [ "{\"title\": \"review of Visual Objects Classification with Sliding Spatial Pyramid Matching\", \"review\": \"Summary of contributions:\\n\\nThe paper presented a method to achieve a state-of-the-art accuracy on the object recognition benchmark Caltech101. The method used two major ingredients: 1. a sliding window of histograms (called sliding spatial pyramid matching) , 2. randomized vocabularies to generate different models and combine them. The authors claimed that, using only one image feature (transformed color SIFT), the method achieved really good results on Caltech101.\", \"assessment_of_novelty_and_quality\": \"Though the accuracy looks impressive, the paper offers limited research value to the machine learning community. The success is largely engineering, lacking insights that are informative to readers. \\n\\nThe sliding window representation does not explore multiple scales. Therefore I don't understand why it is still called 'pyramid'. \\n\\nI hope the authors would try the methods on large-scale datasets like ImageNet. If good result obtained, then the work will be of great value to application.\"}", "{\"title\": \"review of Visual Objects Classification with Sliding Spatial Pyramid Matching\", \"review\": \"This paper replaces the spatial pyramidal pooling in a spatial pyramid pooling by a sliding-window style pooling.\\nBy using this method and color SIFT descriptors, state-of-the-art results are obtained on the Caltech-101 dataset (83.5% accuracy).\\n\\nThe contribution in this paper would be rather slight as is, but this is all the more true since it seems the idea of using sliding window pooling has already appeared in an older paper, with good results (they call the sliding windows 'components'):\\n\\n'A Boosting Sparsity Constrained Bi-Linear Model for Object Recognition'\\nIEEE Multimedia 2012\\nChunjie Zhang Jing Liu ; Qi Tian ; Yanjun Han ; Hanqing Lu ; Songde Ma\\n\\n Simply using it with color SIFT descriptors does not constitute enough novelty for accepting this paper.\"}" ] }
GgtWGz7e5_MeB
Joint Space Neural Probabilistic Language Model for Statistical Machine Translation
[ "Tsuyoshi Okita" ]
A neural probabilistic language model (NPLM) offers a way to achieve better perplexity than an n-gram language model and its smoothed variants. This paper investigates its application in bilingual NLP, specifically Statistical Machine Translation (SMT). We focus on the perspective that an NPLM has the potential to complement the `resource-constrained' bilingual resources with potentially `huge' monolingual resources. We introduce an ngram-HMM language model as an NPLM, using a non-parametric Bayesian construction. To facilitate its application to various tasks, we propose a joint space model of the ngram-HMM language model. We present a system combination experiment in the area of SMT. One finding was that our treatment of noise improved the results by 0.20 BLEU points when the NPLM is trained on a relatively small corpus, in our case 500,000 sentence pairs, which is often the case in practice due to the long training time of an NPLM.
[ "nplm", "language model", "statistical machine translation", "smt", "idea", "better perplexity", "smoothed language models" ]
reject
https://openreview.net/pdf?id=GgtWGz7e5_MeB
https://openreview.net/forum?id=GgtWGz7e5_MeB
ICLR.cc/2013/conference
2013
{ "note_id": [ "MUE4IYdQ_XMbN", "A6lxA54Jzv1yo", "Ezy1znNS-ZwLb" ], "note_type": [ "review", "review", "review" ], "note_created": [ 1360788360000, 1362021540000, 1361986980000 ], "note_signatures": [ [ "anonymous reviewer 5328" ], [ "anonymous reviewer a273" ], [ "anonymous reviewer 5a64" ] ], "structured_content_str": [ "{\"title\": \"review of Joint Space Neural Probabilistic Language Model for Statistical Machine\\n Translation\", \"review\": \"The paper describes a Bayesian nonparametric HMM augmented with a hierarchical Pitman-Yor language model and slightly extends it by introducing conditioning on auxiliary inputs, possibly at each timestep. The observations are used for incorporating information from a separately trained model, such as LDA. In spite of the title and the abstract the paper has nothing to do with neural language models and very little with representation learning, as the author bizarrely uses the term NPLM to refer to the above n-gram HMM model. The model is evaluated as a part of a machine translation pipeline.\\n\\nThis is a very poorly written paper. The quality of writing makes it at times very difficult to understand what exactly has been done. The paper makes no significant contributions from the machine learning standpoint, as what the author calls the 'n-gram HMM' is not novel, having been introduced by Blunsom & Cohn in 2011. The only material related to representation learning is not new either as it involves running LDA on documents. The rest of the paper is about tweaking a translation pipeline and is far too specialized for ICLR.\", \"reference\": \"Blunsom, Phil, and Trevor Cohn. 'A hierarchical Pitman-Yor process HMM for unsupervised part of speech induction.' Proceedings of the 49th Annual Meeting of the ACL, 2011\"}", "{\"title\": \"review of Joint Space Neural Probabilistic Language Model for Statistical Machine\\n Translation\", \"review\": \"Author proposes 'n-gram HMM language model', which is inconsistent with the name of the paper. Also, the introduction is confusing and misleading.\\n\\nOverall the paper presents weak results. For example in Section 4 - the perplexity results are insignificantly better than from n-gram models, and most importantly are not reproducible: it is not even mentioned what is the amount of the training data, what is the order of n-gram models etc.\\n\\nAuthor uses irsltm, altough Srilm is cited too (giving it credit for n-gram language modeling, for some unknown reason); overall, many citations are unjustified, and unrelated to the paper itself (probably the only reason is to make everyone happy).\\n\\n0.2 bleu improvement is generally supposed to be insignificant.\\n\\nI don't see any useful information in the paper that can help others to improve their work (rather opposite). Unless author can obtain better results (which I honestly believe is not possible, with the explored approach), I don't see a reason why this work should be published.\"}", "{\"title\": \"review of Joint Space Neural Probabilistic Language Model for Statistical Machine\\n Translation\", \"review\": \"To quote the authors, this paper introduces a n-gram-HMM language\\nmodel as neural probabilistic language model (NPLM) using the\\nnon-parametric Bayesian construction. This article is really confused\\nand describes a messy mix of different approaches. At the end, it is\\nvery hard to understand what the author wanted to do and what he has\\ndone. 
This paper can be improved in many ways before it could be\", \"published\": \"authors must clarify their motivations; the interaction\\nbetween neural and HMM models could be describe more precisely;\", \"in_the_reading_order\": \"In the introduction, authors mistake the MERT process with the\\ntranslation process. While MERT is used to tune the weights of a\\nlog-linear combination of models in order to optimize BLEU for\\ninstance, the NPLM are used as an additionnal model to re-rank nbest\\nlists. Moreover, the correct citation for MERT is the ACL 2003 paper\\nof F. Och.\\n\\nIn section 2, the authors introduce a HMM language model. A lot of\", \"questions_remain\": \"What does the hidden states intend to capture ?\\nWhat is the motivation ? What is the joint distribution associated to\\nthe graphical model of figure 1 ? How the word n-gram distributions\\nare related to the hidden states ?\\n\\nIn section 3, the authors enhance their HMM LM with an additional row\\nof hidden states (joint space HMM). At this point the overall goal of\\nthe paper is for me totally unclear. \\n\\nFor the following experimental sections, a lot of information on the\\nset put is missing. Experiments cannot be reproduced given based on\\nthe content of that paper. For example, the intrinsic evaluation\\nintroduces ngram-HMM with one or two features. The very confused\\nexplanation of these features is provided further in the\\narticle. Authors do not describe the data-set (there are a lot of\\neuroparl version), nor the order of the LMs under consideration.\\n\\nIn section 5, the following sentence puzzled me: 'Note that although\\nthis experiment was done using the ngram-HMM language model, any NPLM\\nmay be sufficient for this purpose. In this sense, we use the term\\nNPLM instead of ngram-HMM language model.' Moreover, the first feature\\nis derived from a NPLM, but how this NPLM is learnt, on which dataset,\\nwhat are the parameters, the order of the model and how this feature\\nis derived. I could not find the answers in this article. The rest of\\nthe paper is more and more unclear. At the end, authors shows a BLEU\\nimprovement of 0.2 on a system combination task. While I don't\\nunderstand the models used, the gain is really small and I wonder if\\nit is significant. For comparison's sake, MBR decoding usually provide\\na BLEU improvement of at least 0.2.\"}" ] }
6ZY7ZnIK7kZKy
An Efficient Sufficient Dimension Reduction Method for Identifying Genetic Variants of Clinical Significance
[ "Momiao Xiong", "Long Ma" ]
Fast and cheap next-generation sequencing technologies will generate unprecedentedly massive and high-dimensional genomic and epigenomic variation data. In the near future, a routine part of the medical record will include sequenced genomes. A fundamental question is how to efficiently extract genomic and epigenomic variants of clinical utility, which will provide information for optimal wellness and intervention strategies. The traditional paradigm for identifying variants of clinical validity is to test the variants for association. However, significantly associated genetic variants may or may not be useful for the diagnosis and prognosis of diseases. An alternative to association studies for finding genetic variants of predictive utility is to systematically search for variants that contain sufficient information for phenotype prediction. To achieve this, we introduce the concepts of sufficient dimension reduction (SDR) and the coordinate hypothesis, which project the original high-dimensional data onto a very low-dimensional space while preserving all information about the response phenotypes. We then formulate the clinically significant genetic variant discovery problem as a sparse SDR problem and develop algorithms that can select significant genetic variants from up to ten million predictors, aided by dividing the whole-genome SDR into a number of subSDR problems defined over genomic regions. The sparse SDR problem is in turn formulated as a sparse optimal scoring problem, but with a penalty that can remove entire row vectors from the basis matrix. To speed up computation, we develop a modified alternating direction method of multipliers (ADMM) to solve the sparse optimal scoring problem, which can easily be implemented in parallel. To illustrate its application, the proposed method is applied to simulation data and the NHLBI Exome Sequencing Project dataset.
[ "genetic variants", "variants", "genomic", "information", "clinical significance", "clinical significance fast", "cheaper next generation", "technologies", "massive" ]
reject
https://openreview.net/pdf?id=6ZY7ZnIK7kZKy
https://openreview.net/forum?id=6ZY7ZnIK7kZKy
ICLR.cc/2013/conference
2013
{ "note_id": [ "x8ctSDlKbu8KB", "rLP-LGuBmzyRt" ], "note_type": [ "review", "review" ], "note_created": [ 1362277800000, 1362195240000 ], "note_signatures": [ [ "anonymous reviewer 1ff5" ], [ "anonymous reviewer 34e0" ] ], "structured_content_str": [ "{\"title\": \"review of An Efficient Sufficient Dimension Reduction Method for Identifying\\n Genetic Variants of Clinical Significance\", \"review\": \"Summary of the paper:\\n\\nThis paper proposes a sparse extension of sufficient dimension reduction (the problem of findind a linear subspace so that the output and the input are conditionally independent given the projection of the input onto that subspace). \\n\\nThe sparse extension is formulated through the eigenvalue formulation of sliced inverse regression.\\n\\nThe method is finally applied to identifying genetic variants of clinical significance.\", \"comments\": \"-Other sparse formulations of SIR have been proposed and the new method should be compared to it (see two below)\\n\\nLexin Li, Christopher J. Nachtsheim, Sparse Sliced Inverse Regression, Technometrics. Volume 48, Issue 4, 2006\\n\\nLexin Li. Sparse sufficient dimension reduction. Biometrika (2007) 94(3): 603-613 \\n\\n-In experiments, it would have been nice to see another method run on these data.\\n\\n-The paper appears out of scope for the conference\"}", "{\"title\": \"review of An Efficient Sufficient Dimension Reduction Method for Identifying\\n Genetic Variants of Clinical Significance\", \"review\": \"The paper describes the application of a supervised projection method (Sufficient Dimension Reduction - SDR) for a regression problem in bioinformatics. SDR attempts to find a linear projection space such that the response variable depends on the linear projection of the inputs. The authors make a brief presentation of SDR and formulate it as an optimal scoring problem. It takes the form of a constrained optimization problem which can be solved using an alternating minimization procedure. This method is then applied to prediction problems in Bioinformatics.\\nThe form and organization of the paper are not adequate. The projection method is only briefly outlined. Notations are not correct e.g. the same notations are used for random variables and data matrices, some of the notations or abbreviations are not introduced. The description of the applications remains extremely unclear. The abstract and the contribution do not correspond. The format of the paper is not NIPS format.\\nThe proposed method is an adaptation of existing work. The formulation of SDR as a constrained problem is not new. The contribution here might be a variant of the alternating minimization technique used for this problem. The application is only briefly sketched and cannot be really appreciated from this description.\\nPro\\nDescribes an application of SDR, better known in the statistical community, which is an alternative to other matrix factorization techniques used in machine learning. \\nCons\\nForm and organization of the paper\\nWeak technical contribution - algorithmic and applicative\"}" ] }
N_c1XDpyus_yP
A Nested HDP for Hierarchical Topic Models
[ "John Paisley", "Chong Wang", "David Blei", "Michael I. Jordan" ]
We develop a nested hierarchical Dirichlet process (nHDP) for hierarchical topic modeling. The nHDP is a generalization of the nested Chinese restaurant process (nCRP) that allows each word to follow its own path to a topic node according to a document-specific distribution on a shared tree. This alleviates the rigid, single-path formulation of the nCRP, allowing a document to more easily express thematic borrowings as a random effect. We demonstrate our algorithm on 1.8 million documents from The New York Times.
[ "nested hdp", "hierarchical topic models", "nhdp", "ncrp", "hierarchical topic modeling", "generalization", "word", "path" ]
conferenceOral-iclr2013-workshop
https://openreview.net/pdf?id=N_c1XDpyus_yP
https://openreview.net/forum?id=N_c1XDpyus_yP
ICLR.cc/2013/conference
2013
{ "note_id": [ "BAmMaGEF72a0w", "WZaI2aHNOvDz7", "cBZ06aJhuH6Nw", "s3Zn3ZANM4Twv" ], "note_type": [ "review", "review", "review", "review" ], "note_created": [ 1362389460000, 1362389640000, 1362389580000, 1362170460000 ], "note_signatures": [ [ "anonymous reviewer 95fc" ], [ "anonymous reviewer 95fc" ], [ "anonymous reviewer 95fc" ], [ "anonymous reviewer 7555" ] ], "structured_content_str": [ "{\"title\": \"review of A Nested HDP for Hierarchical Topic Models\", \"review\": \"This paper presents a novel variant of the NCRP process that overcomes the latter's main limitation, namely, that a document necessarily has to use topics from a specific path in the tree. This is accomplished by combining ideas from HDP with the NCRP process, where the entire nCRP tree is replicated for each document where a sample from each DP at each node of the original tree is used as a shared base distribution for each document's own DP.\\n\\nThe idea is novel and is an important contribution in the area of unsupervised large scale text modeling.\\n\\nAlthough the paper is strong on novelty, it seems to be incomplete in terms of presenting any evidence that the model actually works and is better than the original NCRP model. Does it learn better topics than nCRP? Is the new model a better predictor of text? Does it produce a better hierararchy of topics than the original model? Does the better representation of documents translate into better performance on any extrinsic task? Without any preliminary answers to these questions, in my mind, the work is incomplete at best.\"}", "{\"review\": \"no additional comments.\"}", "{\"review\": \"no additional comments.\"}", "{\"title\": \"review of A Nested HDP for Hierarchical Topic Models\", \"review\": \"The paper introduces a natural extension to the nested Chinese Restaurant process, where the main limitation was that a single path for the tree (from the root to a leaf) is chosen for each individual document. In this work, a document specific tree is drawn (with associated switching probabilities) which is then used to generate words in the document. Consequently, the words can represent very different topics not necessarily associated with the same path in the tree.\\n\\nThough the work is clearly interesting and important for the topic modeling community, the workshop paper could potentially be improved. The main problem is clearly the length of the submission which does not provide any kind of details (less than 2 pages of content). Though additional information can be found in the cited arxiv paper, I think it would make sense to include in the workshop paper at least the comparison in terms of perplexity (showing that it substantially outperforms nCRP) and maybe some details on efficiency of inference. \\nConversely, the page-long Figure 2 could be reduced or removed to fit the content.\\n\\nOverall, the work is quite interesting and seems to be a perfect fit for the conference. Given that an extended version is publicly available, I do not think that the above comments are really important.\", \"pros\": \"-- a natural extension of the previous model which achieves respectable results on standard benchmarks (though results are not included in the submissions)\", \"cons\": \"-- a little more information about the model and its performance could be included even in a 3-page workshop paper.\"}" ] }
kk_XkMO0-dP8W
Feature Learning in Deep Neural Networks - A Study on Speech Recognition Tasks
[ "Dong Yu", "Mike Seltzer", "Jinyu Li", "Jui-Ting Huang", "Frank Seide" ]
Recent studies have shown that deep neural networks (DNNs) perform significantly better than shallow networks and Gaussian mixture models (GMMs) on large vocabulary speech recognition tasks. In this paper we argue that the difficulty in speech recognition is primarily caused by the high variability in speech signals. DNNs, which can be considered a joint model of a nonlinear feature transform and a log-linear classifier, achieve improved recognition accuracy by extracting discriminative internal representations that are less sensitive to small perturbations in the input features. However, if test samples are very dissimilar to training samples, DNNs perform poorly. We demonstrate these properties empirically using a series of recognition experiments on mixed narrowband and wideband speech and speech distorted by environmental noise.
[ "deep neural networks", "study", "dnns", "feature", "speech recognition tasks", "perform", "shallow networks", "gaussian mixture models", "gmms" ]
conferenceOral-iclr2013-conference
https://openreview.net/pdf?id=kk_XkMO0-dP8W
https://openreview.net/forum?id=kk_XkMO0-dP8W
ICLR.cc/2013/conference
2013
{ "note_id": [ "NFxrNAiI-clI8", "ySpzfXa4-ryCM", "eMmX26-PXaMJN", "WWycbHg8XRWuv" ], "note_type": [ "review", "review", "review", "review" ], "note_created": [ 1361169180000, 1362161880000, 1362128940000, 1362989220000 ], "note_signatures": [ [ "anonymous reviewer 1860" ], [ "anonymous reviewer 778f" ], [ "anonymous reviewer cf74" ], [ "Mike Seltzer" ] ], "structured_content_str": [ "{\"title\": \"review of Feature Learning in Deep Neural Networks - A Study on Speech Recognition\\n Tasks\", \"review\": \"This paper is by the group that did the first large-scale speech recognition experiments on deep neural nets, and popularized the technique. It contains various analysis and experiments relating to this setup.\\n Ultimately I was not really sure what was the main point of the paper. There is some analysis of whether the network amplifies or reduces differences in inputs as we go through the layers; there are some experiments relating to features normalization techniques (such as VTLN) and how they interact with neural nets, and there were some experiments showing that the neural network does not do very well on narrowband data unless it has been trained on narrowband data in addition to wideband data; and also showing (by looking at the intermediate activations) that the network learns to be invariant to wideband/narrowband differences, if it is trained on both kinds of input.\\n Although the paper itself is kind of scattered, and I'm not really sure that it makes any major contributions, I would suggest the conference organizers to strongly consider accepting it, because unlike (I imagine) many of the other papers, it comes from a group who are applying these techniques to real world problems and is having considerable success. I think their perspective would be valuable, and accepting it would send the message that this conference values serious, real-world applications, which I think would be a good thing.\\n\\n--\\nBelow are some suggestions for minor fixes to the paper.\\n\\neq. 4, prime ( ') missing after sigma on top right.\\n\\nsec. 3.2, you do not explain the difference between average norm and maximum norm.\\nWhat type of matrix norm do you mean, and what are the average and maximum taken over?\\n\\nafter 'narrowband input feature pairs', one of your subscripts needs to be changed.\"}", "{\"title\": \"review of Feature Learning in Deep Neural Networks - A Study on Speech Recognition\\n Tasks\", \"review\": \"* Comments\\n** Summary\\n The paper uses examples from speech recognition to make the\", \"following_points_about_feature_learning_in_deep_neural_networks\": \"1. Speech recognition performance improves with deeper networks,\\n but the gain per layer diminishes.\\n 2. The internal representations in a trained deep network become\\n increasingly insensitive to small perturbations in the input\\n with depth.\\n 3. 
Deep networks are unable to extrapolate to test samples that are\\n substantially different from the training samples.\\n\\n The paper then shows that deep neural networks are able to learn\\n representations that are comparatively invariant to two important\", \"sources_of_variability_in_speech\": \"speaker variability and\\n environmental distortions.\\n\\n** Pluses\\n - The work here is an important contribution because it comes from\\n the application of deep learning to real-world problems in speech\\n recognition, and it compares deep learning to classical\\n state-of-the-art approaches including discriminatively trained\\n GMM-HMM models, vocal tract length normalization, feature-space\\n maximum likelihood linear regression, noise-adaptive training,\\n and vector Taylor series compensation.\\n - In the machine learning community, the deep learning literature\\n has been dominated by computer vision applications. It is good\\n to show applications in other domains that have different\\n characteristics. For example, speech recognition is inherently a\\n structured classification problem, while many vision applications\\n are simple classification problems.\\n\\n** Minuses\\n - There is not a lot of new material here. Most of the results\\n have been published elsewhere.\\n\\n** Recommendation\\n I'd like to see this paper accepted because\\n 1. it makes important points about both the advantages and\\n limitations of current approaches to deep learning, illustrating\\n them with practical examples from speech recognition and\\n comparing deep learning against solid baselines; and\\n 2. it brings speech recognition into the broader conversation on\\n deep learning.\\n\\n* Minor Issues\\n - The first (unnumbered) equation is correct; however, I don't\\n think that viewing the internal layers as computing posterior\\n probabilities over hidden binary vectors provides any useful\\n insights.\\n - There is an error in the right hand side of the unnumbered\", \"equation_preceding_equation_4\": \"it should be sigma prime (the\\n derivative), not sigma.\\n - 'Senones' is jargon that is very specific to speech recognition\\n and may not be understood by a broader machine learning audience.\\n - The VTS acronym for vector Taylor series compensation is never\\n defined in the paper.\\n\\n* Proofreading\\n the performance of the ASR systems -> the performance of ASR systems\\n\\n By using the context-dependent deep neural network -> By using context-dependent deep neural network\\n\\n the feature learning interpretations of DNNs -> the feature learning interpretation of DNNs\\n\\n a DNN can interpreted as -> a DNN can be interpreted as\\n\\n whose senone alignment label was generated -> whose HMM state alignment labels were generated\\n\\n the deep models consistently outperforms the shallow -> the deep models consistently outperform the shallow\\n\\n This is reflected in right column -> This is reflected in the right column\\n\\n 3.2 DNN learns more invariant features -> 3.2 DNNs learn more invariant features\\n\\n is that DNN learns more invariant -> is that DNNs learn more invariant\\n\\n since the differences needs to be -> since the differences need to be\\n\\n that the small perturbations in the input -> that small perturbations in the input\\n\\n with the central frequency of the first higher filter bank at 4 kHz -> with the center frequency of the first filter in the higher filter bank at 4 kHz\\n\\n between p_y|x(s_j|x_wb) and p_y|x(s_j|x_nb -> between p_y|x(s_j|x_wb) and 
p_y|x(s_j|x_nb)\\n\\n Note that the transform is applied before augmenting neighbor frames. -> Note that the transform is applied to individual frames, prior to concatentation.\\n\\n demonstrated through a speech recognition experiments -> demonstrated through speech recognition experiments\"}", "{\"title\": \"review of Feature Learning in Deep Neural Networks - A Study on Speech Recognition\\n Tasks\", \"review\": \"The paper presents an analysis of performance of DNN acoustic models in tasks where there is a mis-match between training and test data. Most of the results do not seem to be novel, and were published in several papers already. The paper is well written and mostly easy to follow.\", \"pros\": \"Although there is nothing surprising in the paper, the study may motivate others to investigate DNNs.\", \"cons\": \"Authors could have been more bold in ideas and experiments.\", \"comments\": \"\", \"table_1\": \"it would be more convincing to show L x N for variable L and N, such as N=4096, if one wants to prove that many (9) hidden layers are needed to achieve top performance (I'd expect that accuracy saturation would occur with less hidden layers, if N would increase); moreover, one can investigate architectures that would have the same number of parameters, but would be more shallow - for example, first and last hidden layers can have N=2048, and the hidden layer in between can have N=8192 - this would be more fair to show if one wants to claim that 9 hidden layers are better than 3 (as obviously, adding more parameters helps and the current comparison with 1-hidden layer NN is completely unfair as input and output layers have different dimensionality, but one can apply other tricks there to reduce complexity - for example hierarchical softmax in the output layer etc.)\\n\\n'Note that the magnitude of the majority of the weights is typically very small' - note that this is also related to sizes of the hidden layers; if hidden layers were very small, the weights would be larger (output of neuron is non-linear function of weighted sum of inputs; if there are 2048 inputs that are in range (0,1), then we can naturally expect the weights to be very small)\\n\\nSection 3 rather shows that neural networks are good at representing smooth functions, which is the opposite to what deep architectures were proposed for. Another reason to believe that 9 hidden layers are not needed.\\n\\nThe results where DNN models perform poorly on data that were not seen during training are not really striking or novel; it would be actually good if authors would try to overcome this problem in a novel way. For example, one can try to make DNNs more robust by allowing some kind of simple cheap adaptation during test time. When it comes to capturing VTLN / speaker characteristics, it would be interesting to use longer-context information, either through recurrence, or by using features derived from long contexts (such as previous 2-10 seconds).\", \"table_4_compares_relative_reductions_of_wer\": \"however, note that 0% is not reachable on Switchboard. If we would assume that human performance is around 5-10% WER, then the difference in relative improvements would be significantly smaller. 
Also, it is very common that the better the baseline is, the harder it is to gain improvements (as many different techniques actually address the same problems).\\n\\nAlso, it is possible that DNNs can learn some weak VTLN, as they typically see longer context information; it would be interesting to see an experiment where DNN would be trained with limited context information (I would expect WER to increase, but also the relative gain from VTLN should increase).\"}", "{\"review\": \"We\\u2019d like to thank the reviewers for their comments.\\n\\nWe have uploaded a revised version of the paper which we believe addresses reviewers\\u2019 concerns as well as the grammatical issues and typos. \\n \\nWe have revised the abstract and introduction to better establish the purpose of the paper. Our goal is to demonstrate that deep neural networks can learn internal representations that are robust to variability in the input, and that this robustness is maintained when large amounts of training data are used. Much work in DNNs has been on smaller data sets and historically, in speech recognition, large improvements observed on small systems usually do not translate when applied to large-scale state-of-the-art systems. \\n \\nIn addition, the paper contrasts DNN-based systems and their \\u201cbuilt in\\u201d invariance to a wide variety of variability, to GMM-based systems, where algorithms have been designed to combat unwanted variability in a source-specific manner, i.e. they are designed to address a particular mismatch, such as the speaker or the environment.\", \"we_also_believe_there_is_also_a_practical_implication_of_these_results\": \"algorithms for addressing this acoustic mismatch in speaker, environment, or other factors, which are standard and essential for GMM-based recognizers, become far less critical and potentially unnecessary for DNN-based recognizers. We think this is important for both setting future research directions and deploying large-scale systems.\\n \\nFinally, while some of the results have been published previously, we believe the inherent robustness of DNNs to such diverse sources of variability is quite interesting, and is a point that might allude readers unless these results are combined and presented together. We also want to point out that the analysis of sensitivity to the input perturbation and all of the results in Section 6 on environmental robustness are new and previously unpublished. \\n\\nWe hope by putting together all these analyses and results in one paper we can provide some insights on the strengths and weaknesses of using a DNN for speech recognition when trained with real world data.\"}" ] }
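A note on the perturbation analysis debated in the reviews above (the sigma-prime correction and reviewer 1860's question about average versus maximum norms): the quantity at stake is the first-order propagation of a small input perturbation through the layers. The following is only a generic sketch of that argument for a feed-forward network with element-wise nonlinearity sigma, weights W and biases b; it is not taken from the paper's own equations.

```latex
% First-order propagation of a small input perturbation \delta^{0} through layer \ell
% of a feed-forward network with h^{\ell} = \sigma(z^{\ell}), z^{\ell} = W^{\ell} h^{\ell-1} + b^{\ell}
% (generic sketch, not the paper's Eq. 4):
\delta^{\ell} \;\approx\; \operatorname{diag}\!\big(\sigma'(z^{\ell})\big)\, W^{\ell}\, \delta^{\ell-1},
\qquad
\|\delta^{\ell}\| \;\le\; \big\|\operatorname{diag}\!\big(\sigma'(z^{\ell})\big)\, W^{\ell}\big\|\; \|\delta^{\ell-1}\|.
```

Whether the representation becomes more or less sensitive with depth therefore depends on whether this per-layer norm (averaged or maximized over the inputs on which it is evaluated, which is exactly the distinction reviewer 1860 asks about) sits below or above one.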
rtGYtZ-ZKSMzk
Tree structured sparse coding on cubes
[ "Arthur Szlam" ]
A brief description of tree structured sparse coding on the binary cube.
[ "cubes", "sparse coding", "sparse", "brief description", "tree", "binary cube" ]
conferencePoster-iclr2013-workshop
https://openreview.net/pdf?id=rtGYtZ-ZKSMzk
https://openreview.net/forum?id=rtGYtZ-ZKSMzk
ICLR.cc/2013/conference
2013
{ "note_id": [ "7ESq7YWfqMhHk", "axSGN5lBGINJm" ], "note_type": [ "review", "review" ], "note_created": [ 1362001920000, 1362831180000 ], "note_signatures": [ [ "anonymous reviewer 2f02" ], [ "anonymous reviewer fd41" ] ], "structured_content_str": [ "{\"title\": \"review of Tree structured sparse coding on cubes\", \"review\": \"I must say I found the abstract very hard to read and would have preferred a longer version\\nto better understand how the model is different from prior work. It's not clear for instance\\nhow the proposed approach compares to other denoising methods. It's not clear neither what is\\nthe relation between tree-based decomposition and noise in MNIST. Finally, I didn't understand\\nwhy the model was restricted to binary representations. All this simply says I failed to\\ncapture the essence of the proposed approach.\"}", "{\"title\": \"review of Tree structured sparse coding on cubes\", \"review\": \"The paper extends the widely known idea of tree-structured sparse coding to the Hamming space. Instead for each node being represented by the best linear fit of the corresponding sub-space, it is represented by the best sub-cube. The idea is valid if not extremely original.\\n\\nI\\u2019m not sure it has too many applications, though. I think it is more frequent to encounter raw data residing some Euclidean space, while using the Hamming space for representation (e.g., as in various similarity-preserving hashing techniques). Hence, I believe a more interesting setting would be to have W in R^d, while keeping Z in H^K, i.e., the dictionary atoms are real vectors producing best linear fit of corresponding clusters with binary activation coefficients. This will lead to the construction of a hash function. The out-of-sample extension would happen naturally through representation pursuit (which will now be performed over the cube).\", \"pros\": \"1.\\tA very simple and easy to implement idea extending tree dictionaries to binary data\\n2.\\tFor binary data, it seems to outperform other algorithms in the presented recovery experiment.\", \"cons\": \"1.\\tThe paper reads more like a preliminary writeup rather than a real paper. The length might be proportional to its contribution, but fixing typos and putting a conclusion section wouldn\\u2019t harm.\\n2.\\tThe experimental result is convincing, but it\\u2019s rather andecdotal. I might miss something, but the author should argue convincingly that representing binary data with sparse tree-structured dictionary is interesting at all, showing a few real applications. The presented experiment on binarized MNIST digit is very artificial.\"}" ] }
OznsOsb6sDFeV
Unsupervised Feature Learning for low-level Local Image Descriptors
[ "Christian Osendorfer", "Justin Bayer", "Patrick van der Smagt" ]
Unsupervised feature learning has shown impressive results for a wide range of input modalities, in particular for object classification tasks in computer vision. Using a large amount of unlabeled data, unsupervised feature learning methods are utilized to construct high-level representations that are discriminative enough for subsequently trained supervised classification algorithms. However, it has never been quantitatively investigated yet how well unsupervised learning methods can find low-level representations for image patches without any supervised refinement. In this paper we examine the performance of pure unsupervised methods on a low-level correspondence task, a problem that is central to many Computer Vision applications. We find that a special type of Restricted Boltzmann Machines performs comparably to hand-crafted descriptors. Additionally, a simple binarization scheme produces compact representations that perform better than several state-of-the-art descriptors.
[ "methods", "unsupervised feature", "representations", "descriptors", "impressive results", "wide range", "input modalities", "particular" ]
conferencePoster-iclr2013-workshop
https://openreview.net/pdf?id=OznsOsb6sDFeV
https://openreview.net/forum?id=OznsOsb6sDFeV
ICLR.cc/2013/conference
2013
{ "note_id": [ "HH0nm6IT6SHZc", "llHR9RITMyCTz", "Hu7OueWCO4ur9", "J5RZOWF9WLSi0", "rH1Wu2q8W0ujI", "3wmH3H7ucKwu0" ], "note_type": [ "comment", "review", "review", "comment", "comment", "review" ], "note_created": [ 1363042920000, 1362057120000, 1361947080000, 1363043100000, 1363042680000, 1361968260000 ], "note_signatures": [ [ "Christian Osendorfer" ], [ "anonymous reviewer e954" ], [ "anonymous reviewer f716" ], [ "Christian Osendorfer" ], [ "Christian Osendorfer" ], [ "anonymous reviewer 3338" ] ], "structured_content_str": [ "{\"reply\": \"Dear 3338,\\nThank you for your feedback. In order to give a comprehensive\\nanswer, we quote sentences from your feedback and try to respond\\nappropriately.\\n\\n>>> It is not clear what the purpose of the paper is.\\nWe suggest that the way unsupervised feature learning\", \"methods_are_evaluated_should_be_extended\": \"A more direct evalution\\nof the learnt representations without subsequent supervised algorithms,\\nand not tied to the task of high-level object classification.\\n\\n>>> The ground truth correspondences of the dataset were found by \\n>>> clustering the image patches to find correspondences.\\nThis is not how the description of [R1] with respect to the\\nGround Truth Data (section II in [R1]) reads.\\n\\n>>> In this paper, simple clustering methods were not \\n>>> compared to such as kmeans ...\\nWe added a K-Means experiment to the new version of the paper.\\nWe run K-Means (with a soft threshold function) [R2] on the dataset,\\nit performs worse than spGRBM. (This is mentioned in the new version\\n3 of the paper).\\n\\n>>> Additionally, training in a supervised way makes much more sense\\n>>> for finding correspondences.\\nThis is not the question that we are asking. We deliberately \\navoid any supervised training because we want to investigate\\npurely unsupervised methods. We are not trying to achieve any \\nstate-of-the-art results.\\n\\n>>> It is not clear from the paper alone what is considered at match \\n>>> between descriptors\\nWe have added some text that describes how a false positive\\nrate for a fixed true positive rate is computed.\\n\\n>>> The preprocessing of the image patches seems different for each \\n>>> method. This could lead to wildly different scales of the input \\n>>> pixels and thus the corresponding representations of the various \\n>>> methods.\\nCould you elaborate why this is something to consider \\nin our setting?\\n\\n>>> In section 3.3 it is mentioned that it is surprising that L1 \\n>>> normalization works better because sparsity hurts classification \\n>>> typically.\\nWe don't say that 'sparsity hurts classification typically'. We say\\nthe exact opposite (that sparse representations are beneficial\\nfor classification) and give a reference to [R3], a paper that you\\nalso reference. We say that it is surprising that a sparse representation\\n('sparse' as produced by spGRBM, not by a normalization scheme) \\nperforms better in a distance calculation, because the general \\nunderstanding is (to our knowledge) that sparse representations \\nsuffer more from the curse of dimensionality when considering \\ndistances.\\n\\n>>> However, the sparsity in the paper is directly before the distance \\n>>> calculation, and not before being fed as input to a classifier which \\n>>> is a different setup and would thus be expected to behave differently \\n>>> with sparsity. 
This is the typical setup in which sparsity is found to \\n>>> hurt classification performance because information is being thrown \\n>>> away before the classifier is used.\\nWe don't understand what is meant here. Wasn't the gist of [R3] that\\na sparse encoding is key for good classification results? However, we\\nthink that the main point that we wanted to convey in the referred part\\nof the paper was poorly presented. We tried\\nto make the presentation of the analysis part better in the new version\\n(arxiv version 3) of the paper.\\n\\n>>> ...does not appear to apply to a wide audience as other papers have \\n>>> done a comparison of unsupervised methods in the past'\\nThose comparisions are, as explained in the paper, done always in combination\\nwith a subsequent supervised classification algorithm on a high-level\\nobject classification task. We want to avoid exactly this setting. We think\\nthat the paper is relevant for researchers working on\\nunsupervised (feature) learning methods and for researchers working in \\nComputer Vision.\\n\\nA new version (arxiv version 3) of the paper is uploaded on March 11.\\n\\n[R1] M. Brown, G. Hua, and S. Winder. Discriminative learning of local image descriptors.\\n[R2] A. Coates, H. Lee, and A. Ng. An analysis of single-layer networks in unsupervised feature learning.\\n[R3] A. Coates and A. Ng. The importance of encoding versus training with sparse coding and vector quantization.\"}", "{\"title\": \"review of Unsupervised Feature Learning for low-level Local Image Descriptors\", \"review\": \"This paper proposes to evaluate feature learning algorithms by using a low-level vision task, namely image patch matching. The authors compare three feature learning algorithms, GRBM. spGRBM and mcRBM against engineered features like SIFT and others.\\nThe empirical results unfortunately show that the learned features are not very competitive for this task. \\n\\nOverall, the paper does not propose any new algorithm and does not improve performance on any task.\\nIt does raise an interesting question though which is how to assess feature learning algorithms. This is a core problem in the field and its solution could help a) assessing which feature learning methods are better and b) designing algorithms that produce better features (because we would have better loss functions to train them). Unfortunately, this work is too preliminary to advance our understanding towards the solution of this problem (see below for more detailed comments).\", \"overall_quality_is_fairly_poor\": \"there are missing references, there are incorrect claims, the empirical validation is insufficient.\\n\\nPros\\n-- The motivation is very good. We need to improve the way we compare feature learning methods.\\n-- The filters visualization is nice.\\n\\nCons\\n-- It is debatable whether the chosen task is any better for assessing the quality of feature learning methods. The paper almost suggested a better solution in the introduction: we should compare across several tasks (from low level vision like matching to high level vision like object classification). If a representation is better across several tasks, then it must capture many relevant properties of the input.\\nIn other words, it is always possible to tweak a learning algorithm to give good results on one dataset, but it is much more interesting to see it working well across several different tasks after training on generic natural images, for instance. 
\\n-- The choice of the feature learning methods is questionable, why are only generative models considered here? The authors do mention that other methods were tried and worked worse, however it is hard to believe that more discriminative approaches work worse on the chosen task. In particular, knowing the matching task it seems that a method that trains using a ranking loss (learning nearby features for similar patches and far away features for distant inputs) should work better. See:\\nH. Mobahi, R. Collobert, J. Weston. Deep Learning from Temporal Coherence in Video. ICML 2009.\\n-- The overall results are pretty disappointing. Feature learning methods do not outperform the best engineered features. They do not outperform even if the comparison is unfair: for instance the authors use 128 dimensional SIFT but much larger dimensionality for the learned features. Besides, the authors do not take into account time, neither the training time nor the time to extract these features. This would also be considered in the evaluation.\", \"more_detailed_comments\": \"-- Missing references.\\nIt is not true that feature learning methods have never been assessed quantitatively without supervised fine tuning. On a low level vision task, I would refer to:\\nLearning to Align from Scratch\\nGary Huang, Marwan Mattar, Honglak Lee, Erik Learned-Miller.\\nIn Advances in Neural Information Processing Systems (NIPS) 25, 2012.\\nAnother missing reference is \\n2011 Memisevic, R.\\nGradient-based learning of higher-order image features. \\nInternational Conference on Computer Vision (ICCV 2011).\\nand other similar papers where Memisevic trains features that relate pairs of image patches.\\n--ROC curves should be reported at least in appendix, if not in the main text.\\n-- I do not understand why SIFT results on tab 1 a) differs from those in tab. 1 b).\"}", "{\"title\": \"review of Unsupervised Feature Learning for low-level Local Image Descriptors\", \"review\": \"his paper proposes a dataset to benchmark the correspodence problem\\nin computer vision. The dataset consists of image patches that have\\ngroundtruth matching pairs (using separate algorithms). Extensive\\nexperiments show that RBMs perform well compared to hand-crafted\\nfeatures.\\n\\nI like the idea of using itermediate evaluation metrics to measure the\\nprogress of unsupervised feature learning and deep learning. That\\nsaid, comparing the methods on noisy groundtruth (results of other\\nalgorithms) may have some bias.\\n\\nThe experiments could be made stronger if algorithms such as\\nAutoencoders or Kmeans (Coates et al, 2011, An Analysis of\\nSingle-Layer Networks in Unsupervised Feature Learning) are\\nconsidered.\\n\\nIf we can consider the groundtruth as clean, will supervised learning\\na deep (convolutional) network using the groundtruth produce better\\nresults?\"}", "{\"reply\": \"Deare954,\\nthank you for your detailed feedback.\\n\\nWe don't argue that the chosen task should replace existing \\nbenchmarks. Instead, we think that it supplements these, because \\nit covers aspects of unsupervised feature learning that have been \\nignored so far. Note that by avoiding any subsequent supervision \\nwe not only think of supervised fine tuning of the learnt architecture \\nbut rather no supervised learning on the representations at all \\n(e.g. like it is still done in [R2]). 
This is hopefully clearer in \\nversion 3 of the paper, we removed words like 'refinement' \\nand 'fine tuning'.\\n\\nThank you for pointing out missing references [R1, R2, R3]. We \\nadded [R2, R4, R5] to the paper in order to avoid the impression \\nwe are not aware of these approaches (we think that R4 fits better \\nthan R1 and R5 better than R3). We were, but did not mention \\nthese approaches because they are (i) relying on a supervised signal \\nand/or (ii) are concerned with high-level correspondences (we consider\\nfaces as high-level entities). Current work investigates some of these \\nmethods, because utilizing the available pairing information should \\nbe beneficial with respect to a good overall performance. We are not \\narguing that discriminative methods work worse on this dataset. \\nHowever, in this paper we are not striving to achieve state-of-the-art\", \"results\": \"We investigate a new benchmark for unsupervised learning and\\ntest how good existing unsupervised methods do. We tried to make the \\nanalysis part in version 3 of the paper more clearer.\\n\\nWe don't think that our claims are incorrect: We manage to perform \\ncomparable to SIFT when the size of the representation is free. It is \\nnot clear if for standard distance computations a bigger representations \\n(in particular a sparse one) is actually an advantage. We also manage \\nto perform better than several well known compact descriptors when we \\nbinarize the learnt representations.\\n\\nWe also don't think that the evaluation is insufficient. The time to \\nextract the features will be clearly dominated by the SIFT keypoint \\ndetector, because computing a new representation given a patch is a \\nsequence of matrix operations. Training times are added to the new\\nversion of the paper. ROC curves will be in a larger technical \\nreport that describes in more detail the performance of a bigger \\nnumber of feature learning algorithms (both supervised and unsupervised)\\non this dataset.\\n\\nThank you for pointing out a missing experiment, training on general \\nnatural image patches (not extracted around keypoints) and then \\nevaluating on the dataset. We are trying to incorporate results for \\nthis experiment in the final version of the paper. It should also be \\nvery interesting to experiment with the idea of unsupervised\\nalignment [R2], especially as every patch implicitly has already \\nsome general alignment information from its keypoint.\\n\\nIn Table 1b, SIFT is not normalized and used as a 128 byte descriptor \\n(in Table 1a a 128 double descriptor (with normalized entries) is used).\\n\\nA new version (arxiv version 3) of the paper is uploaded on March 11.\\n\\n[R1] H. Mobahi, R. Collobert, J. Weston. Deep Learning from Temporal Coherence in Video.\\n[R2] Gary Huang, Marwan Mattar, Honglak Lee, Erik Learned-Miller. Learning to Align from Scratch.\\n[R3] Memisevic, R. Gradient-based learning of higher-order image features.\\n[R4] S. Chopra, R. Hadsell, and Y. LeCun. Learning a similarity metric discriminatively, with application to face verification.\\n[R5] J. Susskind, R. Memisevic, G. Hinton, and M. Pollefeys. Modeling the joint density of two images under a variety of transformations.\"}", "{\"reply\": \"Dear f716,\\nthank you for your feedback. We evaluated more models\\nthan shown in Table 1, but they perform not as good as\\nspGRBM so we decided to leave those out (from the Table) in order to\\navoid clutter. 
The models are mentioned in section 3.5\\nof the paper (the arxiv version 2 of the paper).\\n\\nWe are currently running experiments with deep convolutional\\nnetworks to determine how much improvement supervision \\nsignals can achieve.\\n\\nWe uploaded a new version (on March 11) that changes some\\nbits of the presentation. We also evaluated K-Means on the\\ndataset (it is mentioned under 'Other models', because its\\nperformance is below the one frome spGRBM).\"}", "{\"title\": \"review of Unsupervised Feature Learning for low-level Local Image Descriptors\", \"review\": \"This paper is a survey of unsupervised learning techniques applied to the unsupervised task of descriptor matching. Various methods such as Gaussian RBMs, sparse RBMs, and mcRBMs were applied to image patches and the resulting feature vectors were used in a matching task. These methods were compared to standard hand-crafted descriptors such as SIFT, SURF, etc.\\n\\nPros\\nProvides a survey of descriptors for matching pairs of image patches.\\n\\nCons\\nIt is not clear what the purpose of the paper is. The paper compares several learning algorithms on the task of what essentially seems like clustering image patches to find their correspondences. The ground truth correspondences of the dataset were found by clustering the image patches to find correspondences... In this paper, simple clustering methods were not compared to such as kmeans or sparse coding which are less complicated models than RBMs and are meant for finding correspondences. Additionally, training in a supervised way makes much more sense for finding correspondences.\\n\\nIt is not clear from the paper alone what is considered at match between descriptors? Is it the distance being below a threshold, the pair of descriptors being closer than any other pair of descriptors, etc.?\\n\\nThe preprocessing of the image patches seems different for each method. This could lead to wildly different scales of the input pixels and thus the corresponding representations of the various methods.\\n\\nIn section 3.3 it is mentioned that it is surprising that L1 normalization works better because sparsity hurts classification typically. However, the sparsity in the paper is directly before the distance calculation, and not before being fed as input to a classifier which is a different setup and would thus be expected to behave differently with sparsity. This is the typical setup in which sparsity is found to hurt classification performance because information is being thrown away before the classifier is used.\", \"novelty_and_quality\": [\"This paper is not novel in that it is survey of prior work applied to matching descriptors. It is well written but does not appear to apply to a wide audience as other papers have done a comparison of unsupervised methods in the past, for example:\", \"A. Coates, H. Lee, and A. Ng. An analysis of single-layer networks in unsupervised feature learning. In Proc. AISTATS, 2011.\", \"A. Coates and A. Ng. The importance of encoding versus training with sparse coding and vector quanti- zation. In Proc. ICML, 2011.\"]}" ] }
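Several exchanges above hinge on how a "false positive rate for a fixed true positive rate" is computed on the patch-correspondence benchmark of Brown et al. The sketch below shows one standard way to obtain that number from descriptor distances over labelled match/non-match pairs; the 95% operating point and the Euclidean distance are assumptions made for the example rather than details taken from the paper.

```python
import numpy as np

def fpr_at_fixed_tpr(desc_a, desc_b, is_match, tpr_target=0.95):
    """False-positive rate at a fixed true-positive rate for a patch-matching task.

    desc_a, desc_b : (n, d) arrays, descriptors of the two patches in each pair
    is_match       : (n,) boolean array, True when the pair shows the same 3D point
    A pair is declared a match when its descriptor distance falls below a
    threshold; the threshold is chosen so that tpr_target of the true matches
    are accepted, and the fraction of accepted non-matches is reported.
    """
    dist = np.linalg.norm(desc_a - desc_b, axis=1)          # smaller = more similar
    match_dist = np.sort(dist[is_match])
    thresh = match_dist[int(np.ceil(tpr_target * len(match_dist))) - 1]
    return float(np.mean(dist[~is_match] <= thresh))

# toy usage with synthetic descriptors (a real run would use the labelled
# patch pairs of the dataset discussed in the reviews above)
rng = np.random.RandomState(0)
a = rng.randn(1000, 128)
b = a + 0.1 * rng.randn(1000, 128)      # first half: true correspondences
b[500:] = rng.randn(500, 128)           # second half: unrelated patches
print(fpr_at_fixed_tpr(a, b, np.arange(1000) < 500))
```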
-4IA4WgNAy4Wx
What Regularized Auto-Encoders Learn from the Data Generating Distribution
[ "Guillaume Alain", "Yoshua Bengio" ]
What do auto-encoders learn about the underlying data generating distribution? Recent work suggests that some auto-encoder variants do a good job of capturing the local manifold structure of data. This paper clarifies some of these previous intuitive observations by showing that minimizing a particular form of regularized reconstruction error yields a reconstruction function that locally characterizes the shape of the data generating density. We show that the auto-encoder captures the score (derivative of the log-density with respect to the input), along with the second derivative of the density and the local mean associated with the unknown data-generating density. This is the second result linking denoising auto-encoders and score matching, but in a way that is different from previous work, and can be applied to the case when the auto-encoder reconstruction function does not necessarily correspond to the derivative of an energy function. The theorems provided here are completely generic and do not depend on the parametrization of the auto-encoder: they show what the auto-encoder would tend to if given enough capacity and examples. These results are for a contractive training criterion we show to be similar to the denoising auto-encoder training criterion with small corruption noise, but with contraction applied on the whole reconstruction function rather than just the encoder. Similarly to score matching, one can consider the proposed training criterion as a convenient alternative to maximum likelihood, i.e., one not involving a partition function.
[ "data", "distribution", "density", "learn", "reconstruction function", "derivative", "training criterion", "recent work", "variants", "good job" ]
conferenceOral-iclr2013-conference
https://openreview.net/pdf?id=-4IA4WgNAy4Wx
https://openreview.net/forum?id=-4IA4WgNAy4Wx
ICLR.cc/2013/conference
2013
{ "note_id": [ "mmLgxpNpu1xGP", "kGNDPAwn1jGUc", "EEBiEfDQjdwft", "1WIBWMxZeG4UP", "fftnhM9InbLMv", "CC5h3a1ESBCav" ], "note_type": [ "comment", "review", "comment", "review", "comment", "review" ], "note_created": [ 1363216980000, 1362214560000, 1363217640000, 1362368160000, 1363217640000, 1362321540000 ], "note_signatures": [ [ "Guillaume Alain, Yoshua Bengio" ], [ "anonymous reviewer f62a" ], [ "Guillaume Alain, Yoshua Bengio" ], [ "anonymous reviewer 7ffb" ], [ "Guillaume Alain, Yoshua Bengio" ], [ "anonymous reviewer 4222" ] ], "structured_content_str": [ "{\"reply\": \"> It's interesting that in the classical CAE, there is an implicit contractive effect on g() via the side effect of tying the weights whereas in the form of the DAE presented, g() is explicitly made contractive via r(). Have you investigated the effective difference?\\n\\nNot really, no. The results that we have for general autoencoders r does not even assume that r is decomposable into two meaningful steps (encode, decode). However, in our experiments we found better results (due to optimization issues) with untied weights (and a contractive or denoising penalty on the whole of r(.)=decoder(encoder(.)) rather than just the encoder).\\n\\nWe have also added (in new sec. 3.2.3) a brief discussion of how these new results (on r(x)-x estimating the score) contradict previous interpretations of the reconstruction error of auto-encoders (Ranzato & Hinton NIPS 2007) as being akin to an energy function. Indeed whereas both interpretations agree on having a low reconstruction error at training examples, the score interpretation suggests (and we see it experimentally) other (median) regions that are local maxima of density, where the reconstruction error is also low.\\n\\n\\n> Although in the caption, you mention the difference between upper/lower and left/right subplots in Fig 4, I would prefer those (model 1/model 2) to be labeled directly on the subplots, it would just make for easier parsing.\\n\\nThe section with Figure 4 has been edited and we are now showing only two plots.\\n\\nWe have made all the suggested changes regarding typos and form. Please also have a look at a new short section (now identified as 3.2.5) that we just added in.\"}", "{\"title\": \"review of What Regularized Auto-Encoders Learn from the Data Generating Distribution\", \"review\": \"Many unsupervised representation-learning algorithms are based on minimizing reconstruction error. This paper aims at addressing the important questions around what these training criteria actually learn about the input density.\", \"the_paper_makes_two_main_contributions\": \"it first makes a link between denoising autoencoders (DAE) and contractive autoencoders (CAE), showing that the DAE with very small Gaussian corruption and squared error is actually a particular kind of CAE (Theorem 1). Then, in the context of the contractive training criteria, it answers the question 'what does an auto-encoder learn about the data-generating distribution': it estimates both the first and second derivatives of the log-data generating density (Theorem 2) as well as other various local properties of this log-density. 
An important aspect of this work is that, compared to previous work that linked DAEs to score matching, the results in this paper do not require the reconstruction function of the AE to correspond to the score function of a density, making these results more general.\", \"positive_aspects_of_the_paper\": [\"A pretty theoretical paper (for representation learning) but well presented in that most of the heavy math is in the appendix and the main text nicely presents the key results\", \"Following the theorems, I like the way in which the various assumptions (perfect world scenario) are gradually pulled away to show what can still be learned about the data-generating distribution; in particular, the simple numerical example (which could be easily re-implemented) is a nice way to connect the abstractness of the result to something concrete\"], \"negative_aspects_of_the_paper\": \"* Since the results heavily rely on derivatives with respect to the data, they only apply to continous data (extensions to discrete data are mentioned as future work)\\n\\nComments, Questions\\n--------\\n\\nIt's interesting that in the classical CAE, there is an implicit contractive effect on g() via the side effect of tying the weights whereas in the form of the DAE presented, g() is explicitly made contractive via r(). Have you investigated the effective difference?\\n\\nMinor comments, typos, etc\\n--------------------------\\nFig 2 - green is not really green, it's more like turquoise\\n - 'high-capcity' -> 'high-capacity'\\n - the figure makes reference to lambda but at this point in the paper, lambda is yet to be defined\\n\\nobjective function for L_DAE (top of p4) - last term o() coming from the Taylor expansion is explicitly discussed in appendix (and perhaps obvious here) but is not explicitly defined in the main text\\n\\nRight before 3.2.4 'high dimensional <data> (such as images)' \\n\\nAlthough in the caption, you mention the difference between upper/lower and left/right subplots in Fig 4, I would prefer those (model 1/model 2) to be labeled directly on the subplots, it would just make for easier parsing.\"}", "{\"reply\": \"> I think this is quite an important result. even though limited to this specific type of model\\n\\nAs argued in a previous response (to reviewer 4222), we believe that at least at a qualitative level the same is true in general of regularized auto-encoders. We copy here the response: \\n'We have worked on the denoising/contracting auto-encoders with squared error because we were able to prove our results with them, but we believe that other regularized auto-encoders (even those with discrete inputs) also estimate something related to the score, i.e., the direction in input space in which probability increases the most. The intuition behind that statement can be obtained by studying figure 2: the estimation of this direction arises out of the conflict between reconstructing training examples well and making the auto-encoder as constant (regularized) as possible.'\\nWe have added a brief discussion in the conclusion about how we believe these results could be extended to models with discrete inputs, following the tracks of ratio matching (Hyvarinen 2007).\\n\\nWe have also added (in new sec. 3.2.3) a brief discussion of how these new results (on r(x)-x estimating the score) contradict previous interpretations of the reconstruction error of auto-encoders (Ranzato & Hinton NIPS 2007) as being akin to an energy function. 
Indeed whereas both interpretations agree on having a low reconstruction error at training examples, the score interpretation suggests (and we see it experimentally) other (median) regions that are local maxima of density, where the reconstruction error is also low.\\n\\n> I find the experiment shown in Figure 4 somewhat confusing.\\n\\nWe have addressed this concern that many of the reviewers had. The whole section 3.2.3 has been edited and we decided to remove two of the plots which may have introduced confusion. Reviewers seem to focus on the difference between the two models and wanted to know why the outcomes were different. They were only different because of the non-convexity of the problem and the dependance on initial conditions (along with the random noise used for training). At the end of the day, the point is that the vector field points in the direction of the energy gradient, and that is illustrated nicely by the two plots left (far and close distance). \\n\\n\\n> Section 3.2.4. I am not clear what is the importance of this section. It seems to state the relationship between the score and reconstruction derivative.\\n\\nAre you referring to section 3.3 ? If you are indeed referring to section 3.2.4, the idea there is that it is possible to start the investigation from a trained DAE where the noise level for the training is unknown to us (but it is known by the person who trained the DAE). In that case, we would be in a situation where we the best that could be done was to recover the energy function gradient up to a scaling constant.\\n\\n\\n> Is it possible to link these results and theory to other forms of auto-encoders, such as sparse auto-encoders or with different type of non-linear activation functions? It would be very useful to have similar analysis for more general types of auto-encoders too.\\n\\nSee our first response above.\\n\\nPlease also have a look at a new short section (now identified as 3.2.5) that we just added in.\"}", "{\"title\": \"review of What Regularized Auto-Encoders Learn from the Data Generating Distribution\", \"review\": \"The paper presents a method to analyse how and what the auto-encoder models that use reconstruction error together with a regularisation cost, are learning with respect to the underlying data distribution. The paper focuses on contractive auto-encoder models and also reformulates denoising auto-encoder as a form of contractive auto-encoder where the contraction is achieved through regularisation of the derivative of reconstruction error wrt to the input data. The rest of the paper presents a theoretical analysis of this form of auto-encoders and also provides couple of toy examples showing empirical support.\\n\\nThe paper is easy to read and the theoretical analysis is nicely split between the main paper and appendices. The details in the main paper are sufficient for the reader to understand the concept that is presented in the paper.\\n\\nThe theory and empirical data show that one can recover the true data distribution if using contractive auto-encoders of the given type. I think this is quite an important result. even though limited to this specific type of model, quantitative analysis of generative capabilities of auto-encoders have been limited.\\n\\nI find the experiment shown in Figure 4 somewhat confusing. The text suggests that the only difference between the two models is their initial conditions and optimisation hyper parameters. Is the main reason due to initial conditions or hyper parameters? 
Which hyper parameters? Is the difference in initial condition just a different random seed or different type of initialisation of the network? I think this requires more in depth explanation. Is it normal to expect such different solutions depending on initial conditions?\\n\\nSection 3.2.4. I am not clear what is the importance of this section. It seems to state the relationship between the score and reconstruction derivative.\\n\\nIs it possible to link these results and theory to other forms of auto-encoders, such as sparse auto-encoders or with different type of non-linear activation functions? It would be very useful to have similar analysis for more general types of auto-encoders too.\"}", "{\"reply\": \"> It would be good to compare these plots with other regularizers and show that getting log(p) for contractive one is somehow advantageous.\\n\\nWe have worked on the denoising/contracting auto-encoders with squared error because we were able to prove our results with them, but we believe that other regularized auto-encoders (even those with discrete inputs) also estimate something related to the score, i.e., the direction in input space in which probability increases the most. The intuition behind that statement can be obtained by studying figure 2: the estimation of this direction arises out of the conflict between reconstructing training examples well and making the auto-encoder as constant (regularized) as possible.\\n\\nOther regularizers (e.g. cross-entropy) as well as the challenging case of discrete data are in the back of our minds and we would very much like to extend mathematical results to these settings as well.\\nWe have added a brief discussion in the conclusion about how we believe these results could be extended to models with discrete inputs, following the tracks of ratio matching (Hyvarinen 2007).\\n\\nWe have also added (in new sec. 3.2.3) a brief discussion of how these new results (on r(x)-x estimating the score) contradict previous interpretations of the reconstruction error of auto-encoders (Ranzato & Hinton NIPS 2007) as being akin to an energy function. Indeed whereas both interpretations agree on having a low reconstruction error at training examples, the score interpretation suggests (and we see it experimentally) other (median) regions that are local maxima of density, where the reconstruction error is also low.\\n\\n> it would be good to know something not in the limit of penalty going to zero\\n\\nWe agree. We did a few artificial data experiments. In fact, we ran the experiment shown in section 3.2.2 using values of lambda ranging from 10^-6 to 10^2 to observe the behavior of the optimal solutions when the penalty factor varies smoothly. The optimal solution degrades progressively into something comparable to what is shown in Figure 2. It becomes a series of increasing plateaus matching the density peaks. Regions of lesser density are used to 'catch up' with the fact that the reconstruction function r(x) should be relatively close to x.\\n\\n\\n> Figure 4. - 'Top plots are for one model and bottom plots for another' - what are the two models? It would be good to specify this in the figure, e.g. denosing autoencoders with different initial conditions and parameter settings.\\n\\nWe have addressed this concern that many of the reviewers had. The whole section 3.2.3 has been edited and we decided to remove two of the plots which may have introduced confusion. 
Reviewers seem to focus on the difference between the two models and wanted to know why the outcomes were different. They were only different because of the non-convexity of the problem and the dependance on initial conditions (along with the random noise used for training). At the end of the day, the point is that the vector field points in the direction of the energy gradient, and that is illustrated nicely by the two plots left (far and close distance). \\n\\n> Section 3.2.5 is important and should be written a little more clearly.\\n\\nWe have reworked that section (now identified as 3.2.6), to emphasize the main point: whereas Vincent 2011 showed that denoising auto-encoders with a particular form estimated the score, our results extend this to a very large family of estimators (including the non-parametric case). The section also shows how to interpret Vincent's results so as to show that any auto-encoder whose reconstruction function is a derivative of an energy function can be shown to estimate a score. Instead, the rest of our paper shows that we achieve an estimator of the score even without that strong constraint on the form of the auto-encoder.\\n\\n> I would suggest deriving (13) in the appendix directly from (11) without having the reader recall or read about Euler-Lagrange equations\\n\\nWe must admit to not having understood the hints that you have given us. If indeed there was such a way to, as you say, spare the reader the headaches of Euler-Lagrange, we agree that it would be an interesting approach. \\n\\n> You don't actually derive formulas the second moments in the appendix like you do for the first moment, you mean they can similarly be derived?\\n\\nYes, an asymptotic expansion can be derived in a similar way for the second moment. That derivation is 2 to 3 times longer and is not very useful in the context of this paper.\\n\\nPlease also have a look at a new short section (now identified as 3.2.5) that we just added in.\"}", "{\"title\": \"review of What Regularized Auto-Encoders Learn from the Data Generating Distribution\", \"review\": \"This paper shows that we can relate the solution of specific autoencoder to the data generating distribution. Specifically solving for general reconstruction function with regularizer that is the L2 penalty of reconstruction contraction relates the reconstruction function derivative of the data probability log likelihood. This is in the limit of small regularization. The paper also shows that in the limit of small penalty this autoencoder is equivalent to denoising autoencoder with small noise.\\n\\nSection 3.2.3: You get similar attractive behavior using almost any autoencoder with limited capacity. The point of your work is that with the specific form of regularization - square norm of contraction of r - the r(x)-x relates to derivative of log probability (proof seem to require it - it would be interesting to know what can be said about other regularizers). It would be good to compare these plots with other regularizers and show that getting log(p) for contractive one is somehow advantageous. Otherwise this section doesn't support this paper in any way.\\n\\nAs authors point out, it would be good to know something not in the limit of penalty going to zero. At least have some numerical experiments, for example in 1d or 2d. \\n\\nFigure 4. - 'Top plots are for one model and bottom plots for another' - what are the two models? It would be good to specify this in the figure, e.g. 
denoising autoencoders with different initial conditions and parameter settings.\\n\\nSection 3.2.5 is important and should be written a little more clearly. \\n\\nI would suggest deriving (13) in the appendix directly from (11) without having the reader recall or read about Euler-Lagrange equations, and it might actually turn out to be simpler. Differentiating the first term with respect to r(x) gives r(x)-x. For the second term one moves the derivative to the other side using integration by parts (and dropping the boundary term) and then just applying it to the product p(x)dr/dx resulting in (13).\\n\\nMinor - twice you say in the appendix that the proof is in the appendix (e.g. after statement of theorem 1)\\n\\nThe second last sentence in the abstract is uncomfortable to read.\\n\\nThis is probably not important, but can we assume that r given by (11) actually has a Taylor expansion in lambda? (probably, but in the spirit of proving things).\\n\\nYou don't actually derive formulas the second moments in the appendix like you do for the first moment, you mean they can similarly be derived?\"}" ] }
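For readers skimming the thread above, the two relationships under discussion can be restated compactly. This is a paraphrase under the simplest assumptions mentioned in the reviews (squared reconstruction error, small Gaussian corruption or small penalty weight, smooth density p); the precise conditions are those stated in the paper itself.

```latex
% Contractive criterion discussed above: the penalty is on the whole
% reconstruction function r (encoder composed with decoder), with weight
% \sigma^2 playing the role of the small DAE corruption variance:
\mathcal{L}(r) \;=\; \mathbb{E}_{p(x)}\!\left[\, \|x - r(x)\|^{2}
    \;+\; \sigma^{2}\,\Big\|\tfrac{\partial r(x)}{\partial x}\Big\|_{F}^{2} \right].
% Its minimizer, in the limit of a small penalty, satisfies
r^{*}(x) \;=\; x \;+\; \sigma^{2}\,\frac{\partial \log p(x)}{\partial x} \;+\; o(\sigma^{2}),
% i.e. r^{*}(x) - x is (up to the scale \sigma^{2}) the score of the
% data-generating density, which is the sense in which the reviews say the
% auto-encoder "estimates the score" rather than an energy.
```

In particular, a small ||r(x) - x|| marks any point where the score is near zero — density maxima near the data, but also minima and saddle regions between them — which is the observation Bengio raises again in the Saturating Auto-Encoder thread below.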
yGgjGkkbeFSbt
Saturating Auto-Encoder
[ "Ross Goroshin", "Yann LeCun" ]
We introduce a simple new regularizer for auto-encoders whose hidden-unit activation functions contain at least one zero-gradient (saturated) region. This regularizer explicitly encourages activations in the saturated region(s) of the corresponding activation function. We call these Saturating Auto-Encoders (SATAE). We show that the saturation regularizer explicitly limits the SATAE's ability to reconstruct inputs which are not near the data manifold. Furthermore, we show that a wide variety of features can be learned when different activation functions are used. Finally, connections are established with the Contractive and Sparse Auto-Encoders.
[ "satae", "simple new regularizer", "activation functions", "least", "region", "regularizer", "activations", "saturated region", "corresponding activation function", "saturation regularizer" ]
conferencePoster-iclr2013-conference
https://openreview.net/pdf?id=yGgjGkkbeFSbt
https://openreview.net/forum?id=yGgjGkkbeFSbt
ICLR.cc/2013/conference
2013
{ "note_id": [ "x9pbTj7Nbg9Qs", "__krPw9SreVyO", "UNlcNgK7BCN9v", "zOUdY11jd_zJr", "NNd3mgfs39NaH", "MAHULigTUZMSF", "pn6HDOWYfCDYA", "BSYbBsx9_5Suw" ], "note_type": [ "review", "review", "review", "review", "review", "comment", "review", "review" ], "note_created": [ 1361902200000, 1363749480000, 1363840020000, 1362593760000, 1362361200000, 1363043520000, 1362779100000, 1361946900000 ], "note_signatures": [ [ "Yoshua Bengio" ], [ "Ross Goroshin" ], [ "Ross Goroshin" ], [ "anonymous reviewer 5bc2" ], [ "anonymous reviewer 5955" ], [ "Sixin Zhang" ], [ "Rostislav Goroshin, Yann LeCun" ], [ "anonymous reviewer 3942" ] ], "structured_content_str": [ "{\"review\": [\"This is a cool investigation in a direction that I find fascinating, and I only have two remarks about minor points made in the paper.\", \"Regarding the energy-based interpretation (that reconstruction error can be thought of as an energy function associated with an estimated probability function), there was a recent result which surprised me and challenges that view. In http://arxiv.org/abs/1211.4246 (What Regularized Auto-Encoders Learn from the Data Generating Distribution), Guillaume Alain and I found that denoising and contractive auto-encoders (where we penalize the Jacobian of the encoder-decoder function r(x)=decode(encode(x))) estimate the *score* of the data generating function in the vector r(x)-x (I should also mention Vincent 2011 Neural Comp. with a similar earlier result for a particular form of denoising auto-encoder where there is a well-defined energy function). So according to these results, the reconstruction error ||r(x)-x||^2 would be the magnitude of the score (derivative of energy wrt input). This is quite different from the energy itself, and it would suggest that the reconstruction error would be near zero both at a *minimum* of the energy (near training examples) AND at a *maximum* of the energy (e.g. near peaks that separate valleys of the energy). We have actually observed that empirically in toy problems where one can visualize the score in 2D.\", \"Regarding the comparison in section 5.1 with the contractive auto-encoder, I believe that there is a correct but somewhat misleading statement. It says that the contractive penalty costs O(d * d_h) to compute whereas the saturating penalty only costs O(d_h) to compute. This is true, but since computing h in the first place also costs O(d * d_h) the overhead of the contractive penalty is small (it basically doubles the computational cost, which is much less problematic than multiplying it by d as the remark could lead a naive reader to believe).\"]}", "{\"review\": \"We thank the reviewers for their constructive comments.\\n\\nA revised version of the paper has been submitted to arXiv and should be available shortly. \\n\\nIn addition to minor corrections and additions throughout the paper, we have added three new subsections:\\n\\n(1) a potential extension of the SATAE framework to include differentiable\\nfunctions without a zero-gradient region\\n\\n(2) experiments on the CIFAR-10 dataset\\n\\n(3) future work.\\n\\nWe have also expanded the introduction to better motivate our approach.\"}", "{\"review\": \"The revised paper is now available on arXiv.\"}", "{\"title\": \"review of Saturating Auto-Encoder\", \"review\": \"Although this paper proposes an original (yet trivial) approach to regularize auto-encoders, it does not bring sufficient insights as to why saturating the hidden units should yield a better representation. 
The authors do not elaborate on whether the SATAE is a more general principle than previously proposed regularized auto-encoders(implying saturation as a collateral effect) or just another auto-encoder in an already well crowded space of models (ie:Auto-encoders and their variants). In the last years, many different types of auto-encoders have been proposed and most of them had no or little theory to justify the need for their existence, and despite all the efforts engaged by some to create a viable theoretical framework (geometric or probabilistic) it seems that the effectiveness of auto-encoders in building representations has more to do with a lucky parametrisation or yet another regularization trick.\\n\\nI feel the authors should motivate their approach with some intuitions about why should I saturate my auto-encoders, when I can denoise my input, sparsify my latent variables or do space contraction? It's worrisome that most of the research done for auto-encoders has mostly focused in coming up with the right regularization/parametrisation that would yield the best 'filters'. Following this path will ultimately make the majority of people reluctant to use auto-encoders because of their wide variety and little knowledge about when to use what. The auto-encoder community should backtrack and clear the intuitive/theoretical noise left behind, rather than racing for the next new model.\"}", "{\"title\": \"review of Saturating Auto-Encoder\", \"review\": \"This paper proposes a novel kind of penalty for regularizing autoencoder training, that encourages activations to move towards flat (saturated) regions of the unit's activation function. It is related to sparse autoencoders and contractive autoencoders that also happen to encourage saturation. But the proposed approach does so more directly and explicitly, through a 'complementary nonlinerity' that depends on the specific activation function chosen.\", \"pros\": [\"a novel and original regularization principle for autoencoders that relates to earlier approaches, but is, from a certain perspective, more general (at least for a specific subclass of activation functions).\", \"paper yields significant insight into the mechanism at work in such regularized autoencoders also clearly relating it to sparsity and contractive penalties.\", \"provides a credible path of explanation for the dramatic effect that the choice of different saturating activation functions has on the learned filters, and qualitatively shows it.\"], \"cons\": [\"Proposed regularization principle, as currently defined, only seems to make sense for activation functions that are piecewise linear and have some perfectly flat regions (e.g. a sigmoid activation would yield no penalty!) This should be discussed.\", \"There is no quantitative measure of the usefulness of the representation learned with this principle. The usual comparison of classification or denoising performance based on the learned features, with those obtained with other autoencoder regularization principles would be a most welcome addition.\"]}", "{\"reply\": \"'complementary nonlinerity' is very interesting, it makes me think of wavelet, transforming autoencoder. one question i was asking is how to make use of the information that's 'thrown' away (say after applying the nonlinearity, or the low path filter), or maybe those information are just noise? in saturating AE, the complementary nonlinerity is the residue of the projection (formula 1). What's that projective space? 
why the projection is defined elementwise (cf. softmax -> simplex)? how general can the non-linearity be extended for general signal representation (say Scattering Convolution Networks) , and classfication. I am just curious ~\"}", "{\"review\": \"In response to 5bc2: the principle behind SATAE is a unification of the principles behind sparse autoencoders (and sparse coding in general) and contracting autoencoders.\\n\\nBasically, the main question with unsupervised learning is how to learn a contrast function (energy function in the energy-based framework, negative log likelihood in the probabilistic framework) that takes low values on the data manifold (or near it) and higher values everywhere else. \\n\\nIt's easy to make the energy low near data points. The hard part is making it higher everywhere else. There are basically 5 major classes of methods to do so: \\n1. bound the volume of stuff that can have low energy (e.g. normalized probabilistic models, K-means, PCA); \\n2 use a regularizer so that the volume of stuff that has low energy is as small as possible (sparse coding, contracting AE, saturating AE); \\n3. explicitly push up on the energy of selected points, preferably outside the data manifold, often nearby (MC and MCMC methods, contrastive divergence); \\n4. build local minima of the energy around data points by making the gradient small and the hessian large (score matching); \\n5. learn the vector field of gradient of the energy (instead of the energy itself) so that it points away from the data manifold (denoising autoencoder). \\n\\nSATAE, just like contracting AE and sparse modeling falls in category 2.\\n\\nBasically, if you auto-encoding function is G(X,W), X being the input, and W the trainable parameters, and if your unregularized energy function is E(X,W) = ||X - G(X,W)||^2, if G is constant when X varies along a particular direction, then the energy will grow quadratically along that direction (technically, G doesn't need to be constant, but merely to have a gradient smaller than one). The more directions G(X,W) has low gradient, the lower the volume of stuff with low energy.\\n\\nOne advantage of SATAE is its extreme simplicity. You could see it as a version of Contracting AE cut down to its bare bones.\\n\\nWe can always obfuscate this simple principle with complicated math, but how would that help? At some point it will become necessary to make more precise theoretical statements, but for now we are merely searching for basic principles.\"}", "{\"title\": \"review of Saturating Auto-Encoder\", \"review\": \"This paper proposes a regularizer for auto-encoders with\\nnonlinearities that have a zegion with zero-gradient. The paper\", \"mentions_three_nonlinearities_that_fit_into_that_category\": \"shrinkage,\\nsaturated linear, rectified linear.\\n\\nThe regularizer basically penalizes how much the activation deviates\\nfrom saturation. The insight is that at saturation, the unit conveys\\nless information compared to when it is in a non-saturated region.\\n\\nWhile I generally like the paper, I think it could be made a lot\\nstronger by having more experimental results showing the practical\\nbenefits of the nonlinearities and their associated regularizers.\\n\\nI am particularly interested in the case of saturated linear\\nfunction. It will be interesting to compare the results of the\\nproposed regularizer and the sparsity penalty. More concretely, f(x) =\\n1 would incur some loss under the conventional sparsity; whereas, the\\nnew regularizer does not. 
From the energy conservation point of view,\\nit is not appealing to maintain the neuron at high activation, and the\\nnew regularizer does not capture that. But it may be the case that,\\nfor a network to generalize, we only need to restrict the neurons to\\nbe in the saturation regions. Any numerical comparisons on some\\nclassification benchmarks would be helpful.\\n\\nIt would also be interesting to test the method on a\\nclassification dataset to see whether it makes a difference to use the new\\nregularizers.\"}" ] }
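A note on the regularizer discussed in this record: the reviews describe it as penalizing how far each hidden unit sits from a flat (zero-gradient) region of its activation function, via a "complementary nonlinearity". The sketch below is only an editorial illustration assembled from that description, not the authors' code: the complementary functions are taken to be the distance from the pre-activation to the nearest flat region of the activation, and the tied-weight decoder, the penalty weight `lam`, and all function names are assumptions of the sketch.

```python
import numpy as np

# Sketch of a saturation penalty for piecewise-linear activations: f_c(z)
# measures the distance from the pre-activation z to the nearest flat
# (zero-gradient) region of the activation f, so the penalty is zero exactly
# when the unit is saturated.

def shrink(z, t=0.5):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def shrink_c(z, t=0.5):
    # flat region of shrink is |z| <= t
    return np.maximum(np.abs(z) - t, 0.0)

def satlin(z):
    return np.clip(z, 0.0, 1.0)

def satlin_c(z):
    # flat regions of satlin are z <= 0 and z >= 1
    return np.maximum(np.minimum(z, 1.0 - z), 0.0)

def relu(z):
    return np.maximum(z, 0.0)

def relu_c(z):
    # flat region of relu is z <= 0, so the penalty reduces to an L1 penalty
    # on the non-negative activations (the sparsity connection noted above)
    return np.maximum(z, 0.0)

def satae_objective(x, W, b_enc, b_dec, f, f_c, lam=0.1):
    """Squared reconstruction error plus the saturation penalty (a sketch;
    tied decoder weights and the weight lam are assumptions)."""
    z = x @ W + b_enc            # pre-activations
    h = f(z)                     # hidden code
    x_hat = h @ W.T + b_dec      # linear decoder with tied weights
    recon = np.mean(np.sum((x - x_hat) ** 2, axis=1))
    saturation = np.mean(np.sum(f_c(z), axis=1))
    return recon + lam * saturation

# Toy usage with saturated-linear hidden units.
rng = np.random.default_rng(0)
x = rng.normal(size=(64, 20))
W = rng.normal(scale=0.1, size=(20, 50))
loss = satae_objective(x, W, np.zeros(50), np.zeros(20), satlin, satlin_c)
```

As one of the reviews above notes, a penalty of this kind is only meaningful for activations that have perfectly flat regions; a strict sigmoid has none, so the construction sketched here would not apply to it directly.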
bSaT4mmQt84Lx
On the number of inference regions of deep feed forward networks with piece-wise linear activations
[ "Razvan Pascanu", "Guido F. Montufar", "Yoshua Bengio" ]
This paper explores the complexity of deep feed forward networks with linear presynaptic couplings and rectified linear activations. This is a contribution to the growing body of work contrasting the representational power of deep and shallow network architectures. In particular, we offer a framework for comparing deep and shallow models that belong to the family of piece-wise linear functions, based on computational geometry. We look at a deep (two hidden layers) rectifier multilayer perceptron (MLP) with linear output units and compare it with a single-layer version of the model. In the asymptotic regime, as the number of units goes to infinity, if the shallow model has $2n$ hidden units and $n_0$ inputs, then the number of linear regions is $O(n^{n_0})$. A two-layer model with $n$ hidden units on each layer has $\Omega(n^{n_0})$ linear regions. We consider this a first step towards understanding the complexity of these models and argue that better constructions in this framework might provide more accurate comparisons (especially for the interesting case when the number of hidden layers goes to infinity).
[ "number", "deep feed", "linear activations", "deep", "inference regions", "complexity", "framework", "units", "infinity", "linear presynaptic couplings" ]
submitted, no decision
https://openreview.net/pdf?id=bSaT4mmQt84Lx
https://openreview.net/forum?id=bSaT4mmQt84Lx
ICLR.cc/2014/conference
2014
{ "note_id": [ "fjB3fGG430jr7", "8nQMn2_oFgvXP", "GqWRGvurSDqeX", "c_ej7ww5zf_BP", "dl7nPGGwxql6h", "n6U96C27ST6iZ", "qVWJL1t42vKik", "s5xX4vEkuAnIR", "zhkyU33YqKzMj", "_xtmg4TcjB_5l", "26C7gYY01ohqX", "ttNb0MnzpZ0_v" ], "note_type": [ "review", "review", "review", "comment", "review", "review", "review", "review", "review", "comment", "comment", "review" ], "note_created": [ 1392105960000, 1390216500000, 1390529820000, 1392071280000, 1389921060000, 1391861220000, 1390448460000, 1392137460000, 1390216500000, 1392750600000, 1392750720000, 1392750720000 ], "note_signatures": [ [ "anonymous reviewer 67e9" ], [ "Razvan Pascanu" ], [ "Razvan Pascanu" ], [ "Razvan Pascanu" ], [ "David Krueger" ], [ "anonymous reviewer 355b" ], [ "Alexandre Dalyac" ], [ "anonymous reviewer 2699" ], [ "Razvan Pascanu" ], [ "Razvan Pascanu" ], [ "Razvan Pascanu" ], [ "Razvan Pascanu" ] ], "structured_content_str": [ "{\"title\": \"review of On the number of inference regions of deep feed forward networks with piece-wise linear activations\", \"review\": \"This is a very interesting and relevant paper, attempting to prove that\\na deep neural net composed of rectified linear units and a linear output\\nlayer is potentially significantly more powerful than a single layer\\nnet with the same number of units.\\n\\nA strength of the paper is its constructive approach, building up\\nan understanding of the expressiveness of a model, in terms of the\\nnumber of regions it represents. It is also notable that the authors pull \\nin techniques from computational geometry for this construction.\\n\\nBut there are several problems with the paper. The writing is unclear,\\nand overall the paper feels like a preliminary draft, not ready for prime \\ntime.\\n\\nThe introduction can be tightened up. A more significant comment\\nconcerns the important attempt to give some intuition, at the top of\\npage 3. The paragraph starting with 'Specifically' doesn't make sense\\nto me. How can all but one hidden unit be 0 for different intervals\\nalong the real line? Each hidden unit will be active above (or below)\\nits threshold value, so many positive ones will be active at the same\\ntime. We could compose two hidden units to construct one that is only\\nactive within an interval in R, but I don't see how to do this in one\\nlayer. I must be missing something simple here. \\n\\nI think the basic intuition is that a higher-level unit can act as an\\nOR of several lower level regions, and gain expressivity by repeating\\nits operation on these regions, is the idea. But the construction is\\nnot clear. Also, one would expect that the ability to represent AND\\noperations would also lead to significant expressivity gains. Is this\\nalso part of the construction? These basic issues should be clarified.\\n\\nIn addition, it would be very helpful to have some concrete example\\nof a function that can be computed by a deep net of the sort analyzed\\nhere, which cannot be computed by a shallow net of the same size. \\nAs it stands the characterization of the difference between the deep \\nand equivalent one-layer network is too abstract to be very compelling.\\n\\nI also found the proof of Theorem 8 very hard to understand. This\\nis not a key problem, as the authors do a good job building up to\\nthis main theorem, in sections 3 and 4. 
But it does mean that I am\\nnot confident that the proof is correct.\\n\\nFinally, I would recommend exploring the relationship between the ideas\\nin this paper and the extensive work in circuit complexity that deals \\nwith multi-level circuits, for example the paper by Hajnal et al on 'Threshold circuits of bounded depth'.\"}", "{\"review\": \"David, thanks a lot for your comments. We will definitely consider them in the next version of the paper. I have to wonder however, which version of the paper did you look at? Version 2 (which is online since 6th Jan) does not have an equation 13, while the first draft did. My hope is that the version 2, that is online, already\\nanswers most of your questions. Note that we have a different construction that changes much of the flow of the paper as well as \\nthe final result.\", \"i_would_quickly_try_to_summarize_some_specific_answers_to_your_questions\": \"(1) Regarding the scope of the paper. We do not generalize Zaslavsky's Theorem to scenarios that arise in deep models, as you suggest. For a single layer rectifier MLP it turns out that each hidden unit partitions the space using a hyperplane. This means that the number of regions we have is the number of regions you can get from an arrangement of $n$ hyperplanes. The answer to this question is given by Zaslavsky's Theorem. This offers as an upper bound to the maximal number of regions a single hidden layer model has. To make our point (that deep models are more efficient) we only need to show now that there exist deep models that result in more regions. We do so by constructing a specific deep model for which we can compute lower bound to the number of linear regions it generates. Asymptotically this lower bound is much larger than the maximal number of regions a single layer model has. \\n\\n (2) 'region'/'region of linearity'/'input space partition space' means the same thing. It is a connected region of the input space U, such that within this region every hidden unit either stays positive or is equal to 0. Or in other words, within this region the MLP is a linear function with respect to its input.\\n\\n (3) Regarding figure 2, we do use a new construction now that is depicted in more detail. \\n\\n (4) Proposition 8 and 9 have been replaced completely. Proposition 8 describes a lower bound on how many new regions you can get within a layer, given the number of regions you have generated up to that layer. Proposition 9 used proposition 8 to give a lower bound on the total number of regions of a deep model.\\n\\n (5) Regarding the statement about the output activation function. We simply meant to say that it is sufficient to study models with linear output activation function to get a sense for MLPs with sigmoids or softmax output activations. To see this do the following. Consider you have some rectifier MLP with sigmoids at the output layer, and some target function $f$. Let $s$ stand for the sigmoid activation function. To get a sense of how well we can approximate $f$, we can look at how well we can approximate $s^{-1}(f)$ with a rectifer MLP that has a linear output activation function.\"}", "{\"review\": \"Your point is well taken and we are well aware of it, motivating current work that would provide a stronger bound. Within that new construction, even for reasonably models like 3 layers with only 2 n_0 units on each layer, we still get more regions in the deep model than in the shallow one. In that new theorem we count every region of the deep model. 
In the proof presented in the paper, we only look at a lower bound on the maximal number of regions for the deep model, a bound that is not necessarily tight. The reason is that we only count certain regions and ignore the rest. Unfortunately, counting all regions for the deep model construction given in the paper is difficult.\\n\\nThe second comment we would like to make is that we recover a high ratio of n to n_0 if the data live near a low-dimensional manifold (effectively like reducing the input size n_0). One-layer models can reach the upper bound of regions only by spanning all the dimensions of the input. That is, for any subspace of the input we cannot concentrate most of the regions in that subspace. If, as commonly assumed, data live near a lower dimensional manifold, then we care only about the number of regions we can get in the directions of that manifold. One way of thinking about it is to do a PCA on your data. You will have a lot of directions (say on MNIST) where few variations are seen in the data (and if you see some, it is mostly due to noise that you want to ignore). In such a situation you care about how many regions you have within the directions in which the data do change. In such situations n >> n_0, making the proof of the paper more relevant.\\n\\nLastly, we'll argue that the paper makes an important contribution in the asymptotic case and covers neural network architectures (with rectifiers) that were not previously covered. The paper is showing an important representational advantage from using deep models versus shallow ones, at least asymptotically. This kind of result is important for motivating the use of deep models. Although there are previous results with the same objective, they did not address the kind of commonly used non-linearity (rectified linear units) that we are able to cover here.\"}", "{\"reply\": \"Thank you for your comments Reviewer 355b.\\n\\n Regarding the number of parameters, indeed it is not clear what is a fair measure of capacity for these models. \\n\\n We can easily show that our results hold even when we enforce the number of parameters to be the same. We've added a note about this in the discussion. Specifically, to do so, we can look at how fast the ratio between the number of regions and the number of parameters grows for deep versus shallow models. We can see that in this situation we have something of the form $\Omega((n/n_0)^{k-1} n^{n_0-2})$ for deep models and $O(k^{n_0} n^{n_0-1})$ for a shallow model. The ratio still grows exponentially faster with $k$ for deep models and polynomially with $n$ when $n_0$ is fixed. \\n\\n Furthermore, as a comment to your question, we argue that the number of linear regions is a property that correlates with the representational power of the model, while the number of units (or parameters) is a measure of capacity. Our paper attempts to show that for the same capacity deep models can be more expressive. \\n\\n Regarding the generalization power of a deep model (versus a shallow one), this can be a tricky question to answer. Issues with the learning process only make the answer even more complicated. Addressing all of these things together, while it is very important for deep learning, is much more than what our paper is trying to do. \\n \\n For now our analysis is limited to the representational power of a deep model, which is different from its generalization power. 
The question we are trying to answer is the following: 'Consider the family of all possible shallow models of a certain size (capacity) and then pick from this family of functions the one that is closest to some function $f$. Do the same for the family of deep models of the same capacity. Which of these two picked models does a better job at approximating $f$?' \\n \\n Of course, if we do not bound the capacity within each family, the answer would be that both approximate $f$ arbitrarily well and then we cannot distinguish between them. This is true because we know both are universal approximators. \\n\\n Generalization, for piece-wise linear deep models, comes from the fact that while we can have more linear regions, these linear regions are restricted in some sense. Deep models rely on symmetry, repeating the partition done on some higher layer in different regions of the input. That means that the functions they can represent efficiently are highly structured, even though they look arbitrarily complicated. \\n\\n In other words, deep models can approximate much better than shallow models only certain families of functions (those that have a particular structure). If we try to approximate white noise, because it has no structure, deep models will not fare better than shallow ones. \\n \\n Formally analyzing this idea is a very interesting direction that we are\\n considering for future work. \\n\\n And as a final note, we have submitted to arxiv a new version of the paper. It will be available starting Tue, 11 Feb 2014 01:00:00 GMT.\", \"let_us_quickly_summarize_the_main_changes\": \"We have added section 4.1, which is a construction that only works for\\n $n=2n_0$. In this construction we pair the hidden units at each layer, except the last one, and make each pair behave as the absolute value of some coordinate. This will map each quadrant of the input space of the layer to the first quadrant, where the input space of the layer is the image of the input layer through all the layers below. Whichever way we partition the first quadrant in the higher layer, this partition will be repeated in all the other quadrants, resulting in $\Omega(2^{n_0(k-1)} n^{n_0})$ linear regions. This bound shows that deep models can be more efficient than shallow ones even for a small number of layers (like 2).\\n\\n We have added a paragraph (paragraph 2 in the conclusion) that talks about forcing the models to have the same number of parameters rather than the same number of units. \\n\\n Additional changes from the last submission (Mon 27 Jan):\\n\\n We have added a paragraph (paragraph 3) talking about how $n_0$ is affected by the observation that, in real tasks, data might live near a manifold of much lower dimension. \\n\\n We have added two paragraphs in the introduction (paragraphs 3 and 4 on page 2) motivating our chosen measure of flexibility, namely the number of linear regions.\"}", "{\"review\": \"I like the topic of this paper, but I found it very difficult to read.\\n\\nAlthough I don't have many specific suggestions along these lines, I feel like it could probably be made much easier to understand if the proofs were introduced with more simple English explanations of how the proofs work. It seems like the main issue is generally showing how Zaslavsky's Theorem can be applied to specific scenarios that arise using deep networks. 
\\n\\nI think you should define what you mean by a 'region' or 'region of linearity' or 'input space partition region' and use one term consistently (I think these are all the same thing?)\\n\\nIn figure 2, I would add plots of what lower levels look like, so we can see how the final plot emerges. \\n\\nI don't follow the proofs of Proposition 8 or 9.\\n\\nIn proposition 8, I think the first summation should be up to n_1 and summing n_2 choose k?\\n\\nI would move the definitions of big-O, Omega and Theta notation to the preliminaries.\\n\\nYou need to add Zaslavsky to the citations.\\n\\nI think equation (13) has a typo. Should 2n_0 be 2^{n_0}? \\n\\nI don't follow the part about how linearity is not a restriction in the Discussion; in fact, I'm not sure exactly what is meant by that statement.\", \"little_edits\": \"\", \"proof_of_lemma_4\": \"'the regions of linear behavior [...] ARE'\\n\\nbelow equation (3) 'the first layer has TWO operational modes'\", \"later_that_paragraph\": \"'gradient equal to vr_i' should just be r_i, I think.\", \"proposition_6\": \"your proof sketch only demonstrates the formula for r(A), not b(A); you should say that.\", \"section_4\": \"you define but never use b(n_0,n_1). You don't specify, but I assume that r and u mentioned below are the r(n_0,n_1), u(n_0,n_1) you define. I would use n = n_1 and d = n_0 in this definition, for clarity.\", \"discussion\": \"say 'e.g.' or 'for example', but not 'for e.g.'\"}", "{\"title\": \"review of On the number of inference regions of deep feed forward networks with piece-wise linear activations\", \"review\": \"This paper studies the representation power of deep nets with piecewise linear activation functions. The main idea is to show that deep net with the same number units can represent (in terms of generating linear regions) more complex mappings than the shallow model.\", \"this_is_an_interesting_paper\": \"it leverages known results in arranging hyperplanes in the space and then cleverly show how those can be used to show how linear regions can be learnt by multiple layers.\\n\\nWhile the theoretical results seem right, I could not help but wondering whether the comparison is 'fair': using the same number of units does not necessarily imply the deep net is 'restricted' in any way -- in fact, the deep net has more parameters than the shallow model. Is not that enough to argue (at least qualitatively) that deep net must have more representation power than the shallow model? (Of course, the value of the analysis is to show more precisely how many regions there are.)\\n\\nAdditionally, the learning process does not necessarily mean that the deep net indeed constructs that many regions --- thus, purely comparing the number of regions is unlikely to explain how well the model is able to generalize well than shallow model or how the model is prevented from overfitting.\\n\\nNonetheless, the paper presents a novel direction to pursuit to instigate further research.\"}", "{\"review\": \"sorry if I'm wrong, but it seems to me that typically, k is never large, and if n[0] is only ever large when n is also large, such that n/n[0] = O(1). in those circumstances, is there a significant difference in the number of linear regions between the 2 architectures?\\ufeff\"}", "{\"title\": \"Please pick a different title. 
These are feed-forward networks so calling the regions 'inference regions' doesn't make sense.\", \"review\": \"The authors of this paper analyse feed-forward networks of linear rectifier units (RELUs) in terms of the number of regions in which they act linearly. They give an upper bound on the number of regions for networks with a single hidden layer based on known results in geometry, and then show how deeper networks can have a much larger number of regions by constructing examples. The constructions form the main novel technical contribution, and they seem non-trivial and interesting.\\n\\nOverall I think this is a good and interesting paper. It is well written with the notable exception of the proof of theorem 8 and the latter half of the introduction. In most spots, the math is precise and accessible (to me anyway), the results nicely broken into lemmas, and the diagrams are very useful for providing intuition.\\n\\nThese results can be interpreted as separating networks with a single hidden layer from deep networks in terms of the types of functions they can efficiently compute. However, number of linear regions is a pretty abstract notion, and it isn't obvious what these results can say about the expressibility by neural nets of functions that we can actually write down. Do you know of any natural examples of functions that require a finite but super-exponential number of regions? \\n\\nUnfortunately, region counting can't say anything about the representability of functions defined on such input spaces of the form S^n0 where S is a finite set, since there are only |S|^n0 input values, and |S|^n0 << n^n0 = region upper bound.\", \"about_theorem_8\": \"After hours trying to understand the proof of Theorem 8 I gave up. However, I was able to use Prop 7, and intuition provided from the diagrams, to prove a slightly different version of Thm 8 myself, and so I think the result is correct, and the proof is probably trying to describe basically the same thing I came up with (except my proof went from the top layer down, instead of the bottom layer up). So while I don't doubt the correctness of the statement of Thm 8, but the write-up of the proof of Thm 8 needs to be completely redone to be understandable and intuitive. I don't think you need to make it 100% formal (Prop 7 isn't completely formal either, but it's fine as is), but you need to make it possible to understand with a reasonable amount of effort.\", \"detailed_comments\": \"---\", \"abs\": \"Why is it 'computational geometry' and not just 'geometry'? What is specifically computational about arrangements of hyperplanes?\", \"page_2\": \"Missing from the review of previous results about the power networks is all of the work done on threshold units (see the papers of Wolfgang Maass for example, or the seminal of Hajnal et al. proving lower bounds for shallow threshold networks). Unlike the single paper by Hastad et al. cited, none of these require the weights to be non-negative. Moreover, these results are hardly non-realistic, as neural networks with sigmoids can easily simulate thresholds, and under certain assumptions the reverse simulation can be done approximately and reasonably efficiently too.\\n\\nAlso missing from this review is recent work of Montufar et al. and Martens et al. analysing the expressive power of generative models (RBMs).\", \"beginning_of_page_3\": \"I have a hard time following this high-level discussion. I think this doesn't belong in the introduction, as it is too long and convoluted. 
Instead, I think you should include such discussion as intuition about your formal constructions *as you give them*. The way it is written right now, the discussion tries to be intuitive, precise, and comprehensive, and it doesn't really succeed at being any of these.\", \"page_3\": \"You should formally define what you mean by 'hyperplane' and 'arrangement'. In particular, a hyperplane is the set of points defined by the equation, not the equation itself. And if an arrangement is taken to be a set of hyperplanes (as per the usual definition), then the statement in Prop 6 isn't formal (although its meaning is still obvious). In particular, how does a ball S 'intersect' with a set of hyperplanes? Do you mean that it intersects with the union of the hyperplanes in the arrangement? I know these are nit-picky points, but if you should try to be technically precise.\", \"page_5\": \"You should explain the concept of general position precisely. I don't know what 'generic weights' is supposed to mean, the actual definition has to do with lack of colinearity. You might want to point out that any choice of hyperplanes can be infinitesimally perturbed so that they end up in general position.\", \"page_6\": \"The explanation after the statement of Prop 6 is much clearer. Perhaps you should just prove this (stronger) statement directly, and not the fairly opaque and abstract statement made in Prop 6.\", \"figure_3\": \"Why are there 2 dotted circles instead of just 1?\", \"page_7\": \"What do you mean by an 'enumeration' of a 2-dimensional arrangement?\", \"page_9\": \"How can rho_i^(2)(R_i^(1)) be a 'subset' of an subspace that lives in R^(n1)? rho_i^(2)(R_i^(1)) is going to be a set of vectors living in R^(n0)!\", \"and_i_have_another_question\": \"how is Prop 7 actually used here? Merely to establish the existence of network where there is n/n0 regions that each only turn on for different groups Ii? Isn't it trivial to construct such a thing? i.e. take the input weights to the units in a given group to be all the same, and make sure the square n0xn0 matrix formed by taking the weight vector for each group is full-rank (e.g. the identity matrix)?\\n\\nI suspect the reason this doesn't work is that the dimensions would collapse to 1 since each unit in a group would behave identically. However, one could then perturb the final expanded weight matrix to get a general position matrix, so that the subspace associated with each group wouldn't collapse to dimension 1. Is there anything wrong with this?\", \"page_10\": \"What do you mean when you say that 'this arrangement is repeated once in each region'? What does it mean for an arrangement to be 'repeated in a region'?\\n\\nI feel like the proof becomes mostly a proof-by-diagram as this point. Maybe you should have started off with this kind of diagram and the intuition of 'duplicating regions', explaining how composing piece-wise linear functions can achieve this kind of thing (which is really the critical point that gets glossed over), and then proceeding to show that you could formally construct each 'piece' required to do this. And you should have done the construction starting at the top layer going down.\\n\\nHaving reconstructed the proof in a way that I actually understood it, it seemed that one could also proof that one can (prod_{i=1}^{k-1} n_i)/2^{k-1} * sum_i=0^2 (n_k choose i) regions, which in some cases might be a lot larger than the expression you arrived at. 
Unlike your Thm 8 does, this version would actually need to use the fact the constructions are 2-dimensional in Prop 7.\", \"page_11\": \"The asymptotic analysis is just very routine and uninteresting computations and should be in the appendix. It breaks the flow of your paper. I would much prefer to see more detailed commentary about the implications of Thm 8.\"}", "{\"review\": \"David, thanks a lot for your comments. We will definitely consider them in the next version of the paper. I have to wonder however, which version of the paper did you look at? Version 2 (which is online since 6th Jan) does not have an equation 13, while the first draft did. My hope is that the version 2, that is online, already\\nanswers most of your questions. Note that we have a different construction that changes much of the flow of the paper as well as \\nthe final result.\", \"i_would_quickly_try_to_summarize_some_specific_answers_to_your_questions\": \"(1) Regarding the scope of the paper. We do not generalize Zaslavsky's Theorem to scenarios that arise in deep models, as you suggest. For a single layer rectifier MLP it turns out that each hidden unit partitions the space using a hyperplane. This means that the number of regions we have is the number of regions you can get from an arrangement of $n$ hyperplanes. The answer to this question is given by Zaslavsky's Theorem. This offers as an upper bound to the maximal number of regions a single hidden layer model has. To make our point (that deep models are more efficient) we only need to show now that there exist deep models that result in more regions. We do so by constructing a specific deep model for which we can compute lower bound to the number of linear regions it generates. Asymptotically this lower bound is much larger than the maximal number of regions a single layer model has. \\n\\n (2) 'region'/'region of linearity'/'input space partition space' means the same thing. It is a connected region of the input space U, such that within this region every hidden unit either stays positive or is equal to 0. Or in other words, within this region the MLP is a linear function with respect to its input.\\n\\n (3) Regarding figure 2, we do use a new construction now that is depicted in more detail. \\n\\n (4) Proposition 8 and 9 have been replaced completely. Proposition 8 describes a lower bound on how many new regions you can get within a layer, given the number of regions you have generated up to that layer. Proposition 9 used proposition 8 to give a lower bound on the total number of regions of a deep model.\\n\\n (5) Regarding the statement about the output activation function. We simply meant to say that it is sufficient to study models with linear output activation function to get a sense for MLPs with sigmoids or softmax output activations. To see this do the following. Consider you have some rectifier MLP with sigmoids at the output layer, and some target function $f$. Let $s$ stand for the sigmoid activation function. To get a sense of how well we can approximate $f$, we can look at how well we can approximate $s^{-1}(f)$ with a rectifer MLP that has a linear output activation function.\"}", "{\"reply\": \"Thank you for your comments Reviewer 67e9. We have carefully considered them and integrated them in the new version of the paper, which is now available on arxiv (v5). In what follows let us answer to some of your concerns.\\n\\n'But there are several problems with the paper. 
The writing is unclear, and overall the paper feels like a preliminary draft, not ready for prime time.'\\n\\nWe think the paper has improved steadily since the initial submission and we hope that, in the new version, it is clear and up to ICLR quality standards. \\nThe paper offers a new perspective on a hard question and we think that the presented ideas can be useful for addressing a variety of related problems. \\n\\n'The introduction can be tightened up.'\\n\\nWe shortened the Introduction. \\n\\n'A more significant comment concerns the important attempt to give some intuition, at the top of page 3...'\\n\\nIn our attempt to provide the simplest possible example of the mechanism behind our proof, we unfortunately made a mistake. You are right, with a single input unit it is not possible to construct networks for which distinct units are active at different input intervals in the way that was claimed in that example. Thank you for pointing this out. We fixed the mistake. Proposition 7 (now Proposition 4) indicates the relevant conditions. \\n\\n'I think the basic intuition is that a higher-level unit can act as an OR of several lower level regions...'\\n\\nExactly, this is the intuition that we were trying to convey in that paragraph. This intuition forms the main mechanism behind our proof of the main theorem. We hope that with the changes made to the manuscript the construction is now clearer. \\n\\n'Also, one would expect that the ability to represent AND operations would also lead to significant expressivity gains. '\\n\\nThe offered proof relies only on the OR operation, but one should be careful about what this means exactly. \\nSpecifically, we do not compute the OR between two values and provide the result as output of the layer. Instead, what this OR operation describes is that some particular output value can be obtained from various inputs: input1 OR input2 OR input3, etc. In this context an AND operation does not make sense. \\n\\nWhat we are describing here is a function that is not injective, i.e., which has distinct domain values that are mapped to the same output value. Of course, the injectivity is lifted to the level of domain or input regions rather than individual input values. However, you can see how AND becomes impossible to express in these terms. \\n\\n'In addition, it would be very helpful to have some concrete example of a function that can be computed by a deep net of the sort analyzed here...'\\n\\nThank you for the suggestion. We included a description of more intuitive classes of functions computable by rectifier models in Section 5, together with toy examples with 2-dimensional inputs. \\nWhile the main theorem focuses more on the asymptotic regime, the new construction given in Section 5 shows that there are classes of functions that can be computed far more efficiently by deep networks than by shallow ones, even if the number of layers of the deep networks is relatively small, say equal to 3 or 4. \\n\\n'I also found the proof of Theorem 8 very hard to understand..'\\n\\nWe completely overworked that proof, paying attention to the consistency of the notation and keeping the mathematics precise. We extracted parts of the proof into propositions, in order to make the steps clearer.\\n\\n'Finally, I would recommend exploring the relationship between the ideas in this paper and the extensive work in circuit complexity... '\\n\\nThank you. We will look carefully at that literature. 
There might be some interesting connections.\"}", "{\"reply\": [\"We appreciate the detailed comments of Reviewer 2699. They were very helpful for preparing the present revision of the manuscript. In the following we address all comments of the reviewer and give description of the changes made to the manuscript.\", \"In response to the general comments we\", \"Changed the title of the manuscript to ``On the number of response regions of deep feedforward networks with piecewise linear activations''\", \"Shortened the Introduction (and removed an erroneous example that was given there)\", \"Completely overworked the proof of the former Theorem 8 (now Theorem 1).\", \"Moved the asymptotic analysis to the appendix\", \"Included a new section (Section 5) discussing tighter bounds for deep models.\", \"In the following we address the detailed comments.\", \"``Computational geometry'' refers to the study of algorithms using geometry. Here, using the word ``computational'' is just a matter of taste. Our motivation is that a neural network is a computational system and an algorithm (compute output of unit $i$ for $iin [n]$, sum the outputs of units $iin[n]$, etc.).\", \"We included a reference to Hajnal's work in the Introduction.\", \"We included pointers to the work of Montufar et al. and Martens et al. in the Introduction.\", \"We removed the long discussion from the Introduction and decided, instead, to include an example (Example 1) in the vicinity of the main theorem (Theorem 1).\", \"We included several definitions and worked on making our formulations more precise.\", \"We corrected a missing reference to Zaslavsky's work.\", \"We included the formal definition of ``general position'' and comments on infinitesimal perturbations.\", \"We no longer use the expression ``relative position'' For clarity, in the previous manuscript, the definition was as follows: Two arrangements have the same relative position if they are combinatorially equivalent, or more formally, if there is a bijection of their intersection posets, where the intersection poset of an arrangement is the set of all nonempty intersection of its hyperplanes partially ordered by reverse inclusion.\", \"We reformulated the former Proposition 6 in terms of scaling and shifting, moving technical details to the proof, and avoiding the use of the expression ``relative position''.\", \"We corrected the former Figure 3, which now shows just 1 dotted circle instead of 2.\", \"We explained the notion of ``essentialization'' in more detail. Proposition 4 describes the combinatorics of $n$-dimensional arrangements with $2$-dimensional essentialization; that is, arrangements of hyperplanes whose intersections with the span of their normal vectors build a $2$-dimensional arrangement (on the span of the normal vectors).\", \"We corrected '${0,...,n}, a < b' -> '{0,...,n} s.t. a < b$'.\", \"We improved our formulations, especially about ``independent groups of units'' and ``enumeration'' of the hyperplanes in an arrangement.\", \"We made significant efforts in clarifying how the construction works, bottom to top.\", \"We improved the formulations ``we find'', ``groups'', trying to make the arguments more formal and clearer.\", \"The reviewer asked how Proposition 7 was used in the proof of the theorem. Using the same activation weights for a collection of units would cause them to behave identically. The entire collection of units would have an output of dimension at most one, which would not be useful for our proof. 
Perturbing the weights to produce a full dimensional matrix would work. A high level proof could be formulated in this way. We found it important to give an explicit choice of weights for which certain well defined properties hold, instead of relying only on high level arguments, in particular, because this allows us to verify the accuracy of our intuitions.\", \"In fact, our construction is stable, in the sense that small perturbations of the specified weights cause only small perturbations of the computed function. The resulting perturbed function has at least as many linear regions as the original one.\", \"The word `decompose' was meant in the opposite way that `compose' is used for compositions of functions, $f \circ g$. Thanks for the comment, we tried to use more precise expressions.\", \"The reviewer asked\", \"``Page 9: What does it mean for a linear map to be 'with Ii-coordinates equal to...'? The 'Ii-coordinates' are the inputs to this map? The outputs? ''\", \"We tried to make this more precise in the revision. For clarity, the terminology is the standard one:\", \"A ``coordinate'' of a map $f : R^n \to R^m; (x_1, \ldots, x_n) \mapsto (f_1(x_1, \ldots, x_n), \ldots, f_m(x_1, \ldots, x_n))$ is any of the functions $f_i : R^n \to R; (x_1, \ldots, x_n) \mapsto f_i(x_1, \ldots, x_n)$ for $i = 1, \ldots, m$. Given a subset $I$ of $\{1, \ldots, m\}$, the $I$-coordinates of the map $f$ are the functions $f_i$ with $i \in I$.\", \"For example, if $I = \{i_1, \ldots, i_{|I|}\} \subseteq \{1, \ldots, m\}$, we can consider the map defined by the $I$-coordinates of $f$, which is the map $f_I : R^n \to R^{|I|}; (x_1, \ldots, x_n) \mapsto (f_{i_1}(x_1, \ldots, x_n), \ldots, f_{i_{|I|}}(x_1, \ldots, x_n))$.\", \"The reviewer asked ``What is $R_i^{(1)}$? It is never defined...''.\", \"We worked on better explaining the notation and using it uniformly.\", \"The reviewer wrote ``Page 9: You say that something 'passed through' $\rho^2$ is the 'input' to the''...\", \"We improved the terminology.\", \"Also ``Page 9: How can $\rho_i^{(2)}(R_i^{(1)})$ be a 'subset' of an subspace that lives...''\", \"We overworked these parts as well.\", \"The reviewer suggested a new construction for proving statements about deep models. We do not argue that our construction or our analysis yields the maximal number of regions of linearity. It merely demonstrates that deep models are exponentially more efficient than shallow models. In the revision we included a new construction of weights of deep rectifier networks (in Section 5), which shows tighter bounds for certain choices of layer widths. Other constructions exploiting higher dimensional versions of the former Proposition 7 are worth studying in the future, in order to arrive at yet tighter bounds.\", \"We moved the asymptotic analysis to the appendix. In the Discussion we included comments about the number of linear regions computable per parameter.\"]}", "{\"review\": \"We have posted a new revision (v5) of our manuscript. In this revision we address the reviewers' comments.\", \"the_most_important_changes_are\": [\"We reduced the length of the introduction and moved some of the detailed descriptions or intuitions near the relevant propositions or theorems.\", \"We fixed some shortcomings of an example that was given in the Introduction of the previous version of the manuscript.\", \"We added missing formal definitions and worked on making our notation more consistent and rigorous.\", \"We overworked the proof of the former Theorem 8 (now Theorem 1). 
We included new diagrams illustrating the steps of the proof. We also included an example (Example 1) illustrating how the components of the proof are put together in the proof.\", \"We added a new section (Section 5) describing a construction of weights for which deep models exhibit more linear regions than shallow ones, even for a small number of hidden layers. In this section we also illustrate specific functions that can be represented with this choice of weights.\", \"We formulated bounds in terms of the number of regions per parameter computable by rectifier networks. These bounds behave similarly to the bounds expressed in terms of the number of units, showing that deep models are exponentially more efficient than shallow models.\"]}" ] }
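The quantity at issue throughout this record — the number of linear regions of a rectifier network, that is, connected pieces of the input space on which every hidden unit is either strictly positive or clamped to zero — can be estimated empirically for toy models. The sketch below is an editorial illustration, not code from the paper: it counts distinct ReLU activation patterns over a dense 2-D grid, which lower-bounds the number of regions, since every fixed on/off pattern makes the network affine on a convex (hence connected) set. Note that with random Gaussian weights the two-layer model need not show more regions than the one-layer model with the same number of units; the paper's claim concerns the maximum attainable over weight choices, for which it gives explicit constructions.

```python
import numpy as np

# Estimate the number of linear regions of a small rectifier MLP by counting
# distinct ReLU activation patterns on a dense 2-D input grid. Inputs sharing
# the same on/off pattern in every hidden layer see the same affine map, so
# the number of distinct patterns observed is a lower bound on the number of
# linear regions.

def relu_patterns(x, weights, biases):
    """Boolean 'which units are active' pattern for each input row."""
    bits = []
    h = x
    for W, b in zip(weights, biases):
        pre = h @ W + b
        bits.append(pre > 0)
        h = np.maximum(pre, 0.0)
    return np.concatenate(bits, axis=1)

def count_observed_regions(weights, biases, lim=3.0, steps=400):
    grid = np.linspace(-lim, lim, steps)
    xx, yy = np.meshgrid(grid, grid)
    inputs = np.column_stack([xx.ravel(), yy.ravel()])
    patterns = relu_patterns(inputs, weights, biases)
    return len(np.unique(patterns, axis=0))

rng = np.random.default_rng(0)
n0, n = 2, 8  # illustrative input size and units per layer

# shallow model: one hidden layer with 2n units
shallow_W = [rng.normal(size=(n0, 2 * n))]
shallow_b = [rng.normal(size=2 * n)]

# deep model: two hidden layers with n units each (same total number of units)
deep_W = [rng.normal(size=(n0, n)), rng.normal(size=(n, n))]
deep_b = [rng.normal(size=n), rng.normal(size=n)]

print("shallow regions observed:", count_observed_regions(shallow_W, shallow_b))
print("deep regions observed:   ", count_observed_regions(deep_W, deep_b))
```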
QDm4QXNOsuQVE
k-Sparse Autoencoders
[ "Alireza Makhzani", "Brendan Frey" ]
Recently, it has been observed that when representations are learnt in a way that encourages sparsity, improved performance is obtained on classification tasks. These methods involve combinations of activation functions, sampling steps and different kinds of penalties. To investigate the effectiveness of sparsity by itself, we propose the k-sparse autoencoder, which is a linear model, but where in hidden layers only the k highest activities are kept. When applied to the MNIST and NORB datasets, we find that this method achieves better classification results than denoising autoencoders, networks trained with dropout, and restricted Boltzmann machines. k-sparse autoencoders are simple to train and the encoding stage is very fast, making them well-suited to large problem sizes, where conventional sparse coding algorithms cannot be applied.
[ "autoencoders", "sparsity", "representations", "way", "performance", "classification tasks", "methods", "combinations", "activation functions", "steps" ]
submitted, no decision
https://openreview.net/pdf?id=QDm4QXNOsuQVE
https://openreview.net/forum?id=QDm4QXNOsuQVE
ICLR.cc/2014/conference
2014
{ "note_id": [ "Gg_aFHZkvdGNC", "01dx0NWMEZGjr", "P-iz-xpWztV_s", "l6rclPZecziqO", "o1QJezhmJPem2", "bbr_bzvcfMSDd", "88wtvqkZBJaOo", "TTfOb3K5AclE7", "oV1qV9LKY43KA", "mOSRjNNtRhj91", "IQiNInntNzvtF", "ssEosd49J-s3f", "BB-qwYPO53cZd", "ZpJXvOG3-Cv6U", "tmV9T33Eo0myh", "PPfbPOaNgVPWE", "0Mj2MHxs2CA36", "IGP-jflb5FImc", "55fTgHc4IMg4N", "gP7ACWWzucCnT", "U_SZ1ftvTIxvg", "Wsm0zCOF1isf5", "22GAEiwLF02Cg", "Np1RNxW_L5pw2", "-1_6TtCJ3iKOz" ], "note_type": [ "review", "review", "review", "review", "comment", "review", "comment", "review", "comment", "review", "review", "review", "comment", "review", "review", "review", "review", "comment", "review", "review", "review", "comment", "review", "comment", "review" ], "note_created": [ 1388841360000, 1391148960000, 1391149020000, 1392085260000, 1389824640000, 1391149020000, 1392782820000, 1391148960000, 1392783300000, 1391858040000, 1391149020000, 1391149020000, 1391370300000, 1387854420000, 1391159880000, 1391728860000, 1391148960000, 1392783240000, 1391148960000, 1391149020000, 1391148960000, 1390029480000, 1391149020000, 1389051900000, 1391149020000 ], "note_signatures": [ [ "David Krueger" ], [ "Phil Bachman" ], [ "Phil Bachman" ], [ "anonymous reviewer d08c" ], [ "David Krueger" ], [ "Phil Bachman" ], [ "Alireza Makhzani" ], [ "Phil Bachman" ], [ "Alireza Makhzani" ], [ "anonymous reviewer c32b" ], [ "Phil Bachman" ], [ "Phil Bachman" ], [ "Alireza Makhzani" ], [ "Markus Thom" ], [ "Phil Bachman" ], [ "anonymous reviewer b245" ], [ "Phil Bachman" ], [ "Alireza Makhzani" ], [ "Phil Bachman" ], [ "Phil Bachman" ], [ "Phil Bachman" ], [ "Alireza Makhzani" ], [ "Phil Bachman" ], [ "Alireza Makhzani" ], [ "Phil Bachman" ] ], "structured_content_str": [ "{\"review\": \"I have not read the paper that carefully. But the idea of the k-sparse autoencoder seems very similar to the Orthogonal Matching Pursuit (OMP-k) training and encoding used in Coates and Ng (http://www.stanford.edu/~acoates/papers/coatesng_icml_2011.pdf).\\n\\nThe difference, it seems to me, is that OMP-k allows less than k units to be active, and also the scheduling idea (4.2.1), and the alpha multiplier for the encoding stage.\\n\\nIs there something else I am missing that distinguishes your approach? \\n\\nOtherwise, I would like to see comparisons to OMP-k, and to a simple threshold encoding approach as in Coates and Ng.\"}", "{\"review\": \"This paper should be citing: 'Smooth Sparse Coding via Marginal Regression\\nfor Learning Sparse Representations', by Krishnakumar Balasubramanian, Kai Yu, and Guy Lebanon. In their work, they use a sparse coding subroutine effectively identical to your sparse coding method. The only difference in their encoding step is that they threshold the magnitude-sorted set of potential coefficients by cumulative L1 norm rather than cumulative L0 norm (a minor difference). They also add a regularization term to their objective designed to approximately minimize coherence of the learned dictionary, which may be worth trying as an addition to your current approach.\\n\\nThis general class of techniques, i.e. marginal regression, is reasonably well-known and has been investigated previously as a quick-and-dirty approximation to the Lasso. For more detail, see: 'A Comparison of the Lasso and Marginal Regression', by Genovese et. al. 
I haven't looked at this paper in a while, but it should contribute to your theory, as they present results similar to yours, but in the context of approximating the Lasso in a linear regression setting.\\n\\nIt might be interesting to try a 'marginalized dropout' encoding at test time, in which each coefficient is scaled by it's probability of being among the top-k coefficients when the full coefficient set is subject to, e.g., 50% dropout. This would correspond to a simple rescaling of each coefficient by its location in the magnitude-sorted list. The true top-k would still be included fully in the encoding, while coefficients outside the true top-k would quickly shrink towards 0 as you move away from the true top-k. The shrinkage factors could be pre-computed for all possible sorted list positions based on the CDF of a Binomial distribution. If 'true' sparsity is desired, the shrunken coefficients could be hard-thresholded at some small epsilon. This would 'smooth' the encoding, perhaps removing aliasing effects that may occur with a hard threshold. This would be a fairly natural encoding to use if dictionary elements were also subject to (unmarginalized) dropout at training time. The additional computational cost would be trivial, as shrinkage and thresholding would be applied simultaneously to the magnitude-sorted coefficient list with a single element-wise vector product.\", \"paper_1\": \"http://www.cc.gatech.edu/~lebanon/papers/ssc_icml13.pdf\", \"paper_2\": \"http://www.stat.cmu.edu/~jiashun/Research/Year/Marginal.pdf\"}", "{\"review\": \"This paper should be citing: 'Smooth Sparse Coding via Marginal Regression\\nfor Learning Sparse Representations', by Krishnakumar Balasubramanian, Kai Yu, and Guy Lebanon. In their work, they use a sparse coding subroutine effectively identical to your sparse coding method. The only difference in their encoding step is that they threshold the magnitude-sorted set of potential coefficients by cumulative L1 norm rather than cumulative L0 norm (a minor difference). They also add a regularization term to their objective designed to approximately minimize coherence of the learned dictionary, which may be worth trying as an addition to your current approach.\\n\\nThis general class of techniques, i.e. marginal regression, is reasonably well-known and has been investigated previously as a quick-and-dirty approximation to the Lasso. For more detail, see: 'A Comparison of the Lasso and Marginal Regression', by Genovese et. al. I haven't looked at this paper in a while, but it should contribute to your theory, as they present results similar to yours, but in the context of approximating the Lasso in a linear regression setting.\\n\\nIt might be interesting to try a 'marginalized dropout' encoding at test time, in which each coefficient is scaled by it's probability of being among the top-k coefficients when the full coefficient set is subject to, e.g., 50% dropout. This would correspond to a simple rescaling of each coefficient by its location in the magnitude-sorted list. The true top-k would still be included fully in the encoding, while coefficients outside the true top-k would quickly shrink towards 0 as you move away from the true top-k. The shrinkage factors could be pre-computed for all possible sorted list positions based on the CDF of a Binomial distribution. If 'true' sparsity is desired, the shrunken coefficients could be hard-thresholded at some small epsilon. 
This would 'smooth' the encoding, perhaps removing aliasing effects that may occur with a hard threshold. This would be a fairly natural encoding to use if dictionary elements were also subject to (unmarginalized) dropout at training time. The additional computational cost would be trivial, as shrinkage and thresholding would be applied simultaneously to the magnitude-sorted coefficient list with a single element-wise vector product.\", \"paper_1\": \"http://www.cc.gatech.edu/~lebanon/papers/ssc_icml13.pdf\", \"paper_2\": \"http://www.stat.cmu.edu/~jiashun/Research/Year/Marginal.pdf\"}", "{\"title\": \"review of k-Sparse Autoencoders\", \"review\": \"The authors propose an auto encoder with linear encoder and decoder, but with sparsity that keeps only k elements in the hidden layer nonzero. They show that it works as well or better then more complicated methods.\", \"novelty\": \"Simple but works\", \"quality\": \"Good\", \"details\": [\"The paper introduces a very simple idea which I am sure many people not only thought of but implemented, including me. However the main point here is that the authors actually made it work well and made a connection to a sparse coding algorithm. One of tricks of making it work seems to be to start with a large number of allowed nonzero elements and then decrease it, otherwise, many filters would not ever be used.\", \"Is there a mistake in the algorithm box as presented x = Wz+b'. Shouldn't the z be replaced by something like z_Gamma where the latter is obtained from z by setting elements that are not in the group of k largest to zero? Because that's what the description in the rest of the paper implies, for example in 2.2.\", \"Table - it would be good to explain that the net is in Table 3's caption.\"]}", "{\"reply\": \"I agree on all the points made about Theorem 3.1.\\n\\n'supp_k(z) = supp_k(W^T*x)' would be clearer as well as more succinct.\\n\\nI also encourage you to put a little box at the end of the proof.\\n\\nAnd If I understand it correctly, you can use a weaker condition, namely:\\n\\nk*mu < z_k/z_1 (note strict inequality)\"}", "{\"review\": \"This paper should be citing: 'Smooth Sparse Coding via Marginal Regression\\nfor Learning Sparse Representations', by Krishnakumar Balasubramanian, Kai Yu, and Guy Lebanon. In their work, they use a sparse coding subroutine effectively identical to your sparse coding method. The only difference in their encoding step is that they threshold the magnitude-sorted set of potential coefficients by cumulative L1 norm rather than cumulative L0 norm (a minor difference). They also add a regularization term to their objective designed to approximately minimize coherence of the learned dictionary, which may be worth trying as an addition to your current approach.\\n\\nThis general class of techniques, i.e. marginal regression, is reasonably well-known and has been investigated previously as a quick-and-dirty approximation to the Lasso. For more detail, see: 'A Comparison of the Lasso and Marginal Regression', by Genovese et. al. I haven't looked at this paper in a while, but it should contribute to your theory, as they present results similar to yours, but in the context of approximating the Lasso in a linear regression setting.\\n\\nIt might be interesting to try a 'marginalized dropout' encoding at test time, in which each coefficient is scaled by it's probability of being among the top-k coefficients when the full coefficient set is subject to, e.g., 50% dropout. 
This would correspond to a simple rescaling of each coefficient by its location in the magnitude-sorted list. The true top-k would still be included fully in the encoding, while coefficients outside the true top-k would quickly shrink towards 0 as you move away from the true top-k. The shrinkage factors could be pre-computed for all possible sorted list positions based on the CDF of a Binomial distribution. If 'true' sparsity is desired, the shrunken coefficients could be hard-thresholded at some small epsilon. This would 'smooth' the encoding, perhaps removing aliasing effects that may occur with a hard threshold. This would be a fairly natural encoding to use if dictionary elements were also subject to (unmarginalized) dropout at training time. The additional computational cost would be trivial, as shrinkage and thresholding would be applied simultaneously to the magnitude-sorted coefficient list with a single element-wise vector product.\", \"paper_1\": \"http://www.cc.gatech.edu/~lebanon/papers/ssc_icml13.pdf\", \"paper_2\": \"http://www.stat.cmu.edu/~jiashun/Research/Year/Marginal.pdf\"}", "{\"reply\": \"We very much appreciate your constructive feedbacks.\\n\\n1. 'the method is a bit flawed because it does not control sparsity across samples (yielding to possibly many dead units). It would be very helpful to add experiments with a few different values of code dimensionality. For instance on MNIST, it would be interesting to try: 1000, 2000 and 5000.'\\n\\nThank you for raising this concern. All the reported experiments on NORB has been done with 4000 hidden units. We have also done experiments with 4000 hidden units on MNIST and with a proper scheduling of k, we were able to train almost all the filters of the autoencoder and obtain classification result of 1.15% before fine-tuning and 0.99% after fine-tuning. In the case of MNIST, when we start off with a large sparsity level, almost all the filters get trained in the first 50 epochs. As we decrease the sparsity level, the filters start evolving to more global filters and the length of digit strokes start increasing. We didn't report the results with 4000 hidden units on MNIST so that we could have a fair comparison with other works that use 1000 hidden units. Based on your feedback, we will include this result and the details of the experiment in the final manuscript.\\n\\n2. 'The paper is fairly incremental in its novelty. There are several other papers that used similar ideas'\\n\\nThank you for raising concerns about the related works. We would like to point out that there are important differences between our paper and the works you mentioned.\\n\\nWe compared our method to the marginal regression method in response to Phil Bachman's comment (see above). In short, While we are addressing the conventional sparse coding problem with a Euclidean cost function, Krishnakumar's paper defines a different cost function using a non-parametric kernel function applied on data. We derived our operator from iterative hard thresholding which is completely different from marginal regression and behave differently. Another difference is that marginal regression uses an L1 penalty on the absolute value of the least square coefficients to promote sparsity. We have tried using the L1 norm instead of the L0 norm in our algorithm and we were not able to train the model. So, using the L0 norm makes a significant difference. 
We use our operator to regularize deep neural nets and do supervised learning while they use marginal regression in an unsupervised fashion and then train a SVM on top of that for classification. The analysis we provide for algorithm is quite different and in our view complementary to the results in the paper that you mentioned. Based on your feedback, we will include this comparison in our paper.\\n\\nAlthough there are interesting connections between 'Compete to Compute' paper and our paper, The focus and details of the two works are rather different. The focus of our paper is sparse coding while the 'Compete to Compute' paper splits the hidden units to the several groups of two hidden units, and pick the largest hidden unit within each group. So exactly half of the hidden units are always active at any time (similar to 50% dropout) and there is no sparsity in the hidden representation. Our operator is also quite different as it picks the k largest hidden units among all hidden units while they pick one single hard winner within each group.\\n\\n3. 'lack of comparison'\\n\\nWe have compared our method to several methods including dropout, denoising autoencoder, DBN, DBM and third-order RBM.\\nRegarding the LISTA and PSD methods, we have cited them and made a brief comparison in the introduction of the paper. Our error rate on MNIST before fine-tuning is 1.38% while LISTA's best error rate is 2.15% and PSD's is 4.5% (1000 samples/class). Based on your feedback, we will also add these numerical comparisons in the final manuscript.\\nAlso, In the Coates and Ng's paper, the thresholding operator is only used at test time and training is performed using other algorithms, such as OMP-k, which are very slow. Another difference is that they use a fixed and pre-defined soft thresholding operator and do not have control over the sparsity level, while we are using a hard thresholding operator in which the threshold is adaptive and is equal to the k-th largest element of the input.\\n\\nThank you again for bringing up these issues, since it helps us better place our work in context.\"}", "{\"review\": \"This paper should be citing: 'Smooth Sparse Coding via Marginal Regression\\nfor Learning Sparse Representations', by Krishnakumar Balasubramanian, Kai Yu, and Guy Lebanon. In their work, they use a sparse coding subroutine effectively identical to your sparse coding method. The only difference in their encoding step is that they threshold the magnitude-sorted set of potential coefficients by cumulative L1 norm rather than cumulative L0 norm (a minor difference). They also add a regularization term to their objective designed to approximately minimize coherence of the learned dictionary, which may be worth trying as an addition to your current approach.\\n\\nThis general class of techniques, i.e. marginal regression, is reasonably well-known and has been investigated previously as a quick-and-dirty approximation to the Lasso. For more detail, see: 'A Comparison of the Lasso and Marginal Regression', by Genovese et. al. I haven't looked at this paper in a while, but it should contribute to your theory, as they present results similar to yours, but in the context of approximating the Lasso in a linear regression setting.\\n\\nIt might be interesting to try a 'marginalized dropout' encoding at test time, in which each coefficient is scaled by it's probability of being among the top-k coefficients when the full coefficient set is subject to, e.g., 50% dropout. 
This would correspond to a simple rescaling of each coefficient by its location in the magnitude-sorted list. The true top-k would still be included fully in the encoding, while coefficients outside the true top-k would quickly shrink towards 0 as you move away from the true top-k. The shrinkage factors could be pre-computed for all possible sorted list positions based on the CDF of a Binomial distribution. If 'true' sparsity is desired, the shrunken coefficients could be hard-thresholded at some small epsilon. This would 'smooth' the encoding, perhaps removing aliasing effects that may occur with a hard threshold. This would be a fairly natural encoding to use if dictionary elements were also subject to (unmarginalized) dropout at training time. The additional computational cost would be trivial, as shrinkage and thresholding would be applied simultaneously to the magnitude-sorted coefficient list with a single element-wise vector product.\", \"paper_1\": \"http://www.cc.gatech.edu/~lebanon/papers/ssc_icml13.pdf\", \"paper_2\": \"http://www.stat.cmu.edu/~jiashun/Research/Year/Marginal.pdf\"}", "{\"reply\": \"Thank you very much for your helpful comments. It will be straightforward for us to address the ambiguities you raised about the content of the algorithm box, and produce a version that is clearer in the final manuscript.\"}", "{\"title\": \"review of k-Sparse Autoencoders\", \"review\": [\"Brief summary of the paper:\"], \"the_paper_investigates_a_very_simple_heuristic_technique_for_a_simple_autoencoder_to_learn_a_sparse_encoding\": [\"the nonlinearity consists in only retaining the top-k linear activations among the hidden units and setting the others to zero. With the thus trained k-sparse autoencoers, the authors were able to outperform (on MNIST and NORB) features learned with denoising autoencoders, RBMs or dropout, as well as deep networks pre-trained with those techniques and fine-tuned.\", \"Assessment:\", \"This is an interesting investigation of a very simple approach to obtaining sparse representations that empirically seem to perform very well. It does have a number of weaknesses:\", \"I find it misleading to call this a *linear model* as in the abstract. A piecewise linear function is *not* a linear function. The model does use a non-linear sparsification operation.\", \"For such a simple approach, I find the actual description of the algorithm (especially in the algorithm box) disappointingly fuzzy, unclear, confusing and probably wrong:\", \"What is the exact objective being optimized? Is is always squared reconstruction error? It is not written in the box. Also do you really reconstruct x^ from a z that has not been sparsified (as is written in step 1)?? This is contrary to my understanding from reading the rest of the paper. I believe it would be much clearer to introduce an explicit sparsification step before the reconstruction. Similarly, with your definition of supp, it looks like the result of your sparse encoding h is a *set of indices* rather than a sparse vector. Is this intended? Wouldn't it be clearer to define an operation that returns a sparse vector rather than a set of indices? The algorithm box should be rewritten more formally, removing any ambiguity.\", \"Section 3.3: While I find the discussion on the importance of decoherence interesting, I do not believe you can formally draw your conclusion from it, since you do not have the strict equality x = Wz (perfect reconstruction) that your theorem depends on but only an approximate reconstruction. 
So I would mitigate the final claims.\", \"I wonder how thoroughly you have explored the hyper-parameter space for the other pre-training algorithms you compare yourself with, especially those that are expected to influence sparsity or control capacity somehow, as e.g. the noise level for denoising autoencoders and dropout?? Did you but try a single a-priori chosen value? If so the comparisons might be a little unfair since you hyper-optimized your alpha on the validation set.\", \"Pros and Cons:\"], \"pros\": [\"interesting approach due to its simplicity\", \"very good empirical classification performance.\"], \"cons\": [\"confusing description of the algorithm (in algorithm box);\", \"possibly insufficient exploration of hyper-parameters of competing algorithms (relative to the amount of tweakings of the proposed approach).\"]}", "{\"review\": \"This paper should be citing: 'Smooth Sparse Coding via Marginal Regression\\nfor Learning Sparse Representations', by Krishnakumar Balasubramanian, Kai Yu, and Guy Lebanon. In their work, they use a sparse coding subroutine effectively identical to your sparse coding method. The only difference in their encoding step is that they threshold the magnitude-sorted set of potential coefficients by cumulative L1 norm rather than cumulative L0 norm (a minor difference). They also add a regularization term to their objective designed to approximately minimize coherence of the learned dictionary, which may be worth trying as an addition to your current approach.\\n\\nThis general class of techniques, i.e. marginal regression, is reasonably well-known and has been investigated previously as a quick-and-dirty approximation to the Lasso. For more detail, see: 'A Comparison of the Lasso and Marginal Regression', by Genovese et. al. I haven't looked at this paper in a while, but it should contribute to your theory, as they present results similar to yours, but in the context of approximating the Lasso in a linear regression setting.\\n\\nIt might be interesting to try a 'marginalized dropout' encoding at test time, in which each coefficient is scaled by it's probability of being among the top-k coefficients when the full coefficient set is subject to, e.g., 50% dropout. This would correspond to a simple rescaling of each coefficient by its location in the magnitude-sorted list. The true top-k would still be included fully in the encoding, while coefficients outside the true top-k would quickly shrink towards 0 as you move away from the true top-k. The shrinkage factors could be pre-computed for all possible sorted list positions based on the CDF of a Binomial distribution. If 'true' sparsity is desired, the shrunken coefficients could be hard-thresholded at some small epsilon. This would 'smooth' the encoding, perhaps removing aliasing effects that may occur with a hard threshold. This would be a fairly natural encoding to use if dictionary elements were also subject to (unmarginalized) dropout at training time. The additional computational cost would be trivial, as shrinkage and thresholding would be applied simultaneously to the magnitude-sorted coefficient list with a single element-wise vector product.\", \"paper_1\": \"http://www.cc.gatech.edu/~lebanon/papers/ssc_icml13.pdf\", \"paper_2\": \"http://www.stat.cmu.edu/~jiashun/Research/Year/Marginal.pdf\"}", "{\"review\": \"This paper should be citing: 'Smooth Sparse Coding via Marginal Regression\\nfor Learning Sparse Representations', by Krishnakumar Balasubramanian, Kai Yu, and Guy Lebanon. 
In their work, they use a sparse coding subroutine effectively identical to your sparse coding method. The only difference in their encoding step is that they threshold the magnitude-sorted set of potential coefficients by cumulative L1 norm rather than cumulative L0 norm (a minor difference). They also add a regularization term to their objective designed to approximately minimize coherence of the learned dictionary, which may be worth trying as an addition to your current approach.\\n\\nThis general class of techniques, i.e. marginal regression, is reasonably well-known and has been investigated previously as a quick-and-dirty approximation to the Lasso. For more detail, see: 'A Comparison of the Lasso and Marginal Regression', by Genovese et. al. I haven't looked at this paper in a while, but it should contribute to your theory, as they present results similar to yours, but in the context of approximating the Lasso in a linear regression setting.\\n\\nIt might be interesting to try a 'marginalized dropout' encoding at test time, in which each coefficient is scaled by it's probability of being among the top-k coefficients when the full coefficient set is subject to, e.g., 50% dropout. This would correspond to a simple rescaling of each coefficient by its location in the magnitude-sorted list. The true top-k would still be included fully in the encoding, while coefficients outside the true top-k would quickly shrink towards 0 as you move away from the true top-k. The shrinkage factors could be pre-computed for all possible sorted list positions based on the CDF of a Binomial distribution. If 'true' sparsity is desired, the shrunken coefficients could be hard-thresholded at some small epsilon. This would 'smooth' the encoding, perhaps removing aliasing effects that may occur with a hard threshold. This would be a fairly natural encoding to use if dictionary elements were also subject to (unmarginalized) dropout at training time. The additional computational cost would be trivial, as shrinkage and thresholding would be applied simultaneously to the magnitude-sorted coefficient list with a single element-wise vector product.\", \"paper_1\": \"http://www.cc.gatech.edu/~lebanon/papers/ssc_icml13.pdf\", \"paper_2\": \"http://www.stat.cmu.edu/~jiashun/Research/Year/Marginal.pdf\"}", "{\"reply\": \"Thank you Phil for referring us to Krishnakumar's paper. We read this paper and although there are interesting connections between the two works, there are important differences as well:\\n\\n1) While we are addressing the conventional sparse coding problem with a Euclidean cost function, Krishnakumar's paper defines a different cost function using a non-parametric kernel function applied on data. As a result, their hidden representation is modeling the neighborhood of training examples rather than reconstructing the individual samples.\\n\\n2) Although iterative hard thresholding (that we use) and marginal regression are both alternatives of LASSO, they are quite different algorithms and may behave quite differently. Iterative hard thresholding (IHT) is an iterative procedure for sparse recovery that uses L0 projection and refines the estimated support set at each iteration. If we use only the first iteration of IHT to learn the dictionary, we obtain our K-sparse autoencoder. But we can use more iterations (at the training or test time) and get better results at the price of more computational complexity. See http://www.see.ed.ac.uk/~tblumens/papers/BDIHT.pdf for more details about IHT. 
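As a concrete illustration of the iterative-hard-thresholding view described in this reply (an editorial sketch, not code from the paper or the thread; the unit step size, the zero initialisation and the names x, W, k are assumptions):

import numpy as np

def hard_threshold_topk(z, k):
    # L0 projection: keep the k largest-magnitude entries of z, zero out the rest.
    out = np.zeros_like(z)
    idx = np.argsort(np.abs(z))[-k:]
    out[idx] = z[idx]
    return out

def iht_encode(x, W, k, n_iter=3):
    # Iterative hard thresholding for a k-sparse code z with x ~= W @ z, using a
    # unit step size. With n_iter=1 and z started at zero this reduces to keeping
    # the k largest entries of W.T @ x, i.e. the encoding step being discussed.
    z = np.zeros(W.shape[1])
    for _ in range(n_iter):
        z = hard_threshold_topk(z + W.T @ (x - W @ z), k)
    return z

Running iht_encode with more iterations refines the estimated support at extra computational cost, which is the trade-off mentioned above.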
Marginal regression, however, is a different algorithm that uses an L1 penalty on the absolute value of the least square coefficients to promote sparsity. We have tried using an L1 norm instead of the L0 norm in our algorithm and we were not able to train the model. So, using the L0 norm makes a significant difference. We have also done experiments using the absolute value of the hidden representation and observed that taking absolute value hurts the performance in our setting. Based on your feedback, perhaps these results should be included?\\n\\n3) We have been able to regularize deep neural nets using our method and obtain better results than dropout and the denoising autoencoder on both MNIST and NORB. Using the neural nets gives us the advantage of fine-tuning a 'supervised learning task' with our thresholding operator. But Krishnakumar's paper obtains features modeling the neighborhood in an 'unsupervised fashion' and then uses an SVM on top of that for classification. So our algorithm could also be viewed as a regularization method for deep nets using sparsity.\\n\\n4) The analysis we provide for algorithm is quite different and in our view complementary to the results in the papers that you mentioned.\\n\\nThank you for bringing this other work to our attention, since it helps us better place our work in context, and also thanks for mentioning the 'marginalized dropout' idea, as we think it is definitely worth trying as an addition to our work.\"}", "{\"review\": \"This is an interesting paper I would like to comment on and ask the authors some questions. The paper studies auto-encoders where the internal representation is approximated online by a vector whose number of non-vanishing coordinates is restricted (in other words, the internal representation is projected onto the set of all vectors with a certain L0 pseudo-norm). The proposed model, 'k-sparse autoencoders', is put in context with Arian Maleki's 'iterative thresholding with inversion' algorithm for inference of sparse code words, and a criterion is given to identify the case when one iteration is enough for perfect inference. Experiments on MNIST and small NORB show that superior classification performance (wrt. RBMs, dropout, denoising auto-encoders) can be achieved when using the proposed model to generate features and processing them with logistic regression, and that the classification performance is competitive when all the models were additionally fine-tuned. This is a cool result, since the only non-linearity used for the features is the projection.\\n\\nWe have done something similar, see Section 3 of the paper available at http://jmlr.org/papers/v14/thom13a.html, by using projections onto sets on which certain sparseness measures (including the L0 pseudo-norm) attain a constant value as neural transfer function in a hybrid of an auto-encoder and an MLP. Inference of the internal representation can be understood here as carrying out the first iteration of a projected Landweber algorithm. Perhaps the authors would like to discuss the relationship between both approaches?\\n\\nThe last sentence in the discussion after the proof of Theorem 3.1 is a bit puzzling. The theorem shows that if mu is small, then the supports of z and W^T*x are identical. The aforementioned sentence says that these supports are identical, hence mu must be small. 
I believe this is the converse of the theorem's statement and has not been proven, since there may be reasons other than mu being small.\\n\\nThe description of the k schedule in Section 4.2.1 is ambiguous. Does this mean that when we have 100 epochs, say, that k follows a linear function for epochs 1 thru 50, and then remains at the minimum level for epochs 51 to 100, or does it mean that in each epoch for the first halve of the presented samples k is adjusted and stays at the minimum for the remaining samples of the epoch, and then the schedule starts all over again in the next epoch? There are still some dead hidden units in the figures on page 6, even for k = 70 on MNIST. Would it help to increase the initial k value in the schedule, or maybe add some small random numbers to z (with some annealed variance) after setting the small entries to zero, such that backprop adjusts all the hidden units?\", \"just_a_few_things_i_noticed_while_reading_through_the_manuscript\": [\"The formulation of the claim of Theorem 3.1 could be altered to be more succinct, e.g. 'supp_k(z) = supp_k(W^T*x)'. In the proof, i should be from {1, ..., k}, since i = 0 doesn't seem to make sense here.\", \"Typo on page 3, left column: 'tarining'\"]}", "{\"review\": \"I apologize for the clutter. The web interface was not responding on my end, though it apparently processed most of my requests on the server side of things. If anyone has moderator privileges, I would appreciate it if all but one of my earlier comments could be removed.\"}", "{\"title\": \"review of k-Sparse Autoencoders\", \"review\": \"In this paper, the authors propose a new sparse autoencoder. At training time, the input vector is projected onto a set of filters to produce code values. These code values are sorted and only the top k values are retained, the rest is set to 0 to achieve an exact k-sparse code. Then, the code is used to reconstruct the input by multiplying this code by the transpose of the encoding matrix. The parameters are learned via backpropagation of the squared reconstruction error. The authors relate this algorithm to sparse coding and demonstrate its effectiveness in terms of classification accuracy on the MNIST and NORB datasets.\\n\\nThe paper is fairly incremental in its novelty. There are several other papers that used similar ideas. Examples of the most recent ones:\\n- R. K. Srivastava, J. Masci, S. Kazerounian, F. Gomez, J. Schmidhuber. Compete to Compute. In Proc. Neural Information Processing Systems (NIPS) 2013, Lake Tahoe.\\n- Smooth Sparse Coding via Marginal Regression for Learning Sparse Representations K Balasubramanian, K Yu, G Lebanon\\nProceedings of the 30th International Conference on Machine Learning 2013\\n\\nThe theoretical analysis is good but straightforward.\\nWhat worries me the most is that a very important detail of the method (how to prevent 'dead' units) is only slightly mentioned. The problem is that the algorithm guarantees to find k-sparse codes for every input but not to use different codes for different inputs. As the code becomes more and more overcomplete (and filters more and more correlated), there will be more and more units that are not used making the algorithm rather inefficient.\\nThe authors propose to have a schedule on 'k' but that seems rather hacky to me. What happens if the authors use twice as many codes? what happens if they use 4 times as many codes? My guess is that it will break down easily. 
\\nMy intuition is that this is a very simple and effective method when the code has about the same dimensionality of the input but it is less effective in overcomplete settings. This is an issue that can be rather important in practical applications and that should be discussed and better addressed.\", \"pros\": [\"simplicity\", \"clearly written paper\"], \"cons\": [\"lack of novelty (see comments above)\", \"lack of comparison\", \"I would add a comparison to A. Coates method and to K. Gregor's LISTA (or K. Kavukcuoglu's PSD) (software for these methods is publicly available). These are the most direct competitors of the proposed method because they also try to compute a good and fast approximation to sparse codes.\", \"the method is a bit flawed because it does not control sparsity across samples (yielding to possibly many dead units). It would be very helpful to add experiments with a few different values of code dimensionality. For instance on MNIST, it would be interesting to try: 1000, 2000 and 5000.\", \"Overall, this is a nicely written paper proposing a simple method that seems to work fairly well. I have concerns about the novelty of this method and its robustness to highly overcomplete settings.\"]}", "{\"review\": \"This paper should be citing: 'Smooth Sparse Coding via Marginal Regression\\nfor Learning Sparse Representations', by Krishnakumar Balasubramanian, Kai Yu, and Guy Lebanon. In their work, they use a sparse coding subroutine effectively identical to your sparse coding method. The only difference in their encoding step is that they threshold the magnitude-sorted set of potential coefficients by cumulative L1 norm rather than cumulative L0 norm (a minor difference). They also add a regularization term to their objective designed to approximately minimize coherence of the learned dictionary, which may be worth trying as an addition to your current approach.\\n\\nThis general class of techniques, i.e. marginal regression, is reasonably well-known and has been investigated previously as a quick-and-dirty approximation to the Lasso. For more detail, see: 'A Comparison of the Lasso and Marginal Regression', by Genovese et. al. I haven't looked at this paper in a while, but it should contribute to your theory, as they present results similar to yours, but in the context of approximating the Lasso in a linear regression setting.\\n\\nIt might be interesting to try a 'marginalized dropout' encoding at test time, in which each coefficient is scaled by it's probability of being among the top-k coefficients when the full coefficient set is subject to, e.g., 50% dropout. This would correspond to a simple rescaling of each coefficient by its location in the magnitude-sorted list. The true top-k would still be included fully in the encoding, while coefficients outside the true top-k would quickly shrink towards 0 as you move away from the true top-k. The shrinkage factors could be pre-computed for all possible sorted list positions based on the CDF of a Binomial distribution. If 'true' sparsity is desired, the shrunken coefficients could be hard-thresholded at some small epsilon. This would 'smooth' the encoding, perhaps removing aliasing effects that may occur with a hard threshold. This would be a fairly natural encoding to use if dictionary elements were also subject to (unmarginalized) dropout at training time. 
The additional computational cost would be trivial, as shrinkage and thresholding would be applied simultaneously to the magnitude-sorted coefficient list with a single element-wise vector product.\", \"paper_1\": \"http://www.cc.gatech.edu/~lebanon/papers/ssc_icml13.pdf\", \"paper_2\": \"http://www.stat.cmu.edu/~jiashun/Research/Year/Marginal.pdf\"}", "{\"reply\": \"Thank you very much for your feedbacks.\\n\\n-We will clarify all the ambiguities that you mentioned in the abstract and the algorithm box. This should improve the clarity of the manuscript.\\n-Regarding your question about our reimplementation of the dropout and denoising autoencoder, we did the hyperparameter search for the dropout rate and the noise level. Further, we found that the reported results are consistent with those reported in the original papers.\"}", "{\"review\": \"This paper should be citing: 'Smooth Sparse Coding via Marginal Regression\\nfor Learning Sparse Representations', by Krishnakumar Balasubramanian, Kai Yu, and Guy Lebanon. In their work, they use a sparse coding subroutine effectively identical to your sparse coding method. The only difference in their encoding step is that they threshold the magnitude-sorted set of potential coefficients by cumulative L1 norm rather than cumulative L0 norm (a minor difference). They also add a regularization term to their objective designed to approximately minimize coherence of the learned dictionary, which may be worth trying as an addition to your current approach.\\n\\nThis general class of techniques, i.e. marginal regression, is reasonably well-known and has been investigated previously as a quick-and-dirty approximation to the Lasso. For more detail, see: 'A Comparison of the Lasso and Marginal Regression', by Genovese et. al. I haven't looked at this paper in a while, but it should contribute to your theory, as they present results similar to yours, but in the context of approximating the Lasso in a linear regression setting.\\n\\nIt might be interesting to try a 'marginalized dropout' encoding at test time, in which each coefficient is scaled by it's probability of being among the top-k coefficients when the full coefficient set is subject to, e.g., 50% dropout. This would correspond to a simple rescaling of each coefficient by its location in the magnitude-sorted list. The true top-k would still be included fully in the encoding, while coefficients outside the true top-k would quickly shrink towards 0 as you move away from the true top-k. The shrinkage factors could be pre-computed for all possible sorted list positions based on the CDF of a Binomial distribution. If 'true' sparsity is desired, the shrunken coefficients could be hard-thresholded at some small epsilon. This would 'smooth' the encoding, perhaps removing aliasing effects that may occur with a hard threshold. This would be a fairly natural encoding to use if dictionary elements were also subject to (unmarginalized) dropout at training time. The additional computational cost would be trivial, as shrinkage and thresholding would be applied simultaneously to the magnitude-sorted coefficient list with a single element-wise vector product.\", \"paper_1\": \"http://www.cc.gatech.edu/~lebanon/papers/ssc_icml13.pdf\", \"paper_2\": \"http://www.stat.cmu.edu/~jiashun/Research/Year/Marginal.pdf\"}", "{\"review\": \"This paper should be citing: 'Smooth Sparse Coding via Marginal Regression\\nfor Learning Sparse Representations', by Krishnakumar Balasubramanian, Kai Yu, and Guy Lebanon. 
In their work, they use a sparse coding subroutine effectively identical to your sparse coding method. The only difference in their encoding step is that they threshold the magnitude-sorted set of potential coefficients by cumulative L1 norm rather than cumulative L0 norm (a minor difference). They also add a regularization term to their objective designed to approximately minimize coherence of the learned dictionary, which may be worth trying as an addition to your current approach.\\n\\nThis general class of techniques, i.e. marginal regression, is reasonably well-known and has been investigated previously as a quick-and-dirty approximation to the Lasso. For more detail, see: 'A Comparison of the Lasso and Marginal Regression', by Genovese et. al. I haven't looked at this paper in a while, but it should contribute to your theory, as they present results similar to yours, but in the context of approximating the Lasso in a linear regression setting.\\n\\nIt might be interesting to try a 'marginalized dropout' encoding at test time, in which each coefficient is scaled by it's probability of being among the top-k coefficients when the full coefficient set is subject to, e.g., 50% dropout. This would correspond to a simple rescaling of each coefficient by its location in the magnitude-sorted list. The true top-k would still be included fully in the encoding, while coefficients outside the true top-k would quickly shrink towards 0 as you move away from the true top-k. The shrinkage factors could be pre-computed for all possible sorted list positions based on the CDF of a Binomial distribution. If 'true' sparsity is desired, the shrunken coefficients could be hard-thresholded at some small epsilon. This would 'smooth' the encoding, perhaps removing aliasing effects that may occur with a hard threshold. This would be a fairly natural encoding to use if dictionary elements were also subject to (unmarginalized) dropout at training time. The additional computational cost would be trivial, as shrinkage and thresholding would be applied simultaneously to the magnitude-sorted coefficient list with a single element-wise vector product.\", \"paper_1\": \"http://www.cc.gatech.edu/~lebanon/papers/ssc_icml13.pdf\", \"paper_2\": \"http://www.stat.cmu.edu/~jiashun/Research/Year/Marginal.pdf\"}", "{\"review\": \"This paper should be citing: 'Smooth Sparse Coding via Marginal Regression\\nfor Learning Sparse Representations', by Krishnakumar Balasubramanian, Kai Yu, and Guy Lebanon. In their work, they use a sparse coding subroutine effectively identical to your sparse coding method. The only difference in their encoding step is that they threshold the magnitude-sorted set of potential coefficients by cumulative L1 norm rather than cumulative L0 norm (a minor difference). They also add a regularization term to their objective designed to approximately minimize coherence of the learned dictionary, which may be worth trying as an addition to your current approach.\\n\\nThis general class of techniques, i.e. marginal regression, is reasonably well-known and has been investigated previously as a quick-and-dirty approximation to the Lasso. For more detail, see: 'A Comparison of the Lasso and Marginal Regression', by Genovese et. al. 
I haven't looked at this paper in a while, but it should contribute to your theory, as they present results similar to yours, but in the context of approximating the Lasso in a linear regression setting.\\n\\nIt might be interesting to try a 'marginalized dropout' encoding at test time, in which each coefficient is scaled by it's probability of being among the top-k coefficients when the full coefficient set is subject to, e.g., 50% dropout. This would correspond to a simple rescaling of each coefficient by its location in the magnitude-sorted list. The true top-k would still be included fully in the encoding, while coefficients outside the true top-k would quickly shrink towards 0 as you move away from the true top-k. The shrinkage factors could be pre-computed for all possible sorted list positions based on the CDF of a Binomial distribution. If 'true' sparsity is desired, the shrunken coefficients could be hard-thresholded at some small epsilon. This would 'smooth' the encoding, perhaps removing aliasing effects that may occur with a hard threshold. This would be a fairly natural encoding to use if dictionary elements were also subject to (unmarginalized) dropout at training time. The additional computational cost would be trivial, as shrinkage and thresholding would be applied simultaneously to the magnitude-sorted coefficient list with a single element-wise vector product.\", \"paper_1\": \"http://www.cc.gatech.edu/~lebanon/papers/ssc_icml13.pdf\", \"paper_2\": \"http://www.stat.cmu.edu/~jiashun/Research/Year/Marginal.pdf\"}", "{\"reply\": \"Thank you Markus and David for your constructive feedbacks.\\n\\nWe read your paper and found its results in line with that of ours. In your paper, an improved and differentiable sparsity enforcing projection with respect to the Hoyer's sparseness measure has been introduced. This projection is used in a supervised online autoencoder whose cost function has an alpha parameter that controls the trade-off between unsupervised learning and supervised learning. The algorithm has been tested on MNIST and achieved good classification rate. \\nThis paper is similar to our paper in that our hard thresholding operator could be viewed as an L0 projection and we are also using unsupervised learning to pre-train our discriminative neural net. It is true that by using a more complicated thresholding operator in the sparse recovery stage (as in your paper), we can obtain a better performance on datasets such as MNIST. For example, in Donoho & Maleki's 'approximate message passing' paper, it has been shown that a variant of iterative hard thresholding algorithm that uses a complex thresholding operator derived from message passing can beat even convex relaxation methods in sparse recovery in both performance and complexity. However, as we have discussed in the paper, the main motivation of this work is to propose a fast sparse coding algorithm on GPU that could be applied on larger datasets. We have shown that only the first few iterations of IHT combined with a dictionary update stage is enough to get close to the state of the art results. IHT only requires matrix multiplication in the sparse coding stage which makes it very fast on GPUs. 
We have discussed how the iterations of IHT algorithm could be viewed in the context of our proposed k-sparse autoencoder and how we can use it to pre-train deep neural nets.\\n\\nRegarding the incoherency, Theorem 3.1 establishes a connection between the incoherency of the dictionary and the chances of finding the true support set with the encoder part of the k-sparse autoencoders. We experimentally observe that k-sparse autoencoders converge to a local minimum. At this local minimum, the autoencoder has learnt a sparse code z, that satisfies x = Wz. Also the support set of this sparse code can be estimated using supp(W.T x). Since supp(W.T x) succeeds in finding the support set, according to the Theorem 3.1, the atoms of the dictionary should be well separated from each other and the learnt dictionary must be sufficiently incoherent.\\nYes, if we were only considering just one single training example, as you mentioned there could be other reasons that the support set could be recovered while the dictionary is coherent. But the only way that the k-sparse autoencoder can recover the true support set for 'all training examples' is that the dictionary be sufficiently incoherent.\\nTo measure the incoherency of the dictionary, we obtained the following result (see Deterministic Compressed Sensing by Jafarpour & Calderbank). We have experimentally showed that the k-sparse autoencoder learns a dictionary for which we can actually solve sparse recovery problems for other sparse signals and not just the training points. We first learned a dictionary W of size 784*1000 on MNIST using the k-sparse autoencoder with K = 20. We then picked K elements of z uniformly at random and set them to be one and set the rest of the elements to zero. Then we computed x = Wz and tried to reconstruct the support set of z from x using the encoder of the k-sparse autoencoder. We found that the trained dictionary was able to recover the support set of the arbitrary sparse signals perfectly. This is only possible when the dictionary is sufficiently incoherent and the atoms (features) are well separated from each other.\\n\\nRegarding the scheduling, It means when we have 100 epochs, k follows a linear function for epochs 1 to 50, and then remains at the minimum level for epochs 51 to 100. In our experiments, increasing k always helped to avoid dead hidden units. Another option is to add a KL penalty to the cost function of the autoencoder as in Ng's sparse autoencoder paper. This KL penalty encourages each hidden unit to be active a small number of times across the training set and avoids the problem of dead hidden units.\\n\\nThanks Markus and David for pointing out the typos. We will correct them in the final manuscript.\"}", "{\"review\": \"This paper should be citing: 'Smooth Sparse Coding via Marginal Regression\\nfor Learning Sparse Representations', by Krishnakumar Balasubramanian, Kai Yu, and Guy Lebanon. In their work, they use a sparse coding subroutine effectively identical to your sparse coding method. The only difference in their encoding step is that they threshold the magnitude-sorted set of potential coefficients by cumulative L1 norm rather than cumulative L0 norm (a minor difference). They also add a regularization term to their objective designed to approximately minimize coherence of the learned dictionary, which may be worth trying as an addition to your current approach.\\n\\nThis general class of techniques, i.e. 
marginal regression, is reasonably well-known and has been investigated previously as a quick-and-dirty approximation to the Lasso. For more detail, see: 'A Comparison of the Lasso and Marginal Regression', by Genovese et. al. I haven't looked at this paper in a while, but it should contribute to your theory, as they present results similar to yours, but in the context of approximating the Lasso in a linear regression setting.\\n\\nIt might be interesting to try a 'marginalized dropout' encoding at test time, in which each coefficient is scaled by it's probability of being among the top-k coefficients when the full coefficient set is subject to, e.g., 50% dropout. This would correspond to a simple rescaling of each coefficient by its location in the magnitude-sorted list. The true top-k would still be included fully in the encoding, while coefficients outside the true top-k would quickly shrink towards 0 as you move away from the true top-k. The shrinkage factors could be pre-computed for all possible sorted list positions based on the CDF of a Binomial distribution. If 'true' sparsity is desired, the shrunken coefficients could be hard-thresholded at some small epsilon. This would 'smooth' the encoding, perhaps removing aliasing effects that may occur with a hard threshold. This would be a fairly natural encoding to use if dictionary elements were also subject to (unmarginalized) dropout at training time. The additional computational cost would be trivial, as shrinkage and thresholding would be applied simultaneously to the magnitude-sorted coefficient list with a single element-wise vector product.\", \"paper_1\": \"http://www.cc.gatech.edu/~lebanon/papers/ssc_icml13.pdf\", \"paper_2\": \"http://www.stat.cmu.edu/~jiashun/Research/Year/Marginal.pdf\"}", "{\"reply\": \"Thank you David for raising this concern.\\nOMP combined with a dictionary update stage (as in Coates and Ng) is the conventional way of doing sparse coding and is discussed in the introduction of our paper. The problem with OMP is that it is very slow in practical situations. It uses k iterations and at each iteration, it needs to project a residual error onto the orthogonal complement of the subspace created by the span of the dictionary elements at that iteration. This is a very costly matrix operation as it requires k matrix inversions whenever we visit any training example. It works on small datasets such as MNIST but we were not able to get it working properly on larger datasets such as NORB, as it was too slow.\\n\\nIndeed the main motivation of proposing the k-sparse autoencoder is to address sparse coding for larger problem sizes, e.g., NORB, where conventional sparse coding approaches such as OMP are not practical. We have shown in the paper that the k-sparse autoencoder is an approximation of iterative hard thresholding (IHT). IHT is much faster than OMP, as it only needs a matrix multiplication in the sparse coding stage that can be efficiently implemented on a GPU. Also we have approximated the dictionary update stage with a single gradient descent step which makes the algorithm very fast. We have discussed in the paper that even with a very naive approximation of IHT, we achieve a fast sparse coding algorithm on GPU that performs as good as the state of the art. By tuning the number of iterations of the IHT algorithm, we can learn better dictionaries and trade off classification performance with computation time. 
Therefore, although both the k-sparse autoencoder and OMP-k enforce k-sparsity in the hidden representation, they are inherently different in both the dictionary learning and encoding stages.\\n\\nOur thresholding operator is also different from that of Coates and Ng's. The main difference is that we directly use the operator in both the training and test stages to gain speed-ups, while in the Coates and Ng's, the thresholding operator is only used at test time and training is performed using other algorithms, such as OMP-k, which are very slow. Another difference is that they use a fixed and pre-defined soft thresholding operator and do not have control over the sparsity level, while we are using a hard thresholding operator in which the threshold is adaptive and is equal to the k-th largest element of the input.\"}", "{\"review\": \"This paper should be citing: 'Smooth Sparse Coding via Marginal Regression\\nfor Learning Sparse Representations', by Krishnakumar Balasubramanian, Kai Yu, and Guy Lebanon. In their work, they use a sparse coding subroutine effectively identical to your sparse coding method. The only difference in their encoding step is that they threshold the magnitude-sorted set of potential coefficients by cumulative L1 norm rather than cumulative L0 norm (a minor difference). They also add a regularization term to their objective designed to approximately minimize coherence of the learned dictionary, which may be worth trying as an addition to your current approach.\\n\\nThis general class of techniques, i.e. marginal regression, is reasonably well-known and has been investigated previously as a quick-and-dirty approximation to the Lasso. For more detail, see: 'A Comparison of the Lasso and Marginal Regression', by Genovese et. al. I haven't looked at this paper in a while, but it should contribute to your theory, as they present results similar to yours, but in the context of approximating the Lasso in a linear regression setting.\\n\\nIt might be interesting to try a 'marginalized dropout' encoding at test time, in which each coefficient is scaled by it's probability of being among the top-k coefficients when the full coefficient set is subject to, e.g., 50% dropout. This would correspond to a simple rescaling of each coefficient by its location in the magnitude-sorted list. The true top-k would still be included fully in the encoding, while coefficients outside the true top-k would quickly shrink towards 0 as you move away from the true top-k. The shrinkage factors could be pre-computed for all possible sorted list positions based on the CDF of a Binomial distribution. If 'true' sparsity is desired, the shrunken coefficients could be hard-thresholded at some small epsilon. This would 'smooth' the encoding, perhaps removing aliasing effects that may occur with a hard threshold. This would be a fairly natural encoding to use if dictionary elements were also subject to (unmarginalized) dropout at training time. The additional computational cost would be trivial, as shrinkage and thresholding would be applied simultaneously to the magnitude-sorted coefficient list with a single element-wise vector product.\", \"paper_1\": \"http://www.cc.gatech.edu/~lebanon/papers/ssc_icml13.pdf\", \"paper_2\": \"http://www.stat.cmu.edu/~jiashun/Research/Year/Marginal.pdf\"}" ] }
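A self-contained NumPy sketch of the kind of k-sparse autoencoder debated in the thread above, including the scheduled k used to avoid dead hidden units (an illustration only, not the authors' algorithm or released code; the tied weights, plain per-example SGD, initialisation scale and the exact shape of the schedule are assumptions):

import numpy as np

def train_ksparse_autoencoder(X, n_hidden, k_final, n_epochs=100, lr=0.01, seed=0):
    # Encoder: z = top_k(W.T @ x + b); decoder: x_hat = W @ z + c (c plays the role
    # of the b' in the algorithm-box question above); loss: 0.5 * ||x_hat - x||^2.
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = 0.01 * rng.standard_normal((d, n_hidden))
    b = np.zeros(n_hidden)
    c = np.zeros(d)
    k_start = max(k_final, n_hidden // 2)
    for epoch in range(n_epochs):
        # k decays linearly over the first half of the epochs, then stays at k_final.
        t = min(1.0, epoch / (0.5 * n_epochs))
        k = int(round((1 - t) * k_start + t * k_final))
        for x in X:
            a = W.T @ x + b
            idx = np.argsort(np.abs(a))[-k:]      # support of the k largest activations
            z = np.zeros(n_hidden)
            z[idx] = a[idx]
            err = W @ z + c - x                   # gradient of the squared error w.r.t. x_hat
            g = err @ W[:, idx]                   # backprop through the active units only
            W[:, idx] -= lr * (np.outer(err, z[idx]) + np.outer(x, g))
            b[idx] -= lr * g
            c -= lr * err
    return W, b, c

The schedule mirrors the description in the replies above: starting with a large k lets almost all filters receive gradient early in training, and the support then narrows as k shrinks to its final value.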
__Jk_HAdtfK5W
Learning to encode motion using spatio-temporal synchrony
[ "Kishore Reddy Konda", "Roland Memisevic", "Vincent Michalski" ]
We consider the task of learning to extract motion from videos. To this end, we show that the detection of spatial transformations can be viewed as the detection of synchrony between the image sequence and a sequence of features undergoing the motion we wish to detect. We show that learning about synchrony is possible using very fast, local learning rules, by introducing multiplicative 'gating' interactions between hidden units across frames. This makes it possible to achieve competitive performance in a wide variety of motion estimation tasks, using a small fraction of the time required to learn features, and to outperform hand-crafted spatio-temporal features by a large margin. We also show how learning about synchrony can be viewed as performing greedy parameter estimation in the well-known motion energy model.
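The abstract's central idea, treating motion estimation as the detection of synchrony through multiplicative 'gating' interactions between feature responses on successive frames, can be illustrated with a toy snippet (editorial illustration only; the two filter banks U and V, their shapes and the plain inner products are assumptions rather than the paper's architecture):

import numpy as np

def gated_responses(frame_t, frame_tp1, U, V):
    # U, V: (n_pixels, n_features) filter banks applied to two consecutive frames.
    # The elementwise product is large only when corresponding features fire
    # together, which is the "synchrony" a multiplicative gating unit detects.
    f_t   = U.T @ frame_t.ravel()
    f_tp1 = V.T @ frame_tp1.ravel()
    return f_t * f_tp1

The product acts like a soft logical AND on the two filter responses, something a single linear unit applied to the concatenated frames cannot express, which is the motivation for the multiplicative interaction.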
[ "synchrony", "motion", "features", "detection", "possible", "task", "videos", "end", "spatial transformations", "image sequence" ]
submitted, no decision
https://openreview.net/pdf?id=__Jk_HAdtfK5W
https://openreview.net/forum?id=__Jk_HAdtfK5W
ICLR.cc/2014/conference
2014
{ "note_id": [ "6m_xmeudQa47M", "55DTZ7VOi9sAR", "G3BpoZVjqn3jE", "8W5tCE6_t08Gz", "WUNsa6OJq07iF", "nOAHIb1E0y2d-", "EEWiEwC6o3TY4" ], "note_type": [ "comment", "review", "review", "comment", "review", "review", "comment" ], "note_created": [ 1392129720000, 1392130020000, 1390089480000, 1392129600000, 1391111880000, 1391867580000, 1392129120000 ], "note_signatures": [ [ "kishore reddy" ], [ "kishore reddy" ], [ "anonymous reviewer 21fc" ], [ "kishore reddy" ], [ "anonymous reviewer 951b" ], [ "anonymous reviewer 4272" ], [ "kishore reddy" ] ], "structured_content_str": [ "{\"reply\": [\"The formula (20) looks remarkably similar the paper you reference (and that beats your performance) - sec 3.1 of 'Learning hierarchical spatio-temporal features for action recognition with independent subspace analysis'. The only difference is mainly that you use sigmoid nonlinearity and they use square root.\"], \"there_is_an_additional_difference\": \"Inside the sigmoid there is a square, whereas it is a sum over squares in Le et al. In other words, for them, learning of features is done in the presence of a pooling layer, whereas here, learning is done greedily, layer-by-layer. Running the code published by Le et al., incidentally, yields the same performance as our approach. Our method is significantly faster because it is a type of Kmeans.\\n\\n- 4.2 - which auto encoder you are comparing to - there is a large number of them. In particular what if you just used k-means on concatenated frames? That would also detect 'coincidence' between frames. It would also train extremely quickly (and then use smoother version during inference as in Coates et al). \\n\\nWe now report the K-means result on concatenated frames in the newest version. It shows a significantly lower performance. As for the autoencoder, it was a contractive autoencoder, also with detailed result now in the newest version.\"}", "{\"review\": \"We thank all the reviewers for their valuable comments. We uploaded a new version of the paper with some of the suggested modifications.\"}", "{\"title\": \"review of Learning to encode motion using spatio-temporal synchrony\", \"review\": \"One might think that everything has been said about motion estimation and motion energies, though refreshing ideas are always welcome even in subjects with thousands (ten of thousands?) papers.\\n\\nThe paper overall presentation and discussion are very clear and friendly.\\n\\nWhen discussing multiplicative (2.3), isn't this saying we should be working with additive but in the log space? log space is very frequently used in image analysis.\\n\\nThe locality property is very interesting and as the authors claim, very powerful for computation. I believe is worth investigating for other applications and models.\\n\\nNot all the mentioned results are the state-of-the-art for the used datasets, but this is not critical.\\n\\nDoes it make sense to report results for optical flow standard sets?\\n\\nWhile I don't believe the paper is revolutionary, it is nice to read and has some interesting insights.\"}", "{\"reply\": \"- I would explain the resulting model as simply 'squared spatio-temporal Gabor filters followed by k-means pooling.'\\n\\nWe also show that, unlike for still images, one does not get Gabor features, unless squaring activations are used in the hidden units during learning (which is equivalent to using linear coefficients for computing reconstructions during learning). 
This also explains why some variants of ICA and energy models (like ISA) can work on videos, but standard autoencoders or K-means do not. \\n\\n- Olshausen 2003, Learning Sparse, Overcomplete Representations of Time-Varying Natural Images \\n\\nWe included this reference in the newest version. We also extended the introduction to discuss this and related papers (3rd and following paragraph in the introduction). \\n\\n- confused about some details on the model architecture used for SAE and SK-means\\n\\nWe rewrote the paper to make this clearer. SAE is now discussed in a separate appendix, and SKmeans in the main text. \\n\\n-You argue for even-symmetric non-linearities, but couldn't a rectified linear unit also produce the desired effect? ; however it would not associate the contrast reversal, which may be the better inductive bias. \\n\\nWe agree. \\n\\n-In the current form I remain uncertain what the benefit is of the proposed approach over the motion energy approach or ISA. K-means does pool over more than 2 features, but this is to be expected given the architecture. Can you show that 2-d subspaces are sub-optimal in terms of performance? You do show that the covAE underperforms, why? Is the Le et al. 2011 pipeline giving the major benefit? \\n\\nWe did find that the covAE underperforms also in a simple motion direction classification task. We believe this is due to the learning criterion, which forces the model to learn invariances that may be good for reconstruction but not necessarily good for the subsequent classification task. The convolutional pipeline provides around 3% accuracy over a simple bag-of-words pipeline. In case of covAE it does not provide such a benefit. \\n\\n-In my opinion, the most novel contribution of the paper is the 'synchrony k-means' algorithm. As far as I am aware this is a nice extension of the fast convergence results shown by Coates et al.. This general framework appears promising for the related problems of 'relating images.' Therefore it could be the focus of the paper, with the SAE algorithm used merely as a control or motivation. \\n\\nWe agree, see previous comment. \\n\\n-I am also skeptical about performance measurements of activity recognition as proxy for motion encoding. These datasets likely suffer from various confounds not related to motion encoding. Maybe a simpler test would be beneficial, scientifically?\\n\\nWe agree that activity recognition, and a fixed pipeline to plug in learned features, may not be a perfect proxy for the quality of a motion encoding, but recognition performance is probably correlated with it. And it does nicely demonstrate possible practical benefits of this work, because our features learn much faster than existing models, and speed is increasingly important due to the sheer size of video datasets.\"}", "{\"title\": \"review of Learning to encode motion using spatio-temporal synchrony\", \"review\": \"The paper introduces a variation of common mathematical forms for encoding motion. The basic approach is to encode first-order motion information through a multiplication of two filter outputs. This approach is closely related to the motion-energy model and the cross-correlation model.\\n\\nThere is a lot of math leading up to a rather simple transform (after learning). I would explain the resulting model as simply 'squared spatio-temporal Gabor filters followed by k-means pooling.' 
The squared outputs are the components of the energy-based model (quadrature pair squared spatio-temporal Gabor filters are added in the energy based model) and k-means is used to pool first-order motion selective responses.\\n\\nThe more general problem of motion encoding needs to address the goal of motion selectivity and form or pattern invariance/tolerance. As the authors point out, their squared outputs do not solve the motion encoding problem and the following pooling layer is intended to provide the solution. The simplest next step would be to combine (add) two outputs to increase form invariance (this is the motion energy model). Slightly more complex would be to group multiple squared outputs (these are the ISA models applied to spatio-temporal input). The authors propose to use k-means for the pooling operation.\\n\\nThe results are validated on common action recognition computer vision databases. I do not find the results surprising. Given the very close similarities of the proposed algorithm to the Le et al. ISA algorithm it is not at all surprising that the results are nearly identical. The training time improvements are also not surprising given the results from Coates et al. 2011.\", \"this_paper_should_probably_be_cited_in_regards_to_the_learned_filters\": \"Olshausen 2003, Learning Sparse, Overcomplete Representations of Time-Varying Natural Images\\nThere are other results in the ICA community that should be cited, as they give similar results.\\n\\nI am confused about some details on the model architecture used for SAE and SK-means. The beginning of section 4 appears to only describe the SAE architecture. What about the Sk-means architecture? and how does the k-means in section 3.4 relate to the models evaluated in section 4? My apologies if this is discussed somewhere in the text, I just can't find it in the places I expect.\\n\\nHere are some suggestions/comments:\\nThe exposition is useful for bringing together many of motion models in the literature.\\n\\nYou argue for even-symmetric non-linearities, but couldn't a rectified linear unit also produce the desired effect? ; however it would not associate the contrast reversal, which may be the better inductive bias.\\n\\nIn the current form I remain uncertain what the benefit is of the proposed approach over the motion energy approach or ISA. K-means does pool over more than 2 features, but this is to be expected given the architecture. Can you show that 2-d subspaces are sub-optimal in terms of performance? You do show that the covAE underperforms, why? Is the Le et al. 2011 pipeline giving the major benefit?\\n\\nIn my opinion, the most novel contribution of the paper is the 'synchrony k-means' algorithm. As far as I am aware this is a nice extension of the fast convergence results shown by Coates et al.. This general framework appears promising for the related problems of 'relating images.' Therefore it could be the focus of the paper, with the SAE algorithm used merely as a control or motivation.\\n\\nI am also skeptical about performance measurements of activity recognition as proxy for motion encoding. These datasets likely suffer from various confounds not related to motion encoding. Maybe a simpler test would be beneficial, scientifically?\"}", "{\"title\": \"review of Learning to encode motion using spatio-temporal synchrony\", \"review\": \"The paper introduces an algorithm to learn to detect motion from video data using coincidence of features between consecutive frames. 
While there are novel elements, the basic ideas are similar to papers of the same authors and the algorithm by other authors referenced in this paper.\", \"details\": [\"The formula (20) looks remarkably similar the paper you reference (and that beats your performance) - sec 3.1 of 'Learning hierarchical spatio-temporal features for action recognition with independent subspace analysis'. The only difference is mainly that you use sigmoid nonlinearity and they use square root.\", \"4.2 - which auto encoder you are comparing to - there is a large number of them. In particular what if you just used k-means on concatenated frames? That would also detect 'coincidence' between frames. It would also train extremely quickly (and then use smoother version during inference as in Coates et al).\", \"In table 1 - is this on sequence of frames or pairs? (3.1,3.2 vs 3.3?)\"]}", "{\"reply\": \"-When discussing multiplicative (2.3), isn't this saying we should be working with additive but in the log space? log space is very frequently used in image analysis.\\n\\nYes, that's true. Unfortunately, for learning, one would need to undo any log-transforms to compute reconstructions, so this relationship does not immediately translate into a practical algorithm. \\n\\n-Does it make sense to report results for optical flow standard sets?\\n\\nYes, it may make sense, though it would require a clean-up stage (like an MRF) to be competitive in benchmarks. The model is more general than optical flow, in that it allows pixels to have multiple target positions (like in expansions or transparency, for example). Though is likely to hurt not help in an optical flow benchmark.\"}" ] }
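The k-means-on-concatenated-frames baseline raised in this thread is easy to spell out (an illustrative sketch only; the patch layout, the number of centroids and the 'triangle' soft encoding variant from Coates et al. are assumptions):

import numpy as np
from sklearn.cluster import KMeans

def kmeans_concat_frames(patches, n_centroids=400, seed=0):
    # patches: (n_patches, patch_dim * n_frames), i.e. small windows of consecutive
    # frames concatenated into single vectors.
    km = KMeans(n_clusters=n_centroids, n_init=10, random_state=seed).fit(patches)
    C = km.cluster_centers_

    def encode(P):
        # Smoother encoding at inference time: activation is how much closer a patch
        # is to a centroid than to the average centroid distance ("triangle" coding).
        d = np.linalg.norm(P[:, None, :] - C[None, :, :], axis=2)
        return np.maximum(0.0, d.mean(axis=1, keepdims=True) - d)

    return C, encode

This is the control the reviewer asked for; the authors state in their reply that the updated paper reports it and that it performs significantly worse than the synchrony-based features.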
UYmwU4C1wZi16
Feature Graph Architectures
[ "Richard Davis", "Sanjay Chawla", "Philip Leong" ]
In this article we propose feature graph architectures (FGA): deep learning systems that employ a structured initialisation and training method, based on a feature graph, to achieve improved generalisation performance compared with a standard shallow architecture. The goal is to explore alternative perspectives on the problem of deep network training. We evaluate FGA performance for deep SVMs on several experimental datasets, and show how generalisation and stability results may be derived for these models. We describe the effect of permutations on model accuracy, and give a criterion for the optimal permutation in terms of feature correlations. The experimental results show that the algorithm produces robust and significant test-set improvements over a standard shallow SVM training method for a range of datasets. These gains are achieved with a moderate increase in time complexity.
[ "feature graph", "feature graph architectures", "article", "fga", "deep", "systems", "structured initialisation", "training", "improved generalisation performance", "standard shallow architecture" ]
submitted, no decision
https://openreview.net/pdf?id=UYmwU4C1wZi16
https://openreview.net/forum?id=UYmwU4C1wZi16
ICLR.cc/2014/conference
2014
{ "note_id": [ "aK7XSqgON9aOi", "gWPx76RhvA7ue", "2bIiNpLOMP2fT", "v23y6jYd8a2B5", "DeInCVqfH-CS9", "stQhRcrjlysjD" ], "note_type": [ "review", "review", "review", "review", "review", "review" ], "note_created": [ 1391403660000, 1391999400000, 1391832240000, 1392121260000, 1391703720000, 1391835720000 ], "note_signatures": [ [ "anonymous reviewer bbb2" ], [ "anonymous reviewer 1e36" ], [ "Richard Davis" ], [ "Richard Davis" ], [ "anonymous reviewer 07bc" ], [ "Richard Davis" ] ], "structured_content_str": [ "{\"title\": \"review of Feature Graph Architectures\", \"review\": [\"A brief summary of the paper's contributions, in the context of prior work.\", \"Papers suggests to stack multiple machine learning modules on top of each other. Applies it to SVMs.\", \"An assessment of novelty and quality.\", \"Not novel, quality is low.\", \"A list of pros and cons (reasons to accept/reject).\"], \"pros\": \"Exploration of non-standard deep architectures. Good direction of research.\", \"cons\": \"Paper suggests randomly establish group of features to find correlated groups. This task might be extremely expensive and infeasible. \\n\\nVery poor experiments. One experiment on synthetic data, which is trivial to learn (sum xi) ^ 2, and others on unknown datasets.\\n\\nIt is unclear where is non-linearity coming from. If this are just SVMs stacked on top of each other, and there is no non-linearity in between ? Then entire procedure is just a linear classifier with regularization. \\n\\nPaper doesn\\u2019t state what is optimization objective of the entire system. It just brings an algorithm.\\n\\nDimensionality of data is extremely small ~25 dims.\"}", "{\"title\": \"review of Feature Graph Architectures\", \"review\": \"The authors propose a hierarchical SVM approach for learning. The method uses the prediction of lower layer SVMs as input to the higher-layer SVMs. Adaptive methods are proposed where lower nodes are updated incrementally in order to improve the overall accuracy.\\n\\nI could not find what the proposed algorithm exactly does and what is the exact function implemented by the feature graph architecture. I assume that the authors use linear SVMs for regression. Then, should not a combination of these linear SVMs also be linear? Or is a nonlinear kernel being used? The overall lack of specification of the proposed approach, makes it difficult to compare with existing alternatives (hierarchical kernels, boosting, neural networks).\\n\\nThe paper is rich content-wise, including generalization bounds and stability analysis. Authors derive a generalization bound for the feature graph architecture, showing that surplus error grows linearly with the number of modified nodes in the graph. The generalization bound does not seem to be very tight, as authors show empirically that generalization is on par with basic SVMs.\\n\\nThe authors propose maximizing feature correlation in the first layer as a heuristic to construct the feature hierarchy. This is generally a good idea, but I am wondering whether having only one output per group of correlated features is sufficient in order to learn complex models.\"}", "{\"review\": \"A few comments in response to the above points:\\n\\n1. 
We think the novel aspects are a) the use of a deep objective function, in which each node is separately optimized b) the additional objective that the global training error must improve to retain node modifications, c) the initialization of the deep SVM to the coefficients of a shallow SVM which guarantees the starting point is a good one, and d) for deep learning in regression, feature learning at a node level can be effectively done by training the node to the target.\\n\\nWe are not claiming that the novelty lies in stacking machine learning modules, which is a commonplace technique. Rather, the main idea is that the deep objective function can produce significantly better results even with a deep SVM using linear kernels (which as you correctly point out is just a linear function), which we feel is an interesting result. \\n\\n2. Learning a non-linear function such as (sum x_i)^2 using linear building blocks is non-trivial, and the method significantly outperformed the other competing methods, some of which were non-linear. We tested other functions with more terms and found similar results.\\n\\n3. We plan to test the method on a much wider range of datasets but the ones we tested are regression datasets from a well-known repository,\", \"http\": \"//www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/. We used datasets of dimension ~25 to show that the technique can improve over a standard SVM even for datasets of relatively small dimension.\\n\\n5. The method did show some benefits on a few classification tests but we decided to focus on regression first. \\n\\n6. Finding correlated groups of features can be done quite easily using pairwise correlation tests. Its not necessary to find the optimal permutation, significant gains over the shallow SVM can be achieved even with a fairly good permutation.\\n\\n7. overline denotes a mean over the node outputs. Each SVM is trained using the output of the previous layer, scaling the target so that it has the correct range using the output mean from the current node.\\n\\n8. Adding normalized errors would help with the presentation and can be done easily.\\n\\n9. We achieved significant gains in out-sample error performance using the technique.\\n\\nIf you have time we would be very grateful to hear your thoughts on these areas.\"}", "{\"review\": \"In response to the above comments:\\n1. The main idea is a mechanism which forms a hierarchy of new features which are decorrelated successively from one layer to the next. This mechanism is the reason why the deep architecture is able to decorrelate features better than using PCA in combination with a simple SVM.\", \"we_thought_the_novel_aspects_were_the_following\": \"a) the use of a deep objective function, in which each node is separately optimized \\nb) the additional objective that the global training error must improve to retain node modifications, \\nc) the initialization of the deep SVM to the coefficients of a shallow SVM which guarantees the starting point is a good one, and \\nd) for deep learning in regression, feature learning at a node level can be effectively done by training the node to the target.\\n \\nWe are not claiming that the novelty lies in stacking machine learning modules, which is a commonplace technique. Rather, the multi-layer objective function can produce significantly better results even using layers of linear SVMs (which overall is just a linear function), which we thought was an interesting result.\\n\\n2. 
We tested regression datasets from a well-known repository, http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/. \\n\\n3. Learning a non-linear function such as (sum x_i)^2 using linear building blocks is non-trivial, and the method significantly outperformed the other competing methods, some of which were non-linear. We tested other functions with more terms and found similar results. \\n\\n4. We used datasets of dimension ~25 to show that the technique can improve over a standard SVM even for datasets of relatively small dimension. \\n\\n5. The method did show some benefits on a few classification tests but we decided to focus on regression first. \\n\\n6. Finding correlated groups of features can be done quite easily using pairwise correlation tests. Its not necessary to find the optimal permutation, significant gains over the shallow SVM can be achieved even with a fairly good permutation. \\n\\n7. overline denotes a mean over the node outputs. Each SVM is trained using the output of the previous layer, scaling the target so that it has the correct range using the output mean from the current node. \\n\\n8. Adding normalized errors would help with the presentation and can be done easily. \\n\\n9. We achieved significant gains in out-sample error performance using the technique.\\n\\n10. The gains from the method over a standard shallow SVM are dataset-dependent. If the data have completely uncorrelated features, the method will not improve over the shallow SVM. However if some features are correlated, as is usually the case, the method is able to improve over the shallow SVM using the architecture.\\n\\n11. The method gives a way to move from a shallow to a deep model incrementally so that at each stage the model improves. It has multiple outputs per group of correlated features, since there are multiple layers in the model. This multilayer structure allows for more complex models to be built.\"}", "{\"title\": \"review of Feature Graph Architectures\", \"review\": \"This paper presents a tree-structured architecture whose leafs are\\nSVMs on subsets of attributes and whose internal nodes are SVM taking\\nas input predictions computed by the children.\", \"the_generality_of_the_method_is_unclear\": \"you seem to implicitly assume\\nregression problems but this is never explicitly mentioned. Could the\\nmethod work for classification?\\n\\nThe significance seems modest to me. First, it is unclear in what\\nsense the algorithm performs feature learning as the intermediate\\nlayers contain in facts predictions. Additionally, all the experiments\\nuse a linear kernel and thus from what I understand from Section 3 the\\ntree in Figure 1 computes a linear function of its input. Clearly I\\nmust be missing something otherwise this would not be a deep learning\\nsystem at all. But the presentation should be improved to clarify\\nthis. The notation is also confusing, for example does overline{y}\\ndenote a mean? Over what quantities exactly (targets or predictions)?\\nPseudo-code 2 seem to contradict Figure 1 as each SVM is trained on\\ninputs x (again probably a notational issue).\\n\\nThe experiments are only preliminary and based on small data sets. The\\nreported R and hat{R} seem to be unnormalized, why?\"}", "{\"review\": \"A few comments in response to the above points:\\n\\n1. 
We think the novel aspects are a) the use of a deep objective function, in which each node is separately optimized b) the additional objective that the global training error must improve to retain node modifications, c) the initialization of the deep SVM to the coefficients of a shallow SVM which guarantees the starting point is a good one, and d) for deep learning in regression, feature learning at a node level can be effectively done by training the node to the target. We are not claiming that the novelty lies in stacking machine learning modules, which is a commonplace technique. Rather, the main idea is that the deep objective function can produce significantly better results even with a deep SVM using linear kernels (which as you correctly point out is just a linear function), which we feel is an interesting result. \\n\\n2. Learning a non-linear function such as (sum x_i)^2 using linear building blocks is non-trivial, and the method significantly outperformed the other competing methods, some of which were non-linear. We tested other functions with more terms and found similar results. \\n\\n3. We plan to test the method on a much wider range of datasets but the ones we tested are regression datasets from a well-known repository, http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/. \\n\\n4. We used datasets of dimension ~25 to show that the technique can improve over a standard SVM even for datasets of relatively small dimension. \\n\\n5. The method did show some benefits on a few classification tests but we decided to focus on regression first. \\n\\n6. Finding correlated groups of features can be done quite easily using pairwise correlation tests. Its not necessary to find the optimal permutation, significant gains over the shallow SVM can be achieved even with a fairly good permutation. \\n\\n7. overline denotes a mean over the node outputs. Each SVM is trained using the output of the previous layer, scaling the target so that it has the \\ncorrect range using the output mean from the current node. \\n\\n8. Adding normalized errors would help with the presentation and can be done easily. \\n\\n9. We achieved significant gains in out-sample error performance using the technique. \\n\\nIf you have time we would be very grateful to hear your thoughts on these areas.\"}" ] }
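To make the architecture debated in the reviews above concrete, the following is a minimal sketch of the two-layer wiring they describe: features grouped by pairwise correlation, one linear regressor per group trained directly on the target, and a second-layer linear regressor on the per-group predictions. This is not the authors' code; plain ridge regression stands in for the linear SVMs, and the synthetic (sum x_i)^2 data, pairwise grouping and regularisation constant are illustrative assumptions. The sketch shows the wiring only, not the paper's reported gains (as the reviewers note, a composition of linear models remains linear).

```python
# Minimal sketch of the two-layer wiring discussed in the reviews above (not the
# authors' code). Assumptions: ridge regression stands in for the linear SVMs; the
# synthetic y = (sum x_i)^2 data, pairwise feature grouping and lambda are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def ridge_fit(X, y, lam=1e-2):
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])          # append a bias column
    return np.linalg.solve(Xb.T @ Xb + lam * np.eye(Xb.shape[1]), Xb.T @ y)

def ridge_predict(X, w):
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    return Xb @ w

# Synthetic regression data similar to the (sum x_i)^2 test mentioned in the replies.
X = rng.normal(size=(400, 24))
y = X.sum(axis=1) ** 2 + 0.1 * rng.normal(size=400)
Xtr, Xte, ytr, yte = X[:300], X[300:], y[:300], y[300:]

# Group features by pairwise correlation: greedily pair each feature with its most
# correlated unused partner (a cheap stand-in for choosing a good permutation).
C = np.abs(np.corrcoef(Xtr, rowvar=False))
np.fill_diagonal(C, -1.0)
unused, groups = set(range(X.shape[1])), []
while unused:
    i = unused.pop()
    if not unused:
        groups.append([i])
        break
    j = max(unused, key=lambda k: C[i, k])
    unused.remove(j)
    groups.append([i, j])

# Layer 1: one linear regressor per feature group, each trained directly on the target.
layer1 = [ridge_fit(Xtr[:, g], ytr) for g in groups]
H_tr = np.column_stack([ridge_predict(Xtr[:, g], w) for g, w in zip(groups, layer1)])
H_te = np.column_stack([ridge_predict(Xte[:, g], w) for g, w in zip(groups, layer1)])

# Layer 2: a linear regressor on the per-group predictions; compare with a shallow model.
w2 = ridge_fit(H_tr, ytr)
w_shallow = ridge_fit(Xtr, ytr)
print("shallow test MSE  :", np.mean((ridge_predict(Xte, w_shallow) - yte) ** 2))
print("two-layer test MSE:", np.mean((ridge_predict(H_te, w2) - yte) ** 2))
```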
ylE6yojDR5yqX
Network In Network
[ "Min Lin", "Qiang Chen", "Shuicheng Yan" ]
We propose a novel network structure called 'Network In Network' (NIN) to enhance model discriminability within local receptive fields. The conventional convolutional layer uses linear filters followed by a nonlinear activation function to scan the input. Instead, we build micro neural networks with more complex structures to handle the variation within local receptive fields. We instantiate the micro neural network with a nonlinear multilayer structure, which is a potent function approximator. The feature maps are obtained by sliding the micro networks over the input in a manner similar to CNN and are then fed into the next layer. The deep NIN is thus implemented as a stack of multiple sliding micro neural networks. With the enhanced local modeling provided by the micro networks, we are able to use global average pooling over feature maps in the classification layer, which is more interpretable and less prone to overfitting than traditional fully connected layers. We demonstrate state-of-the-art classification performance with NIN on the CIFAR-10, CIFAR-100 and SVHN datasets.
[ "network", "nin", "local receptive fields", "input", "micro neural networks", "feature maps", "network network", "novel network structure", "model discriminability", "conventional convolutional layer" ]
submitted, no decision
https://openreview.net/pdf?id=ylE6yojDR5yqX
https://openreview.net/forum?id=ylE6yojDR5yqX
ICLR.cc/2014/conference
2014
{ "note_id": [ "uR-xqIMBxdyAb", "mma8mUbFZTmHO", "qq53tjQ7E2l0e", "C2LaC6U71vc7A", "of5C4EXSDnUSQ", "0Xn3MvOjNPuwE", "uV5Nz0Zn3dzni", "-9Bh-f15XcPtR", "ttdA6p-ZAy2v8", "BawVzlgW2yDbX", "eU5B1c8wfT1-H", "d_-SeKVCSQ5i0", "eeB4SMv0rsy1X", "RGuEVrAvQgNLZ", "jsD0sli7uUsvW" ], "note_type": [ "comment", "review", "comment", "comment", "review", "review", "review", "review", "comment", "comment", "comment", "comment", "review", "comment", "comment" ], "note_created": [ 1392668040000, 1392859320000, 1397728440000, 1392668160000, 1391225760000, 1391844540000, 1392026460000, 1392601560000, 1392667980000, 1393585440000, 1393766880000, 1393888440000, 1393555560000, 1392668100000, 1393890420000 ], "note_signatures": [ [ "Min Lin" ], [ "anonymous reviewer bae9" ], [ "Min Lin" ], [ "Min Lin" ], [ "anonymous reviewer 5205" ], [ "anonymous reviewer 5dc9" ], [ "Dong-Hyun Lee" ], [ "Çağlar Gülçehre" ], [ "Min Lin" ], [ "Min Lin" ], [ "Min Lin" ], [ "anonymous reviewer 5205" ], [ "Jost Tobias Springenberg" ], [ "Min Lin" ], [ "anonymous reviewer 5205" ] ], "structured_content_str": [ "{\"reply\": \"We fully agree that NIN should be tested on larger images such as imagenet.\\nWe've got reasonable preliminary results on imagenet, but since the performance of maxout on imagenet is unknown, we didn't include it in the paper.\\n\\nAs is mentioned in our reply to Anonymous 5205, we think how dropout should be applied on a model is not yet fully understood; it makes another story itself.\"}", "{\"title\": \"review of Network In Network\", \"review\": \"Authors propose the following modification to standard architecture:\", \"replace\": \"convolution -> relu\\nwith\\nconvolution -> relu -> convolution (1x1 filter) -> relu\\n\\nAdditionally, instead of using fully connected layers, the depth of the last conv layer is the same as number of classes, which they average over x,y position. This generates a vector of per-class scores. \\n\\nThere's a number of internal inconsistencies and omitted details which make reproducing the results impossible. Those issues must be fixed to be considered for acceptance. They do bring up one intriguing idea which gives a novel approach to localization.\\n\\n- Authors would be better off using standard terminology, like I did above, that makes reading the paper easier.\\n\\n- Some space is unnecessarily taken up by issues that are irrelevant/speculative, like discussing that this architecture allows for feature filters that are 'universal function approximators.' Do we have any evidence this is actually needed for performance?\\n\\n- In section 3.2 they say that last averaged layer is fed into softmax, but this contradicts Figure 4 where it seems that last layer features actually correspond to classes, and no softmax is needed. I assumed the later was the intention.\\n\\n- Following sentence is unclear, consider expanding or removing:\\n\\n'A vectorized view of the global average pooling is that the output of the last mlpconv layer is forced into orthogonal subspaces for different categories of inputs'\\n\\n- Most serious shortcoming of this paper is lack of detailed explanation of architecture. All I have to go on is the picture in Figure 2, which looks like 3 spatial pooling layers and 6 convolutional layers. Authors need to provide following information for each layer -- filter size/pooling size/stride/number of features. Ideally it would be in a succinct format like Table 2 of 'OverFeat' paper (1312.6229). 
We have implemented NIN idea on our own network we used for SVHN and got worse results. Since detailed architecture spec is missing, I can't tell if it's the problem of the idea or the particulars of the network we used.\\n\\nOne intriguing/promising idea they bring up is the idea of using averaging instead of fully connected layers. I expected this to allow one to be able to localize the object by looking at the outputs of the last conv layer before averaging, which indeed seems like the case from Figure 4.\"}", "{\"reply\": \"I'm sorry that I missed this comment, but fortunately I updated on March 4th. Tmr I'll leave Banff, really had a good time here enjoying the talks.\"}", "{\"reply\": \"1. Thanks for the information, we'll cite those papers in the coming version.\\n2. The hyper-parameters of the models are in our supplementary material and will soon be online.\\n3. NIN has a smaller number of parameters than CNN. However, it has lots of nodes thus lots of computation. But since the nodes in NIN are fully parallel, it is not a problem if we have lots of computing nodes just as human brain does.\"}", "{\"title\": \"review of Network In Network\", \"review\": \"Summary of contributions:\\nProposes a new activation function for backprop nets. Advocates using global mean pooling instead of densely connected layers at the output of convolutional nets.\", \"novelty\": \"moderate\", \"quality\": \"moderate\", \"pros\": \"-Very impressive results on CIFAR-10 and CIFAR-100\\n\\t-Acceptable results on SVHN and MNIST\\n\\t-Experiments distinguish between performance improvements due to NIN structure and performance improvements due to global average pooling\", \"cons\": \"-Explanation of why NIN works well doesn\\u2019t make a lot of sense\\n\\nI suspect that NIN\\u2019s performance has more to do with the way you apply dropout to the model, rather than the explanations you give in the paper. I elaborate more below in the detailed comments.\\n\\nDid you ever try NIN without dropout?\\n\\nMaxout without dropout generally does not work all that well, except in cases where each maxout unit has few filters, or the dataset size is very large. I suspect your NIN units don\\u2019t work well without dropout either, unless the micro-net is very small or you have a lot of data. I find it very weird that you don\\u2019t explore how well NIN works without dropout, and that your explanation of NIN\\u2019s performance doesn\\u2019t involve dropout at all.\\n\\nThis paper has strong results but I think a lot of the presentation is misleading. It should be published after being edited to take out some of the less certain stuff from the explanations. It could be a really great paper if you had a better story for why NIN works well, including experiments to back up this story. I suspect the story you have now is wrong though, and I suspect the correct story involves the interaction between NIN and dropout.\\n\\nI\\u2019ve hear Geoff Hinton proposed using some kind of unit similar to this during a talk at the CIFAR summer school this year. I\\u2019ll ask one of the summer school students to comment on this paper. 
I don\\u2019t think this subtracts from your originality but it might be worth acknowledging his talk, depending on what the summer school student says.\", \"detailed_comments\": \"\", \"abstract\": \"I don\\u2019t understand what it means \\u201cto enhance the model discriminability for local receptive fields.\\u201d\\n\\t\\n\\nIntroduction\", \"paragraph_1\": \"I don\\u2019t think we can confidently say that convolutional net features are generally related to binary discrimination of whether a specific feature is present, or that they are related to probabilities. For example, some of them might be related to measurements (\\u201chow red is this patch?\\u201d rather than \\u201cwhat is the probability that this patch is red?\\u201d) In general, our knowledge of what features are doing is fairly primitive, informal, and ad hoc. Note that the other ICLR submission \\u201cIntriguing Properties of Neural Networks\\u201d has some strong arguments against the idea of looking at the meaning of individual features in isolutian, or interpreting them as probabilistic detectors. Basically I think you could describe conv nets in the intro without committing to these less well-established ideas about how conv nets work.\", \"paragraph_2\": \"I understand that many interesting features can\\u2019t be detected by a GLM. But why does the first layer of the global architecture need to be a nonlinear feature detector? Your NIN architecture still is built out of GLM primitives. It seems like it\\u2019s a bit arbitrary which things you say can be linear versus non-linear. i.e., why does it matter that you group all of the functionality of the micro-networks and say that together those are non-linear? Couldn\\u2019t we just group the first two layers of a standard deep network and say they form a non-linear layer? Can\\u2019t we derive a NIN layer just by restricting the connective of multiple layers of a regular network in the right way?\", \"paragraph_3\": \"Why call it an mlpconv layer? Why not call it a NIN layer for consistency with the title of the paper?\", \"last_paragraph\": \"why average pooling? Doesn\\u2019t it get hard for this to have a high confidence output if the spatial extent of the layer gets large?\", \"section_2\": \"Convolutional Neural Networks\", \"eqn_1\": \"use \\text{max} so that the word \\u201cmax\\u201d doesn\\u2019t appear in italics. italics are for variable names.\", \"rest_of_the_section\": \"I don\\u2019t really buy your argument that people use overcompleteness to avoid the limitations of linear feature detectors. I\\u2019d say instead they use multiple layers of features. When you use two layers of any kind of MLP, the second layer can include / exclude any kind of set, regardless of whether the MLP is using sigmoid or maxout units, so I\\u2019m not sure why it matters that the first layer can only include / exclude linear half-spaces for sigmoid units and can only exclude convex sets for maxout units.\", \"regarding_maxout\": \"I think the argument here could use a little bit more detail / precision. I think what you\\u2019re saying is that if you divide input space into an included set and an excluded set by comparing the value of a single unit against some threshold t, then traditional GLM feature detectors can only divide the input into two half-spaces with a linear boundary, while maxout can divide the input space into a convex set and its complement. 
Your presentation is a little weird though because it makes it sound like maxout units are active (have value > threshold) within a convex region, when in fact the opposite is true. Maxout units are active *outside* a convex region. It also doesn\\u2019t make a lot of sense to refer to \\u201cseparating hyperplanes\\u201d anymore when you\\u2019re talking about this kind of convex region discrimination.\\n\\nSection 3.1\", \"par_1\": \"I\\u2019d argue that an RBF network is just an MLP with a specific kind of unit.\", \"equation_2\": \"again, \\u201cmax\\u201d should not be in italics\\n\\nSection 4.1\\n\\tLet me be sure I understand how you\\u2019re applying dropout. You drop the output of each micro-MLP, but you don\\u2019t drop the hidden units within the micro-MLP, right? I bet this is what leads to your performance improvement: you\\u2019ve made the unit of dropping have higher capacity. The way you group things to be dropped for the droput algorithm actually has a functional consequence. The way you group things when looking for linear versus non-linear feature detectors is relatively arbitrary. So I don\\u2019t really buy your story in sections 1-3 about why NIN performs better, but I bet the way you use dropout could explain why it works so well.\\n\\nSection 4.2\\n\\tThese results are very impressive!\\n\\tWhile reading this section I wanted to know how much of the improvements were due to global averaging pooling versus NIN. I see you\\u2019ve done those experiemnts later in section 4.6. I\\u2019d suggest bringing Table 5 into this section so all the CIFAR-10 experiments are together and readers won\\u2019t think of this objection without knowing you\\u2019ve addressed it.\\n\\t\\nSection 4.3\\n\\tConvolutional maxout is actually not the previous state of the art for this dataset. The previous state of the art is 36.85% error, in this paper:\", \"http\": \"//arxiv.org/pdf/1312.6082.pdf gets an error rate of only 2.16% with convolutional maxout + convolutional rectifiers + dropout. Also, when averaging the output of many nets, the DropConnect paper gets down to 1.94% (even when not using dropout / DropConnect). Your results are still impressive but I think it\\u2019s worth including these results in the table for most accurate context.\\n\\nSection 4.5\\n\\tI think the table entries should be sorted by accuracy, even if that means your method won\\u2019t be at the bottom.\\n\\nSection 4.6\\n\\tIt\\u2019s good that you\\u2019ve shown that the majority of the performance improvement comes from NIN rather than global average pooling.\\n\\tIt\\u2019s also interesting that you\\u2019ve shown that moving from a densely connected layer to GAP regularizes the net more than adding dropout to the densely connected layer does.\\n\\nSection 4.7\\n\\tWhat is the difference between the left panel and the right panel? Are these just examples of different images, or is there a difference in the experimental setup?\"}", "{\"title\": \"review of Network In Network\", \"review\": \"> - A brief summary of the paper's contributions, in the context of prior work.\\n\\nConvolutional neural networks have been an essential part of the recent breakthroughs deep learning has made on pattern recognition problems such as object detection and speech recognition. 
Typically, such networks consist of convolutional layers (where copies of the same neuron look at different patches of the same image), pooling layers, normal fully-connected layers, and finally a softmax layer.\\n\\nThis paper modifies the architecture in two ways. Firstly, the authors explore an extremely natural generalization of convolutional layers by changing the unit of convolution: instead of running a neuron in lots of locations, they run a 'micro network.' Secondly, instead of having fully-connected layers, they have features generated by the final convolutional layer correspond to categories, and perform global average pooling before feeding the features into a softmax layer. Dropout is used between mlpconv layers.\", \"the_paper_reports_new_state_of_the_art_results_with_this_modified_architecture_on_a_variety_of_benchmark_datasets\": \"CIFAR-10, CIFAR-100, and SVHN. They also achieve near state-of-the-art performance on MNIST.\\n\\n> - An assessment of novelty and quality.\\n\\nThe reviewer is not an expert but believes this to be the first use of the 'Network In Network' architecture in the literature. The most similar thing the reviewer is aware of is work designing more flexible neurons and using them in convolutional layers (eg. maxout by Goodfellow et al, cited in this paper). The difference between a very flexible neuron and a small network with only one output may become a matter of interpretation at some point.\\n\\nThe paper very clearly outlines the new architecture, the experiments performed, and the results.\\n\\n> - A list of pros and cons (reasons to accept/reject).\", \"pros\": [\"The work is performed in an important and active area.\", \"The paper explores a very natural generalization of convolutional layers. It's really nice to have this so thoroughly explored.\", \"The authors perform experiments to understand how global average pooling affect networks independently of mlpconv layers.\", \"The paper reports new state-of-the-art results on several standard datasets.\", \"The paper is clearly written.\"], \"cons\": [\"All the datasets the model is tested on are classification of rather small images (32x32 and 28x28). One could imagine a few stories where the mlpconv layers would have a comparative advantage on small images (eg. the small size makes having lots of convolutional layers tricky, so it's really helpful to have each layer be more powerful individually). If this was the case, mlpconv would still be useful and worth publishing, but it would be a bit less exciting. That said, it clearly wouldn't be reasonable to demand the architecture be tested on ImageNet -- the reviewer is just very curious.\", \"It would also be nice to know what happens if you apply dropout to the entire model instead of just between mlpconv layers. (Again, the reviewer is just curious.)\"]}", "{\"review\": \"I doubt that the mlpconv layers can be easily implemented by successive 1x1 conv layers. In a 1x1 conv layer, the lower feature maps and the upper feature maps at each location are fully-connected. For examples, 5x5 conv - 1x1 conv - 1x1 conv is equivalent to 5x5 mlpconv layers with 3 local layers.\\n\\nOf course, this work is still interesting and valuable even though my thinking is correct. But in that case, it can be easily implemented by the ordinary CNN packages.\"}", "{\"review\": \"That is an interesting paper. 
I have a few suggestions and comments about it:\\n\\n(i) We used the same architecture in a paper publised at ICLR 2013 [1] for a specific problem and we called it as SMLP. Two differences in their approach from our paper are, for NIN authors stack several layers of locally connected MLPs(mlpconv) with tied weights, whereas we used only one layer of mlpconv and we didn't use Global Average Pooling. However sliding neural network over an image to do detection/classification is quiet old [2]. I think authors should cite those papers.\\n\\n(ii) Moreover I think authors should provide more details about their experiments and hyperparameters that they have used (such as size of the local receptive fields, size of the strides).\\n\\n(iii) A speed comparison between regular convolutional neural networks and NIN would be also interesting. \\n\\n[1] G\\u00fcl\\u00e7ehre, \\u00c7a\\u011flar, and Yoshua Bengio. 'Knowledge matters: Importance of prior information for optimization.' arXiv preprint arXiv:1301.4083 (2013). \\n[2] Rowley, Henry A., Shumeet Baluja, and Takeo Kanade. Human face detection in visual scenes. Pittsburgh, PA: School of Computer Science, Carnegie Mellon University, 1995.\"}", "{\"reply\": \"Thanks for your detailed comments. The typos will be corrected in the coming version and we address the other comments below:\", \"question\": \"Did you ever try NIN without dropout:\\n\\nYes, but only on CIFAR-10 as CIFAR-10 is the first dataset I used to test the ideas. The reason why we do not explain about dropout is that we think dropout is a generally used regularization method. Any model with sufficient parameters (such as maxout you mentioned) may overfit to training data, which results in not so good testing performance. The same is true with NIN. For CIFAR-10 without dropout, the error rate is 14.51%, which is almost 4% worse than NIN with dropout. It already surpasses many previous state-of-arts with regularizer (except for maxout). We will add this result in the coming version. For a fair comparison, we should compare this performance with maxout without dropout, but unfortunately, maxout performance without dropout is not available in the maxout paper, which is also the main reason we did not report NIN without dropout. \\n\\nIt is suggested in the comment that the performance may involve the interaction between NIN and dropout, and suggested that the grouping of dropout might be the reason.\\n\\nWe found that applying dropout only within the micro-net is not doing as good as putting dropout in between mlpconv layers. (moving the two dropout layers into the micro-net results in a performance of 14.10%). Our interpretation is that the regularization effect of dropout is on the weights, as Wager et al. showed in 'Dropout Training as Adaptive Regularization'. In NIN which has no fully connected layers, most parameters reside in the convolution layer, which is why dropout is applied to the inputs of those layers. In comparison, the number of parameters within the micro-net is negligible. Therefore, rather than saying the way we group dropout with NIN is the reason of the good performance, we would say that dropout acts as a general regularizer. How dropout should be applied on each layer differs among models and is not well understood yet. 
How dropout should be applied on a model makes another important story itself.\\n\\nFrom the above, we argue that NIN itself is a good model even without dropout, how to apply dropout on a network is a general question, but not specific to NIN.\", \"abstract\": \"CNN filter acts as a GLM for local image patches and it is a discriminative binary classification model. Adding nonlinearity enhances the potential discriminative power of the model for local image patches within the receptive field of the convolution neuron. We'll refine the language in the coming version.\", \"introduction\": \"\", \"paragraph_1\": \"We'll revise the paper and use less certain language when discribing the output value as probability of a specific feature. What we mean is that in the ideal case, it can be the probabilities of latent concepts. I fully agree that the values are measurements rather than probabilities, but again, if ideally the value is highly correlated with the probability, it would be a very good model. I think it is a goal more than a fact. NIN can achieve the goal better than GLM.\", \"paragraph_2\": \"Please see replies to Section 2.\", \"paragraph_3\": \"NIN is a more general structure, as mentioned in the paper. Other nonlinear networks can also be employed to incoporate different priors on the data distribution. For example, RBF assumes a gaussian mixture on the data. Mlpconv is one instantiation of NIN.\", \"last_paragraph\": \"There is a softmax normalization anyways. The high or low confidence is just relative.\", \"section_2\": \"Note that unlike maxout units which can be applied to either convolution or non-convolution structures, NIN has only convolution version; the non-convolution version of NIN degrades to an MLP.\\nIn the non-convolution case, it is equivalent to taking any two layers from the MLP and say it forms a non-linear layer. Thus for MLP, your argument is correct: it does not matter whether the first layer of the network is a linear model or not; multilayers of features overcome the limitation of the linear detector.\\n\\nHowever, it is not true for convolution structure. Stacking convolution layers is different from stacking fully connected layers. Higher convolution layers cover larger spatial regions of the input than lower layers. The stacked convolution layer does not form multilayers of features as is in mlp. To avoid the limitation of the GLM, NIN forms multilayers of features on a local patch.\\n\\nIn my opinion, CNN has two functionalities:\\n1. Partition\\n2. Abstraction\", \"partition\": \"In the object recognition case, lower layers are smaller parts and higher layers learn the deformation relationship between the parts.\", \"abstraction\": \"In traditional CNN, the abstraction of a local patch is done using GLM. Better abstraction of a local patch in the current convolution layer can reduce combinatorial explosion in the next layer.\", \"regarding_maxout\": \"I think it does not matter whether the positive side or the negative side is defined as the active side; they are just symmetric. For any maxout network, we can construct a minout network by reversing the sign of the weights every other layer. As minout is equivalent to maxout, you can consider a minout network and then the positive side is the active side.\\n\\nSection 3.1:\\nWe'll refine the statements. The information we want to convey is that we can incoporate different data priors by choosing the micro-net. 
For example, RBF models the data in a Gaussian Mixture style, while the GLM in MLP assumes linear subspace structure.\\n\\nSection 4.1:\\nPlease see our response to 'Did you ever try NIN without dropout?'\\n\\nSection 4.2 to 4.5:\\nWe will revise these in the coming version.\\n\\nSection 4.7:\\nThey are just examples of different images.\"}", "{\"reply\": \"- Authors would be better off using standard terminology, like I did above, that makes reading the paper easier.\\n\\nIt is true that 1x1 convolution is an easier to understand explanation regarding the architecture of NIN.\\nHowever, regarding the motivation of the architecture and the mechanism why this architecture work, it is better to explain the architecture as a micro mlp convolving the underlying data.\\nAnother explanation of the structure is cross-channel parameteric pooling, as each of the output feature maps is a weighted summation of the channels in the input feature maps.\\nWe will add the 1x1 convolution explanations in the coming version for easier understanding of the architecture.\\n\\n- Some space is unnecessarily taken up by issues that are irrelevant/speculative, like discussing that this architecture allows for feature filters that are 'universal function approximators.' Do we have any evidence this is actually needed for performance? \\n\\nIn the introduction of the coming version, it is better explained why universal function approximator is prefered to GLM.\\nThe discussion is necessary because it is the motivation of proposing this architecture, and it is our explanation why NIN can achieve a good performance.\\nWe also refer to maxout as a convex function approximator in our paper,\\nand we think maxout and NIN are both evidences that a potent function approximator is better than GLM.\\n\\n- In section 3.2 they say that last averaged layer is fed into softmax, but this contradicts Figure 4 where it seems that last layer features actually correspond to classes, and no softmax is needed. I assumed the later was the intention. \\n\\n1. Each node in the last layer corresponds to one of the classes.\\n2. The values of these nodes are softmax normalized so that the sum equals one.\\nI think there is no incompatibility between the above two.\\n\\n\\n- Following sentence is unclear, consider expanding or removing: 'A vectorized view of the global average pooling is that the output of the last mlpconv layer is forced into orthogonal subspaces for different categories of inputs'\\n\\nGlobal average pooling is equal to vectorizing the feature maps and do a linear multiplication with a predefined matrix, the rows of the matrix lies within orthogonal linear subspaces.\\nWe'll remove this sentence in the coming version.\\n\\n\\n- Most serious shortcoming of this paper is lack of detailed explanation of architecture. All I have to go on is the picture in Figure 2, which looks like 3 spatial pooling layers and 6 convolutional layers. Authors need to provide following information for each layer -- filter size/pooling size/stride/number of features. Ideally it would be in a succinct format like Table 2 of 'OverFeat' paper (1312.6229). We have implemented NIN idea on our own network we used for SVHN and got worse results. Since detailed architecture spec is missing, I can't tell if it's the problem of the idea or the particulars of the network we used.\\n\\nThe details of NIN used for the benchmark datasets will be in the supplementary material that will be added in the comming version. 
The code (derived from cuda-convnet), the definition files and parameter settings are published and will be completed on my github (https://github.com/mavenlin/cuda-convnet)\"}", "{\"reply\": \"Hi Jost,\\n\\nI initialized the hyperparameter according to the parameters released by the maxout paper. For CIFAR-10, there are two things I tuned, one is the weight decay, and the other is the kernel size of the last layer (3x3 instead of 5x5).\\n\\nTuning the weight decay gives me most of the performance. The other settings, such as 5x5 kernel size instead of 8x8; I just set them once and they were not tuned for performance. \\n\\nI think the t tunable range for kernel size is quite small. It depends on the size of the object within the image. I've no idea whether it would affect the performance that much. I'm very interested about this, looking forward to seeing the effect of hyperparameters.\"}", "{\"reply\": \"I think it would be perfectly valid to report your result on CIFAR-10 without dropout. It would definitely be nice to have a fair comparison between maxout and NIN without dropout but the NIN number alone is still interesting.\"}", "{\"review\": \"Interesting Paper. I think there are a lot of possibilities hidden in\\nthe Ideas brought up here - as well as in the general Idea brought up\\nby the maxout work. I have a short comment to make regarding the\\nperformance that you achieve with the 'Network in Network' model:\\n\\nAlthough I do not think that this should influence the decision on\\nthis paper (nor do I think it takes anything away from the Network \\nin Network idea) I want to make you aware that I believe a large part\\nof your performance increase over maxout stems from your choice of\\nhyperparameters.\\n\\nI am currently running a hyperparameter search for maxout for a paper\\nsubmitted to ICLR (the 'Improving Deep Neural Networks with\\nProbabilistic Maxout Units' paper). The preliminary best result that I\\nobtained for a maxout network on CIFAR-10 without data augmentation \\n(using the same amount of units per layer as in the original maxout\\npaper) is 10.92 % error. If I understand it correctly then this is\\napproximately the same as the NiN model with a fully connected layer. \\nThe hyperparameter settings for this model are very similar to the\\nsettings I assume were used in your paper (based on the parameter file \\nyou posted here\", \"https\": \"//github.com/mavenlin/cuda-convnet/blob/master/NIN/cifar-10_def).\\n\\nThe most crucial ingredient seems to be the pooling and filter/kernel\\nsize. I will post more details on the hyperparameter settings in the\\ndiscussion on the 'Probabilistic Maxout' paper.\"}", "{\"reply\": \"Yes, in the node sharing case (which is used in the experiment of this paper), it is equivalent to convolution with kernel size 1. By the way, the overfeat paper submitted in iclr2014 uses 1x1 convolution kernel in the last layer. It is true you can use the convolution function in ordinary CNN packages, but the most efficient way is to use matrix multiplication functions in cublas.\"}", "{\"reply\": \"Do you think you could post the revised version of your paper soon? We have until Mar 7 to discuss it. If you could post the revised version before then I'm likely to upgrade my rating of the paper. I don't feel comfortable upgrading the rating just based on discussions on the forum though. Feel free to post the updated version on a separate website so we don't need to wait for it to be approved on ArXiv.\"}" ] }
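As a companion to the exchange above (mlpconv as a spatial convolution followed by 1x1 convolutions, and global average pooling in place of fully connected layers), here is a minimal NumPy sketch of both operations. It is not the authors' implementation; the shapes, filter counts and random weights are illustrative assumptions. It also shows the point made in the replies that a 1x1 convolution is simply a per-pixel matrix multiplication over the channel dimension.

```python
# Minimal NumPy sketch of the two operations debated above (not the authors' code):
# an mlpconv stage as a spatial convolution followed by 1x1 convolutions, and global
# average pooling producing one score per class. Shapes, filter counts and random
# weights are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
relu = lambda z: np.maximum(z, 0.0)

def conv1x1(fmap, W, b):
    """fmap: (C_in, H, W); W: (C_out, C_in). A 1x1 convolution is a per-pixel matmul."""
    c_in, h, w = fmap.shape
    out = W @ fmap.reshape(c_in, h * w) + b[:, None]
    return out.reshape(W.shape[0], h, w)

def global_average_pooling(fmap):
    """One scalar per feature map; with n_classes maps this directly gives class scores."""
    return fmap.mean(axis=(1, 2))

# Stand-in for the output of an ordinary spatial convolution (e.g. 5x5, 16 filters) + ReLU.
fmap = relu(rng.normal(size=(16, 8, 8)))

# The "micro network": two 1x1 convolution layers on top of the spatial convolution.
W1, b1 = 0.1 * rng.normal(size=(16, 16)), np.zeros(16)
W2, b2 = 0.1 * rng.normal(size=(10, 16)), np.zeros(10)    # 10 maps = 10 classes
h1 = relu(conv1x1(fmap, W1, b1))
class_maps = relu(conv1x1(h1, W2, b2))

scores = global_average_pooling(class_maps)               # shape (10,)
probs = np.exp(scores - scores.max())
probs /= probs.sum()                                       # softmax over the class scores
print(probs)
```

Because each class score is the average of one feature map, the pre-pooling map can be read as a coarse localization of the class evidence, which is the property reviewer bae9 highlights from Figure 4.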
nF5CFb0ZQBFDr
Sequentially Generated Instance-Dependent Image Representations for Classification
[ "Matthieu Cord", "patrick gallinari", "Nicolas Thome", "Ludovic Denoyer", "Gabriel Dulac-Arnold" ]
In this paper, we investigate a new framework for image classification that adaptively generates spatial representations. Our strategy is based on a sequential process that learns to explore the different regions of any image in order to infer its category. In particular, the choice of regions is specific to each image, directed by the actual content of previously selected regions. The system's capacity to handle incomplete image information, together with its adaptive region selection, allows it to perform well in budgeted classification tasks by exploiting a dynamically generated representation of each image. We demonstrate the system in a series of image-based exploration and classification tasks that highlight its learned exploration and inference abilities.
[ "image", "system", "image representations", "classification", "classification tasks", "new framework", "image classification", "spatial representations", "strategy", "sequential process" ]
submitted, no decision
https://openreview.net/pdf?id=nF5CFb0ZQBFDr
https://openreview.net/forum?id=nF5CFb0ZQBFDr
ICLR.cc/2014/conference
2014
{ "note_id": [ "ZEmlDdfwa3ELS", "-nuS-uE1S_-5F", "QPb2hbXfGmPiJ", "QKENHiYtpkQo3", "bbUH2ilEBoLmK", "-Kr0sEe20qsTx" ], "note_type": [ "comment", "review", "review", "review", "review", "comment" ], "note_created": [ 1392155820000, 1391488800000, 1391486100000, 1390861200000, 1391825400000, 1392155640000 ], "note_signatures": [ [ "Gabriel Dulac-Arnold" ], [ "anonymous reviewer 4a3d" ], [ "Gabriel Dulac-Arnold" ], [ "anonymous reviewer 251e" ], [ "anonymous reviewer 56b5" ], [ "Gabriel Dulac-Arnold" ] ], "structured_content_str": [ "{\"reply\": \"First of all, thank you for your time spent reviewing this article and\\nfor your extensive comments. I've added a couple clarifications\\nregarding experiments on the PPMI dataset, you were indeed correct in\\nyour assumptions. I hope the experiments are now easier to understand.\", \"let_me_respond_to_your_4_points\": \"1) In the case of PPMI, classification of an image as one where humans\\nare playing an instrument vs. not playing can be done using only regions\\nconcentrating on the instrument's interface with the human (mouth, hand,\\netc.). Any other regions in the image simply induce noise. For\\nexample, if we choose the static policy of looking only at the 4 central\\nregions with the SVM classifier, we actually increase classification\\naccuracy compared to looking at all regions. Additionally, the\\ndifference between the two datasets is that the 15 scenes dataset's task\\nof detecitng the general environment in which the image was taken means\\nthat classification information is much less concentrated in specific\\nregions of the image, but rather spread out over the entire image. This\\nmay also explain why there is not such a strong advantage in using our\\nmethod vs a random region selection policy for the 15 scenes dataset. \\nI've added a brief mention of these aspects to the article.\\n\\n2) Indeed, this behaviour would be most intuitive, however one possible\\ninterpretation of these resuls is that regions preferred by B=8 are very\\ninformative, but only if /all/ can be acquired, and if only a subset can\\nbe acquired then it is better to select region (1,3). Of course this is\\npure conjecture, it is unfortunately very difficult to understand /why/\\nthe algorithm performs the trade-offs that it does, but this is indeed\\nan interesting remark, and warrants further investigation. \\n\\n3) If I understand correctly, once we would have trained the chain a\\nfirst time, we could run the training examples down the chain, and train\\nthe final classifier with the new distribution of provided regions, and\\nthen re-train the chain again using this new final classifier. 
This is something we've attempted without significant increase in performance, and significantly increases learning complexity.\\n\\n4) Ideally, if there is some way to estimate the 'best' of the 'bad'\\nactions during training, then it might be a good idea to use this as a\\npositive training example, but ideally with an associated weight to\\nindicate that it is however not as good as an actually optimal action.\\nIn the case of a binary classification task there is no obvious way to\\ninstore an order amongst all the 'bad' actions since they all result\\nultimately in incorrect classification, however if there is some\\nstructure on the label set with a similarity measure, then this would be\\nimportant information to leverage, thus allowing us to penalize the\\nclassifier with a negative reward related to how 'far off' the\\nclassifier's prediction was in label space.\"}", "{\"title\": \"review of Sequentially Generated Instance-Dependent Image Representations for Classification\", \"review\": \"This paper describes a method to select the most relevant grid regions from an input image to be used for classification. This is accomplished by training a chain of region selection predictors, each one of which outputs the k'th region to take, given the image features from k-1 already-selected regions. Each selector is trained to choose a region that leads to an eventual correct classification, given already-trained downstream selectors. Since downstream predictions are required for training, the chain is trained last-to-first, and random region selection is used to generate training inputs at each stage. Both the conditional region selection chain and its training method are interesting contributions.\\n\\nThe method is evaluated on two tasks, playing vs. holding a musical instrument (PPMI) and outdoor scene classification (15-Scenes). Here, I wish the authors were more detailed in their descriptions of these tasks. In particular, I'm a bit unsure whether the PPMI task is a 12-way classification on musical instruments, or an average of 12 binary classification tasks (playing vs holding), one for each instrument. I think it's the latter -- if so, I'm also unclear on whether a different selection/classification chain was trained and tested for each of the 12 subsets, or if a single classifier was trained over the entire dataset.\\n\\nStill, the proposed method beats a random selection baseline for both tasks (though not by much for 15-Scenes), and for PPMI it also beats a baseline of including all regions. The latter is a particularly nice result, since intuitively region selection should stand to help performance, yet such gains can be hard to find. Evaluating the 12- or 24-way instrument classification task for PPMI would have been good here as well, though, as there is clearly a compatibility between region selection and this data and/or task, and this may help provide insight into why that is.\", \"pros\": [\"Interesting and new method of region selection trained for classification\", \"Shows a nice gain in one task and reasonable results in another\", \"Interesting discussion sheds light on how the method operates (figs 7, 8)\"], \"cons\": [\"Tasks and datasets could be better explained\"], \"questions\": [\"Why does selecting 8 regions beat using all 16 for PPMI but is only about the same for 15-Scenes? Some discussion on the difference between the two datasets and their fit with region selection would be nice here.\", \"Fig. 
8: I might have expected the B=8 (right) histogram to have its highest values mostly where the B=4 (left) histogram does, since one would think the best regions would be required in both cases. However, region 3 (x=1,y=3) seems to be used with more frequency for B=4 than B=8, for example. Why does this occur?\", \"Is it possible to continue retraining the classifier and selectors? Currently the chain is trained once, starting with the classifier and proceeding upstream. Yet by doing this, each stage must be trained on a random sample of input regions, which can include many more configurations than would be seen at test time. Could each stage (particularly the final classifier f) be iteratively retrained given *all* selectors? Would this help by adjusting the training distribution closer to the test distribution and allowing better use of resources, or might it lead to worse generalization by narrowing the training set too much?\", \"Alg. 4 adds a sample to the training set only if it leads to a correct prediction. But what if no region has this property -- is it better to ignore these cases, or should some be included (perhaps by trying to predict the choice closest to correct)? Surely such cases will arise at test time, and the consequences of ignoring them isn't entirely clear. I suppose there's an argument to be made that the classifier will eventually fail anyway, so it's better to bail on these cases and concentrate the predictor's resources only on those where it stands a chance. But in cases where the classifier is wrong, might this also lead to more arbitrary region selection and more drastic types of mistakes (e.g. mistaking a clarinet for a harp vs a recorder)?\"]}", "{\"review\": \"Thank you for your time and effort on reviewing our article. Let me respond in a couple points to your comments:\\n\\n1. Classifier complexity: In effect, the training algorithm is more complex than a standard SVM, but it only increases in complexity linearly relative to the fixed budget B and the number of windows. Inference complexity is lower than an SVM computed on the entire picture, and is B times more complex than a fixed SVM on a B-sized window. \\n\\n2. With regards to the work of Larochelle & Hinton on foveal glimpse learning, there is a fundamental difference in what is being learned by the region selection algorithm. L&H learn a gaze direction model that greedily chooses regions that are most likely to increase the final classifier's output given the current state. In our model, intermediate sub-policies learn to select regions that help their subsequent policy the most. The final sub-policy indeed learns to select a Bth region that helps classification, but each of the previous policies learns to select a region that will best disambiguate the image information given the curernt state, only ultimately helping in classification. This detail is important, as a greedy system will not spend time on image regions that are irrelevant to classification, but necessary to region selection. Typically, in Experiment 2, H&L's system must be given a starting point, and would (as far as we can tell) not be able to find it on its own.\"}", "{\"title\": \"review of Sequentially Generated Instance-Dependent Image Representations for Classification\", \"review\": \"This paper presents an approach that considers a sequence of local representations of an image, in order to classify it into one of many labels. 
The approach decomposes an image into multiple non-overlapping sub-windows, and tries to find a sequence of subsets of these sub-windows that can efficiently lead to classify the image. The idea is interesting as it could potentially classify faster by concentrating only on the relevant part of the image; on the other hand, the training complexity is significantly increased (and I suspect for the approach to work we should include many more sub-windows at various scales and potentially with overlaps). The proposed approach is not compared to any other approaches, for instance the work of Larochelle and Hinton, 2010. The results are encouraging but not groundbreaking: it seems one needs to see a significant portion of the image in order to get similar or better performance than the baseline, so it's not clear the proposal works that well. I wonder if the policy used to guide the search space among sub-windows could be analyzed better.\"}", "{\"title\": \"review of Sequentially Generated Instance-Dependent Image Representations for Classification\", \"review\": \"This paper tackles the problem of deciding where to look on an image. The proposed solution is to start from a center region and use the information extracted to decide what region to examine next. This is trained through reinforcement learning.\\n\\nThe paper is well organized and clear. Experiments on 2 benchmarks (15 scenes and people playing musical instruments) show that selecting a smaller number of subregions of the image does not result in a big loss of accuracy, or even improves accuracy by eliminating noise from information-poor regions.\\n\\nFigures 5 and 6 have unreadable annotations, this should be fixed and/or the caption under the figure fleshed out to better describe the results.\\n\\nThe datasets are limited and this would need to be extended to more realistic datasets, but the problem tackled is important and the proposed solution is a welcome step in this direction.\"}", "{\"reply\": \"Thank your for your time spent reviewing this article, I've expanded the\\ndescription of both Figures 5 & 6, but I'm not sure which part was\\ndifficult to read, the legend, or the axes? I hope the modifications\\nwere helpful.\"}" ] }
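The training procedure summarised by reviewer 4a3d (a chain of region selectors trained last-to-first, with a sample kept only when it leads the already-trained downstream stages to a correct final prediction) can be sketched schematically as follows. This is a paraphrase of the reviewers' description, not the authors' code; `Classifier`, `Selector` and `extract_features` are hypothetical placeholders supplied by the caller, and details such as feature encoding of partially observed images and masking of already-seen regions are omitted.

```python
# Schematic sketch of last-to-first training of a region-selection chain, paraphrasing
# the reviewers' description above (not the authors' code). Classifier, Selector and
# extract_features are hypothetical placeholders provided by the caller.
import random

def train_chain(images, labels, regions, B, Classifier, Selector, extract_features):
    # Final classifier f: fit on feature vectors built from B randomly chosen regions.
    X, y = [], []
    for img, lab in zip(images, labels):
        chosen = random.sample(regions, B)
        X.append(extract_features(img, chosen))
        y.append(lab)
    f = Classifier()
    f.fit(X, y)

    selectors = [None] * B                  # selectors[k] picks the (k+1)-th region
    for k in reversed(range(B)):            # train the last selector first
        Xs, ys = [], []
        for img, lab in zip(images, labels):
            prefix = random.sample(regions, k)          # random already-seen regions
            for cand in regions:
                if cand in prefix:
                    continue
                # Roll the already-trained downstream stages forward after taking cand.
                state = prefix + [cand]
                for s in selectors[k + 1:]:
                    state = state + [s.predict(extract_features(img, state))]
                if f.predict(extract_features(img, state)) == lab:
                    # Keep only candidates that lead to a correct final prediction.
                    Xs.append(extract_features(img, prefix))
                    ys.append(cand)
                    break                   # one positive example per image, per stage
        selectors[k] = Selector()
        selectors[k].fit(Xs, ys)
    return selectors, f
```

The break on the first successful candidate mirrors the behaviour questioned by the reviewer: images for which no candidate region yields a correct downstream prediction contribute no training sample at that stage.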
u-IAYCzRsK-vN
Sparse similarity-preserving hashing
[ "Alex M. Bronstein", "Guillermo Sapiro", "Pablo Sprechmann", "Jonathan Masci", "Michael M. Bronstein" ]
In recent years, a lot of attention has been devoted to efficient nearest neighbor search by means of similarity-preserving hashing. One of the plights of existing hashing techniques is the intrinsic trade-off between performance and computational complexity: while longer hash codes allow for lower false positive rates, it is very difficult to increase the embedding dimensionality without incurring very high false negative rates or prohibitive computational costs. In this paper, we propose a way to overcome this limitation by enforcing the hash codes to be sparse. Sparse high-dimensional codes enjoy the low false positive rates typical of long hashes, while keeping the false negative rates similar to those of a shorter dense hashing scheme with an equal number of degrees of freedom. We use a tailored feed-forward neural network for the hashing function. Extensive experimental evaluation involving visual and multi-modal data shows the benefits of the proposed method.
[ "sparse", "hashing", "recent years", "lot", "attention", "nearest neighbor search", "means", "plights", "techniques", "intrinsic" ]
submitted, no decision
https://openreview.net/pdf?id=u-IAYCzRsK-vN
https://openreview.net/forum?id=u-IAYCzRsK-vN
ICLR.cc/2014/conference
2014
{ "note_id": [ "XwIVamM38ga3N", "ZZ7TU_nWTrjcG", "XMixXVP-xZMUW", "VVtOnEaB7_WSx", "GG8MxUz35hkXl", "BcgDqCiUDYcXQ", "LRcNRh6ZNB7T9", "mYLe3dIVG_lnh", "TR-OT62E8KhmZ" ], "note_type": [ "review", "comment", "review", "comment", "comment", "review", "comment", "review", "review" ], "note_created": [ 1391787780000, 1392729600000, 1391901780000, 1391604240000, 1392299400000, 1391511480000, 1391965500000, 1392652500000, 1391867160000 ], "note_signatures": [ [ "Jonathan Masci" ], [ "Jonathan Masci" ], [ "anonymous reviewer a029" ], [ "Jonathan Masci" ], [ "Jonathan Masci" ], [ "anonymous reviewer a636" ], [ "Jonathan Masci" ], [ "Jonathan Masci" ], [ "anonymous reviewer d060" ] ], "structured_content_str": [ "{\"review\": \"We updated the paper and the new version will be available\\non Mon, 10 Feb, 01:00 GMT.\"}", "{\"reply\": \"The revised paper is now available.\"}", "{\"title\": \"review of Sparse similarity-preserving hashing\", \"review\": \"This paper builds upon Siamese neural networks [Hadsell et al, CVPR06] and (f)ISta networks [Gregor et al, ICML10] to learn to embed inputs into sparse code such that code distance reflect semantic similarity. The paper is clear and refers to related work appropriately. It is relevant to the conference. Extensive empirical comparisons over CIFAR-10 and NUS/Flickr are reported. I am mainly concerned by the computational efficiency motivation and the experimental methodology. I am also surprised that no attention to sparsity is given in the experiments given the paper title.\\n\\nThe introduction states that your motivation for sparse code mainly comes from computational efficiency. It seems of marginal importance. The r-radius search in an m dimensional space is nchoosek(m,r). With k sparse vectors, the same search now amount to flipping r/2 bits in the 1 bits and r/2 bits in the 0 bits in the query, i.e. nchoose(k,r/2)+nchoose(m-k, r/2). Both are comparable combinatorial problems. I feel that your motivation for sparse coding could come from the type of work you allude to at the end of column 1 in page 4 (by the way could you add some references there?). \\n\\nI have concerns about the evaluation methodology. In particular, it is not clear to me why you compare different methods with a fixed code size and radius. It seems that, in an application setting, one might get some requirement in terms of mean average precision, recall at a given precision, precision at a fix recall, expected/worst search time, etc and would validate m and r to fit these performance requirement. Fixing m and r a priori and looking at the specific precision, recall point resulting from this arbitrary choice seems far from optimal for any methods. Moreover, I would also like to stress that m, r are also a poor predictor of a method running time given that different tree balance (number of points in each node and in particular the number of empty nodes) might yield very different search time at the same (m,r). In summary, I see little motivation for picking (m,r) a priori. \\n\\nI am also surprised that no attention to sparsity is given in the experiments given the paper title. How was alpha validated? What is the impact of alpha on validation performance, in particular how does the validation error surface look like wrt alpha, m and r? It might also be interesting to look at the same type of results replacing L1 with L2 regularization of the representations to further justify your work. 
Also reporting the impact of sparsity on search time would be a must.\"}", "{\"reply\": \"We are grateful to reviewer Anonymous a636 for constructive comments. We reply below to the points raised by the reviewer; we will post additional results requested here and eventually update the arxiv paper.\\n\\n\\n1. We agree (and in fact note it in the paper) that when comparing dense and sparse hashes one has to compare the same number of **degrees of freedom** rather than **length**. Thus, comparing sparse and dense hashes of same length is actually less favorable to sparse hash (and despite that, we show to perform better even in this unfavorable comparison).\\nReferring specifically to our results, closely comparable hashes would be sparse hash of length 128 and dense hash of length 48 (the exact number of degrees of freedom depends on the sparsity, which varies slightly). \\nBecause of space limitations we removed the sparsity levels from our tables. They are as follows:\\n\\nCIFAR10\\nm 48, M 16, alpha 0.01, lambda 0.1, L0 20.6%\\nm 48, M 7, alpha 0.001, lambda 0.1, L0 39.1%\\nm 48, M 7, alpha 0.001, lambda 0.1, L0 43.9%\\nm 128, M 16, alpha 0.0, lambda 0.1, L0 6.0%\\n\\nNUS\\nm 64, M 7, alpha 0.05, lambda 0.3, L0 17.4%\\nm 64, M 7, alpha 0.05, lambda 1.0, L0 20.1%\\nm 64, M 16, alpha 0.005, lambda 0.3, L0 21.7%\\nm 256, M 4, alpha 0.05, lambda 1.0, L0 6.6%\\nm 256, M 4, alpha 0.05, lambda 1.0, L0 9.4%\\nm 256, M 6, alpha 0.005, lambda 0.3, L0 9.9%\\n(here L0 means the number of non-zeros, in % of the hash length m)\\n\\n\\n2. Retrieval is done as follows:\\nFor large radius (r=m) the search is done exhaustively with complexity O(N), where N is the database size. \\nFor r=0 (exact collisions), the search is done as a LUT: the query code is fed into the lookup table, containing all entries in the database having the same code. The complexity is O(m), independent of N. \\nFor small values of r (partial collisions), the search is done as for r=0 using perturbation of the query: at most r bits of the query are changed, and then it is fed into the LUT. The final result is the union of all the retrieval results. Complexity is O(r out of m). \\n\\n\\n3. Sign in ISTA-net: \\nOur initial formulation converted the output to a ternary representation, doubling the number of bits, and explicitly coding for -1, 0 and +1. However, we found out that the difference of this more proper encoding vs the plain output of the net was negligible and therefore we opted for the simplest solution.\\n\\n\\n4. Eq 4: \\nIt is a typo, there is a max(0, .). Thanks for pointing it out.\\n\\n\\n5. PR curves were generated using the ranking induced by the Hamming distance between the query and the database samples. In case of r<m we considered only the results falling into the Hamming ball of radius r.\", \"6\": \"Table 4 last line:\\nThe implementations of NN and NN-sparse hashes differ. For sparse hash, we use shrinkage and tanh whereas NN-hash uses only tanh.\\nAdditionally the two losses also differ (for sparse hash, we measure the L1 distance which we found out already induces some sparsity). These are, in our opinion, the two main differences which explain the different performance and that sparse hash does not produce exactly the same results as NN-hash for alpha=0.\\n\\n\\n7. Additional experiments: \\nThanks for the suggestion. We will perform the requested experiments and will post them here at a later stage. \\n\\n\\n8. We used a single iteration of ISTA-net in all experiments.\\n\\n\\n9. 
Fig 6 left: \\nthe dotted curve (m=128) for NN is there, right above the KSH (dotted purple).\\nagh2 will be added in the updated version of the paper.\"}", "{\"reply\": \"We are grateful to reviewer Anonymous a029 for his/her comments. We clarify below all the critical issues, and these points are addressed in the revision. We have produced additional evaluations to better convey our point; we invite all the reviewers to look at these results. Below are responses to the issues raised:\\n\\n\\n1. Please note that we *do not* use sparsity in the retrieval. After the code is constructed, we use standard retrieval procedure (LUT for small r, brute force for large r, see our answers to the previous reviews) both for dense and sparse codes. Using small r guarantees fast retrieval; however, in the dense case, it comes at the expense of the recall. We show that introducing sparsity, we get high recall for small r, thus guaranteeing both fast search and high recall, which is usually impossible with standard methods. \\n\\nWe do believe, however, that it is also possible to take advantage of sparse codes to make the retrieval more efficient, and intend to explore this direction in future work. In particular, in our experiments we observed that the number of unique codes is significantly smaller for sparse hash compared to dense hash, which potentially allows an improvement of the data structure used for retrieval (an overwhelming number of LUT entries are empty). Here are results obtained on CIFAR10 datasets for m=10:\\n\\nAverage number of database elements mapped to the same code (collisions):\\nSparseHash\\t798.47\\nKSH\\t\\t\\t3.95\\nNNHash\\t\\t4.83\\nSSH\\t\\t\\t1.01\\nDH\\t\\t\\t1.00\\nAGH\\t\\t\\t1.42\\n\\n\\n2. \\nAs requested by the reviewer, we have computed the timing for the experiments presented in the paper, and present the 3D plot of precision/recall/retrieval time for different methods for varying r:\", \"https\": \"//www.dropbox.com/s/td7pyc2hwwch2s1/sparsehash_timing_results.png\", \"annotation\": \"o (r=0), triangle (r=1), + (r=2, implemented as brute force search)\", \"we_can_conclude_that\": [\"search time is controlled mainly by the radius r, which also impacts the recall/precision. With dense methods, it is impossible to achieve fast search and high precision/recall. The use of sparsity makes this possible.\", \"With sparse hash we are able to achieve orders of magnitude higher recall as well as an increase in precision for retrieval time comparable with the dense methods.\", \"Our retrieval data structure does not currently take advantage of the code sparsity, suggesting a potentially significant reduction in search time.\", \"3. Evaluation methodology:\", \"We find the reviewer's worry about the (m,r) being a poor predictor of the search time is very reasonable. However, we believe that the comparable search times reported for the same (m,r) settings in all methods suggests that it is not an issue in our specific experiments.\"]}", "{\"title\": \"review of Sparse similarity-preserving hashing\", \"review\": \"This work presents a similarity-preserving hashing scheme to produce binary codes that work well for small-radius hamming search. The approach restricts hash codes to be sparse, thereby limiting the number of possible codes for a given length of m bits. A small-radius search within the set of valid codes will therefore have more hits, yet the total bit length can be lengthened to allow better representation of similarities. 
According to the authors, this is the first application of sparsity constraints in the context of binary similarity hashes.\\n\\nOverall I think this is a nice, well-motivated idea and corresponding implementation, with experiments on two datasets that demonstrate its effectiveness and mostly support the claims made in the motivation. As a bonus, the authors describe a further adaptation to cross-modal data.\\n\\nI still feel there are some links missing between the analysis, implementation and evaluation that could be made more explicit, and have quite a few questions (see below).\", \"pros\": [\"Well-argued idea for improving hashing by restricting the code set to enable targeting small search radii and large bit lengths\", \"Experiments show good system performance\"], \"cons\": [\"Links between motivational analysis, implementation and evaluation could be made more explicit\", \"Related to that, some claims alluded to in the motivation don't appear fully supported, e.g. more efficient search for k-sparse codes doesn't seem realized beyond keeping r small\"], \"questions\": [\"You mention comparing between k-sparse codes of length m and dense codes with the same degrees of freedom, i.e. of length log(m choose k). This seems very appropriate, but the evaluation seems to compare same-length codes between methods. Or, do the values of m reflect this comparison? m=128 v. m=48 may work if k is around 10. But I also don't see anything showing the distribution of nonzeros in the codes.\", \"pg. 4: 'retrieving partial collisions of sparse binary vectors is ... less complex ... compared to their dense counterparts': Could this be explained in more detail? It seems a regular exhaustive hamming search is used in the implemented system (especially since there appears to be no hard limit on k, so any small change can valid for most codes).\", \"The ISTA-net uses a sign-preserving shrink, and the outputs are binarized with tanh (also preserving sign) -- thus nonzeros of the ISTA-net can saturate to either +/- 1 depending on sign, while zeros are mapped to 0. These are 3 values, not 2, so how are they converted to a binary vector, and how does this align with the xi(x) - xi(x') in the loss function (which seems to count a penalty for comparing +1 with 0 and a double-penalty for comparing +1 with -1)?\", \"Eqn. 4: Distance loss between codes is L1 instead of L2, and there is no max(0, .) on the negatives (but still a margin M). Are these errors or intended?\", \"I'm not sure how the PR curves were generated: What ranking was used?\", \"Table 4 last line: Says alpha=0; it seems the sparsity term would be disabled if alpha=0, so not sure why results here are better than NN-hash instead of about the same?\"], \"minor_comments\": [\"Would have liked to see more comparing level of sparsity vs. precision/recall for different small r. There is a bit of this in the tables, but it would be interesting to see more comprehensive measurements here. It would be great if there was a 2d array with e.g. r on the x axis and avg number of nonzeros on the y axis, for one or more fixed m.\", \"How many layers/iterations were used in the ISTA-net?\", \"Fig 6 left, curves for m=128 appear missing for nnhash and agh2\"]}", "{\"reply\": \"We are grateful to Anonymous d060 for interesting comments and for appreciating our work.\\nSome of the same comments are already addressed in the updated v2 of the arxiv report that should appear on Mon Feb 10. 
\\nSince d060 has spotted the same issue as a636, we kindly ask the reviewer to also look at our previous response to a636. \\nWe will incorporate the new comments and upload a new version to arxiv.\\n\\n \\n1. Retrieval:\\n\\nFor large radius (r=m) the search is done exhaustively with complexity O(N), where N is the database size. \\nFor r=0 (exact collisions), the search is done as a LUT: the query code is fed into the lookup table, \\ncontaining all entries in the database having the same code. \\nThe complexity is O(m), independent of N. For small values of r (partial collisions), \\nthe search is done as for r=0 using perturbation of the query: at most r bits of the query are changed, and then it is fed into the LUT. The final result is the union of all the retrieval results. Complexity is O(r out of m). \\nFor this reason, one seeks to use a small radius to obtain efficient search. With dense hash, this comes at the expense of\\nvery low recall, as we explain theoretically and show experimentally. \\nWith sparse hash, we are able to control the exponential growth of the hamming ball volume, thus resulting in much higher recall. \\n\\nIt is correctly noted by the reviewer that our method does not explicitly guarantee a fixed sparsity (it is possible to use a different NN architecture,\\nderived from coordinate-descent pursuit CoD, to guarantee at most k non-zeros in the codes). However, we see in practice that the number of non-zeros in our codes is approximately fixed (e.g. for cifar10 sparse hash of length m=128 we get codes containing on average 7.6 +/- 2.3 non-zero bits), and the behavior of the codes is similar to the theoretical case with 'guaranteed' sparsity. To emphasize, sparsity is used to obtain codes that exhibit higher recall at low search r (in particular, r=0). The search is done in a standard way described above, without making a distinction between sparse and dense cases. \\nNote that lack of exact control of sparsity is common in l1 optimization problems, though as mentioned above the approximate control was found to be sufficient for this application as well.\\n\\n \\n2. Formula 4: \\nWe fixed formula (4) which missed the max(0,.) term.\\n\\n\\n3. Calculating neighbors with large radii: \\nIn our experiments, we used three radii: r=0 (collisions), r=2 and r=m (full radius). In the latter setting, we used 'brute force' search, going exhaustively through all the database. \\n\\n\\n4. Parameters setting:\\nThe parameters were set empirically. We should stress we have not optimized these parameters, as we observed that setting them more or less arbitrarily provided performance significantly better than the competing dense hashing methods. \\nLambda is initially set to 1 and after reduced to balance the positive and negative classes if needed.\\nA value of 0.1 in CIFAR10 equally weights the positives and negatives for example.\\nIn NUS we used 0.3 instead because each sample can belong to multiple classes and therefore distinction between pos and neg is not as clear as for CIFAR10.\\nalpha is set to a small value such as 0.01 and decreased by a factor of 10 according to the desired sparsity level. \\nThe margin M is usually 7 and we would suggest to use this as we did extensive evaluation in previous work. \\nIn the experiments we increased it along with a higher alpha to check if this would allow better binarization. 
Ideally a large margin tends to saturate units and a large sparsity should further favor this phenomenon.\\nIn the multimodal case mu_1 and mu_2 are set to 0.5 to use the respective modalities as regularization for the cross-modal embedding.\\nWe would suggest to use this configuration and change it only if needed.\"}", "{\"review\": \"A new version (v3) of the paper will be available at Tue, 18 Feb 2014 01:00:00 GMT.\"}", "{\"title\": \"review of Sparse similarity-preserving hashing\", \"review\": \"The authors propose to use a to use a sparse locally sensitive hash along with an appropriate hashing function. The retrieval of sparse codes has different behaviour then dense code, which results in better performance then other other methods especially at low retrieval radius where computation is cheaper.\", \"novelty\": \"Good (as far as I know).\", \"quality\": \"Good, clearly explained except few details (see below), with experimental evaluation.\", \"details\": [\"How do you define retrieval at a given radius for sparse codes? With two bit flips say, there are the same number of neighbours whether the code is dense or sparse. With the encoding proposed you don't have a guaranteed that only k values will be nonzero - it is not a strict bound. How do you define the neighbours - as only those that have the same or lower sparsity?\", \"Is formula (4) correct, specifically line 2? I assume it should be more like the line 2 of eq. (3).\", \"In the experiments, how did you calculate neighbours of such large radii - the number of neighbours grows as Choose(m, r).\", \"There are a lot of hyper parameters: eq.4: lambda, alpha, M + eq. 5 mu_1, mu_2. How did you choose these? If your answer is 'cross validation' - how theses are a lot of parameters to cross validate. Do you have any good ways to set these?\", \"Even though this is basic it would be good to explain in section 2(Efficient Retrieval) how is the retrieval defined - what is the situation?\"]}" ] }
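The retrieval procedure described in the authors' replies above (a lookup table for exact collisions at r=0, and perturbation of at most r bits of the query for small radii) can be illustrated with a short sketch. This is a minimal illustration of the general idea only, not the authors' code; the names build_lut and hamming_ball_search are invented, and binary codes are assumed to be given as tuples of 0/1 bits.

```python
from itertools import combinations
from collections import defaultdict

def build_lut(codes):
    """Map each binary code (tuple of 0/1 bits) to the list of database ids sharing it."""
    lut = defaultdict(list)
    for idx, code in enumerate(codes):
        lut[tuple(code)].append(idx)
    return lut

def hamming_ball_search(lut, query, r):
    """Return ids of database items whose code lies within Hamming radius r of `query`.

    For r = 0 this is a single lookup; for small r > 0 the query is perturbed in
    at most r bit positions and each perturbed code is looked up, so the cost is
    independent of the database size and grows only with the number of perturbations."""
    query = tuple(query)
    m = len(query)
    results = set(lut.get(query, []))
    for k in range(1, r + 1):
        for positions in combinations(range(m), k):
            perturbed = list(query)
            for p in positions:
                perturbed[p] = 1 - perturbed[p]
            results.update(lut.get(tuple(perturbed), []))
    return results

# Toy example with hypothetical 4-bit codes.
database = [(0, 1, 0, 0), (0, 1, 0, 1), (1, 1, 1, 1)]
lut = build_lut(database)
print(hamming_ball_search(lut, (0, 1, 0, 0), r=1))  # -> {0, 1}
```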
YDXrDdbom9YCi
Large-scale Multi-label Text Classification - Revisiting Neural Networks
[ "Jinseok Nam", "Jungi Kim", "Iryna Gurevych", "Johannes Fürnkranz" ]
Large-scale multi-label datasets are becoming readily available, and the demand for large-scale multi-label classification algorithms is also increasing. In this work, we propose to utilize a single-layer Neural Network approach in large-scale multi-label text classification tasks with recently proposed learning techniques. We carried out experiments on six textual datasets with varying characteristics and sizes, and show that a simple Neural Network model equipped with recent advanced techniques for Neural Network components, such as the activation layer, optimization, and generalization techniques, performs as well as or even outperforms the previous state-of-the-art approaches on large-scale datasets with diverse characteristics.
[ "neural networks", "text classification", "datasets", "available", "demand", "classification algorithm", "work", "text classification tasks", "learning techniques", "experiments" ]
submitted, no decision
https://openreview.net/pdf?id=YDXrDdbom9YCi
https://openreview.net/forum?id=YDXrDdbom9YCi
ICLR.cc/2014/conference
2014
{ "note_id": [ "1QRFgLal6wk-f", "FMBUveVoQjvA1", "jArLXVnW-4AIQ", "alkAlHVICypit" ], "note_type": [ "review", "review", "review", "review" ], "note_created": [ 1391466660000, 1391405160000, 1392664560000, 1391695620000 ], "note_signatures": [ [ "anonymous reviewer 1ddd" ], [ "anonymous reviewer 3ccf" ], [ "Jinseok Nam" ], [ "anonymous reviewer 9528" ] ], "structured_content_str": [ "{\"title\": \"review of Large-scale Multi-label Text Classification - Revisiting Neural Networks\", \"review\": \"This paper tackles the problem of multi-label classification using a single-layer neural network. Several options are considered, such as thresholding, using various transfer functions, or dropout.\\n\\nThe paper is full of imprecisions and writing errors that make it hard to read (the paragraph 'Computational Expenses' of Section 2.2 is perhaps the worst from this point of view), even if it is possible to get most of the content.\\n\\nThe whole paper is based on a comparison with BP-MLL, which is quite problematic for two reasons. First, it is not clear that BP-MLL is the most suitable baseline. What motivated this choice? Their pairwise exponential loss function is not standard (most ranking methods instead choose the hinge loss). What is their architecture? We need more arguments to assess this as a strong benchmark. Besides, the comparison made in the paper between the proposed network and BP-MLL is not valid: influences of architecture, choice of transfer functions and loss are all mixed. At the end of the 'Plateaus' paragraph, it is suggested that ReLUs allow to outperform BP-MLL but no experiment with BP-MLL with ReLU has been conducted. Perhaps the loss of BP-MLL (PWE) could be as efficient as cross entropy with ReLU. Is BP-MLL used as a benchmark as a network or simply as a loss (exponential)? What are the results with hinge loss?\\n\\nMost results about neural networks are not particularly new. It is already quite well known that (1) Dropout prevents overfitting and (2) networks with ReLUs are easier to optimize than those with Tanh. In that sense, Figure 3, which is quite nice and sound, or Section 5 do not bring many new results to practitioners already used to Deep Learning (such as the ICLR audience I guess). \\n\\nThe curves from Figure 2 are more original, but I don't agree with their creation. What justifies that what is observed with such a weird network (a single hidden unit) can transfer to more generic feed-forward NNs? Besides, the conclusions regarding the 'plateaus' are not really obvious: it seems that there are plateaus for all settings. Is it expected that the curves with tanh appear to be symmetric w.r.t. 
the origin?\", \"other_comments\": [\"I don't see what means 'Revisiting Neural Networks' in the title of the paper.\", \"In Equation (2), what is the definition of $y_l$?\", \"Legend of Fig 1(b): CE w/ ReLU -> CE W/ tanh\", \"ReLU is used from the beginning of Section 2 but defined in Section 2.3.\", \"How is defined Delta for ADAGRAD?\", \"Couldn't it be interesting to try to learn thresholds using the class probabilities outputted by the network?\", \"The metrics used in experiments could be described.\", \"Discussion on the training efficiency of linear SVMs in Section 6.2 seems irrelevant here.\"]}", "{\"title\": \"review of Large-scale Multi-label Text Classification - Revisiting Neural Networks\", \"review\": \"The authors claim that a simple two-layer fully connected neural net can outperform or meet state of the art accuracies on large, multi-label classification tasks, using rectified linear units and dropout. They make novel use of L2 regression to compute a data-dependent threshold to map model outputs to class labels.\\n\\nWhile the paper is interesting, the baselines seem weak, in particular, the comparison against a neural net ranker with exponential versus cross-entropy loss. The authors make a point several times in the paper with which I disagree - that ranking approaches do not scale to large datasets. The existence of effective web search engines is strong evidence to the contrary.\\n\\n\\nSpecific comments\\n*****************\\n\\nThere are many typos and grammatical errors but few that change the meaning, so below I only mention the latter. \\n\\nEq. (2): For clarity's sake it might be worth mentioning that f_0 has range [0,1] and that the labels y are in {0,1}.\\n\\nFig. (2): Yes, but the high slope of ReLU can also cause problems if there is noise in the data (which appears as outliers).\\n\\nI think that a much better comparison than the one with BP-MLL would be to compare with RankNet, a neural net ranker that uses a cross entropy error function and thus is much closer to the cost function used in the paper. It does seem to make sense to rank all outputs corresponding to positive labels above all outputs corresponding to negative; the question left open by your comparison is whether the problem lies with the use of the exponential cost in BP-MLL. Unfortunately there are at least two confounding factors that you have not separated in comparing BP-MLL versus your method: the use of ranking at all, and the choice of ranking cost function. So I think it's important to compare against a cross entropy ranking function. \\n\\n'The ReLU disables negative activation so that the number of parameters to be learned decreases during the training.' - what do you mean?\\n\\nSection 2.3: define the Delta used in Adagrad\", \"section_3\": \"this is a nice idea, for choosing data dependent thresholds - is it novel (for multi label classification)?\\n\\nSec. 4.1: it would be useful to give brief definitions of the ranking measures used ('Rank loss, one-error, etc.'), since, except for MAP they (or at least, these names for them) are not well known.\\n\\nSec 4.2: my impression is that your choice of SVM baseline is weak (also, few details are given). Binary relevance seems like a poor choice for the multilabel task. Why not compare to SVM rankers? 
And since you're comparing to a nonlinear system, reporting results using nonlinear kernels would be good to be more complete (although linear SVMs usually do work well on text classification, this is a different task).\", \"table_2\": \"'B and R followed by BR are to represent' should read 'B and R following BR represent' and would be even better written as simply 'B and R subscripts represent'\\n\\nThe results in Table 2 seem to be mixed. Sometimes dropout helps, sometimes it hurts. Sometimes the SVMs win, often not. The only take-away that seems safe to me is that if you want the best performing system, then try all these methods (and other, stronger baselines - see above) on your data, and pick the best. I think the most interesting result is that NNs (lumping with and without dropout together) beat linear SVMs, which is usually the strongest performing method for text classification. But this task is different, and so it would have made more sense to compare against SVM rankers.\\n\\nGiven the use of ReLU transfer functions, what difference did L1 regularization on the hidden activations make?\\n\\nFig. 3a is great. Fig 3b is a little misleading, though - you chose the one dataset where adding dropout helped. What does that curve look like on a dataset where adding dropout hurt?\\n\\nSection 6.1: 'the presence of a specific label may suppress or exhibit the probability of presence of other labels' - I think you just mean that some sets of labels tend to occur together, and this is ignored in binary relevance methods, but please explain this more clearly.\"}", "{\"review\": \"Thanks for the helpful reviews. We are currently working on improving our paper along your suggestions. We noticed that we have not been clear enough about a few important points, which we would like to clarify in this first reply (more details will follow later):\\n\\n1. learning-to-rank vs. multi-label classification\\nAlthough we frequently talk about ranking in our paper, our objective is not learning-to-rank. Learning-to-rank aims at ranking a set of objects (such as documents), whereas our goal is to assign a set of labels to a given document (as opposed to conventional multi-class classification, where only a single label is assigned to the document). Multi-label classification is often framed as a ranking problem, but in this case the labels need to be ranked for a given document (as opposed to ranking the documents themselves). Many commonly used loss functions for multi-label classification focus on a good ranking of the labels (a good ranking is one where all relevant labels tend to be ranked before all irrelevant labels).\\n\\n2. The pairwise hinge loss and the pairwise exponential loss to minimize ranking loss\\nIt is said that the pairwise hinge loss in RankSVM [2] and the pairwise exponential loss in BP-MLL are natural choices as the surrogate loss for the ranking loss [4]. However, these surrogate losses are not consistent with the ranking loss that we want to minimize.\\n\\n3. Are BP-MLL and BR (linear SVMs) reasonable choices for the baselines?\\nNeural Network-based algorithms are particularly interesting for multilabel classification because they allow modeling dependencies between the occurrence of labels, whereas the standard binary relevance approach assumes that the occurrence of a label for an example is independent of the occurrence of other labels for these examples, an assumption that is typically wrong in practice. 
BP-MLL is a well-known NN architecture which exploits a pairwise error function instead of the traditional cross entropy loss.\\nThe main claim that we want to make in this paper is that Neural-Network based approaches to multilabel classification may benefit from several recent advancements that have been developed in Deep Learning, as well as that the minimization of the pairwise error function may be replaced with something simpler such as the cross entropy loss.\\nThus BP-MLL is a natural benchmark, because this is the prototypical Neural-Network-based multilabel classification algorithm, which is often used as a baseline [5].\\nBinary relevance, on the other hand, is in many domains not a strong baseline for the reasons discussed above, but in particular in text domains it is still commonly used and has shown very good results. Several recent works have shown that the BR approach may outperform its counterparts which consider dependencies between labels. Also note that recent analyses have shown that ranking losses may be minimized by loss functions on the individual labels [1, 4], which may be part of an explanation why binary relevance with SVMs as base classifiers tends to perform well in practice even though it does not take label dependencies into account.\\n\\nReviewer 1\\n\\n- Section 3: this is a nice idea, for choosing data dependent thresholds - is it novel (for multi-label classification)?:\\nThe basic idea of this sort of threshold has been discussed in several papers [2, 8, 6]. Instead of minimizing bipartite misclassification errors, which is the sum of the number of false positives and false negatives, we use the F1 score as a reference measure.\\n\\n- Given the use of ReLU transfer functions, what difference did L1 regularization on the hidden activations make?:\\nWe follow the recent findings from [8] where deep neural networks are constructed by using ReLUs at the hidden layers, together with L1 regularization on the activations. Even though, as we expected, stronger L1 regularization makes the average value of positive hidden activations decrease, we did not find meaningful differences.\\n\\n- Fig. 3a is great. Fig 3b is a little misleading, though - you chose the one dataset where adding dropout helped. What does that curve look like on a dataset where adding dropout hurt?:\\nThe performance of NNs with or without dropout is highly correlated with the properties of the datasets. When we train NNs on the EUR-Lex and Delicious datasets, the networks tend to overfit severely, as shown by the red dashed lines in Figure 3 (b). Both datasets have a relatively large number of labels compared to the number of training documents and unique terms. Precisely, on the Delicious dataset, the number of distinct labels is larger than the number of unique words, and each document is associated with nearly 20 labels on average. Additionally, one third of the labels on the EUR-Lex dataset does not appear in the training data split, that is, the label distributions of the training and test data are different. To prevent overfitting due to such characteristics of datasets, we tried to regularize the models with L1 and L2 penalty as well as Dropout, but only Dropout works. On the rest of the datasets, even though we increase the number of units in the hidden layer up to 2000 and 4000, no such severe overfitting is observed. We conjecture that this is why dropout does not help training. 
In that case, otherwise, it introduces undesirable noise to the models.\\nWe will include figures for the cases where dropout does not help.\\n\\nReviewer 2\\n\\n- the comparison made in the paper between the proposed network and BP-MLL is not valid: influences of architecture, choice of transfer functions and loss are all mixed. In the end of the 'Plateaus' paragraph, it is suggested that ReLUs allow to outperform BP-MLL but no experience with BP-MLL with ReLU has been conducted. Perhaps the loss of BP-MLL (PWE) could be as efficient as cross entropy with ReLU. Is BP-MLL used as a benchmark as a network or simply as a loss (exponential)? What are the results with hinge loss?:\\nWe used BP-MLL as it has been proposed where the hidden units and the output units are tanh, and the error function is the pairwise error function. In comparison of ReLU and tanh in the hidden layer of BP-MLL, tanh often performs better than ReLU. We will add additional experimental results with respect to the type of hidden units in BP-MLL.\\n\\n- The curves form Figure 2 are more original but I don\\u2019t agree with its creation. What justifies that what is observed with such a weird networks (a single hidden unit) can transfer to more generic feed-forward NNs?:\\nIt is hard to draw error as a function of parameters in general neural networks as in Figure 2 because the number of all possible configurations of parameters in NN equals to '1 + the number of hidden layers' assuming that each hidden layer has a single unit. \\n\\n- the conclusions regarding the \\u201dplateaus\\u201d are not really obvious: it seems that there are plateaus for all settings.:\\nWe can only say given all the same settings such as input data, output targets and weight initialization methods, a curve is more steep than the other. What we want to show in Figure 2 is that the use of different cost functions yields the different curves where PWE consists of larger plateaus compared to CE. Obviously, using the ReLU unit in the hidden layer gives rise to absolutely flat region as in Figure 2(b), which implies it is impossible to escape from such a region once the hidden unit\\u2019s activation is determined to zero. That is a downside of the use of ReLUs as Reviewer 1 pointed out.\\n\\n- Couldn\\u2019t it be interesting to try to learn thresholds using the class probabilities outputted by the network?:\\nTo learn instance-wise threshold predictor, the class probabilities are used to estimate the best performing threshold on training instances from which we learn the threshold predictor. If one wants to train label-wise threshold predictor, a variant of ScutFBR might be considered [7, 3].\\n\\n\\nReferences\\n[1] Krzysztof Dembczynski, Wojciech Kotlowski, and Eyke H\\u00fcllermeier. Consistent multilabel ranking through univariate losses. In ICML, 2012.\\n[2] Andr\\u00e9 Elisseeff and Jason Weston. A kernel method for multi-labelled classification. In NIPS, 2001.\\n[3] Rong-En Fan and Chih-Jen Lin. A study on threshold selection for multi-label classification. Technical Report, National Taiwan University, 2007.\\n[4] Wei Gao and Zhi-Hua Zhou. On the consistency of multi-label learning. In COLT, 2011.\\n[5] Grigorios Tsoumakas, Ioannis Katakis, and Ioannis Vlahavas. Random k-Labelsets for Multilabel Classi- fication. IEEE Trans. Knowl. Data Eng., 23(7):1079\\u20131089, 2011.\\n[6] Yiming Yang and Siddharth Gopal. Multilabel classification with meta-level features in a learning-to-rank framework. 
Machine Learning, pages 1\\u201322, 2011.\\n[7] Yiming Yang. A study of thresholding strategies for text categorization. In SIGIR, 2001.\\n[8] Min-Ling Zhang and Zhi-Hua Zhou. Multilabel Neural Networks with Applications to Functional Genomics and Text Categorization. IEEE Trans. Knowl. Data Eng., 18(10): 1338\\u20131351, 2006.\"}", "{\"title\": \"review of Large-scale Multi-label Text Classification - Revisiting Neural Networks\", \"review\": \"The paper describes a series of experiments for multi-labeled text classification using neural networks. The classification model is a simple one hidden layer NN integrating rectifier units and dropout. A comparison of this model with baselines (Binary relevance and a ranking NN) is performed on 6 multi-labeled datasets with different characteristics.\\nThis is an experimental paper. The NN model is a classical MLP and the only new algorithmic contribution concerns the prediction of decision thresholds for binary decisions. On the other hand, the experimental comparison is extensive and allows the authors to examine the benefits of recent improvements for NNs on the task of text classification. The datasets' characteristics are representative of different text classification problems (dataset and vocabulary size, label cardinality). Different loss functions are also used for the evaluation. The paper could then be useful for popularizing NNs for text classification.\\nI am not sure that BP-MLL is a reference algorithm for learning to rank, and there are probably better candidates. On the other hand it is true that the simple binary framework (BR in the paper) is a strong baseline despite its simplicity, so that the results could be considered as significant. An extension of this work could be to consider large scale problems (both in the vocabulary size and in the number of categories, since several benchmarks are now available with a very large number of classes \\u2013 see for example the LSHTC challenges). A deeper discussion on the respective complexity of the different approaches and of their behavior when the number of training examples varies (learning curves) would strengthen the paper.\"}" ] }
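The data-dependent thresholding discussed in the record above (estimating, for each training instance, the cutoff on the network's label scores that maximizes the F1 score, which then serves as the target for a threshold predictor) can be sketched as follows. This is an illustrative sketch of that first step only, not the authors' implementation; the second step of fitting a regressor from score vectors to thresholds is omitted, and names such as best_threshold_f1 are invented.

```python
import numpy as np

def f1(pred, true):
    """Example-based F1 between two binary label vectors."""
    tp = np.sum(pred * true)
    if tp == 0:
        return 0.0
    precision = tp / np.sum(pred)
    recall = tp / np.sum(true)
    return 2 * precision * recall / (precision + recall)

def best_threshold_f1(scores, true_labels):
    """Pick the cutoff on one instance's label scores that maximizes F1.

    Candidate thresholds are midpoints between consecutive sorted scores,
    so every distinct labelling induced by a cutoff is tried once."""
    order = np.sort(scores)
    candidates = (order[:-1] + order[1:]) / 2.0
    best_t, best_f1 = order[-1] + 1.0, 0.0  # default: predict no labels
    for t in candidates:
        score = f1((scores >= t).astype(int), true_labels)
        if score > best_f1:
            best_t, best_f1 = t, score
    return best_t

# Toy example: 5 labels, 2 of them relevant (hypothetical numbers).
scores = np.array([0.9, 0.7, 0.3, 0.2, 0.1])
true = np.array([1, 1, 0, 0, 0])
t = best_threshold_f1(scores, true)
print(t, (scores >= t).astype(int))  # cutoff separating the two relevant labels
```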
Hq5MgBFOP62-X
OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks
[ "Michael Mathieu", "Yann LeCun", "Rob Fergus", "David Eigen", "Pierre Sermanet", "Xiang Zhang" ]
We present an integrated framework for using Convolutional Networks for classification, localization and detection. We show how a multiscale and sliding window approach can be efficiently implemented within a ConvNet. We also introduce a novel deep learning approach to localization by learning to predict object boundaries. Bounding boxes are then accumulated rather than suppressed in order to increase detection confidence. We show that different tasks can be learnt simultaneously using a single shared network. This integrated framework is the winner of the localization task of the ImageNet Large Scale Visual Recognition Challenge 2013 (ILSVRC2013), and produced near state of the art results for the detection and classification tasks. Finally, we release a feature extractor from our best model called OverFeat.
[ "localization", "detection", "overfeat", "integrated recognition", "convolutional networks", "integrated framework", "convolutional networks overfeat", "classification", "multiscale", "window" ]
submitted, no decision
https://openreview.net/pdf?id=Hq5MgBFOP62-X
https://openreview.net/forum?id=Hq5MgBFOP62-X
ICLR.cc/2014/conference
2014
{ "note_id": [ "AiU-7_Wwg37jx", "QV0KQRSaXWk1w", "11m21yPQdjv49", "yGNSGHgls9Irb", "yuF4yCcCBOna3" ], "note_type": [ "review", "review", "review", "review", "review" ], "note_created": [ 1391813220000, 1392094080000, 1393276320000, 1391644680000, 1392859140000 ], "note_signatures": [ [ "anonymous reviewer be85" ], [ "anonymous reviewer c233" ], [ "David Eigen" ], [ "Liangliang Cao" ], [ "anonymous reviewer 4a93" ] ], "structured_content_str": [ "{\"title\": \"review of OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks\", \"review\": \"The paper presents a method for object recognition, localization and detection that uses a single convolutional neural network to locate and classify objects in images. \\u00a0The basic architecture is similar to Krizhevsky\\u2019s ImageNet 2012 system, but with modifications to apply it efficiently in a \\u201csliding window\\u201d fashion and to produce accurate bounding boxes by training a regressor to predict the precise position of the bounding box relative to the sliding window detector. \\u00a0Numerous tweaks are documented for making this system work well including: \\u00a0multi-scale evaluation over widely-spaced scales, custom shifting/pooling at the top layers to help compensate for spatial subsampling of the network, per-class regressors to localize objects relative to the input window, and a simple merging procedure to combine the regressed boxes into final detections. \\u00a0This method is shown to achieve state-of-the-art results on ImageNet localization and detection tasks while also being relatively fast.\\n\\nOverall, this paper presents a very thorough accounting of a fully functioning detection pipeline based on convnets that is the top performer on one of the toughest vision tasks around. \\u00a0One of the challenges with reporting results like this is to make them reproducible, and I think this paper includes all of the details that a researcher would need to do so, which is really excellent. \\u00a0\\n\\nThere is currently a lot of work on detection architectures (e.g., from Erhan et al.) but this one is fairly complete and high-performing. \\u00a0So, while there aren\\u2019t huge new ideas here, considering the depth of experiments and the cornucopia of tricks for maximizing performance the work looks very worthwhile.\", \"pros\": \"End-to-end training of the entire detection and localization pipeline. \\u00a0The decomposition into 3 clean stepping stones (classifier, localizer, detector) is a nice strategy.\\n\\nState-of-the-art detection performance on Image-Net.\", \"cons\": \"Somewhat \\u201cspecialized\\u201d convnet architecture to deal with subsampling issues and multi-scale (e.g., it is mentioned that the detector of Fig 11 also uses multiple scales for context)\", \"other\": \"The text is very detailed in order to make the system reproducible. \\u00a0This is great, but perhaps some of the tables and minor notes [parameter settings, etc.] could be moved to an appendix to tighten up the text.\"}", "{\"title\": \"review of OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks\", \"review\": \"The authors present a system that is capable of classification, localization and detection, to the advantage of all three. The starting point of Krizhevsky\\u2019s 2012 work is adapted to produce an architecture capable of simultaneously solving all tasks, with the feature extraction component being shared across tasks. 
They propose to scan the entire convolutional net across the entire image at several scales, reusing the relevant computations from overlapping regions already processed, making both classification and (per-class) bounding box predictions at each and a clever scheme for aggregating evidence for the bounding box across predictions, and lots of other tricks. They evaluate on ILSVRC 2012 and 2013 tasks with excellent results.\", \"novelty\": \"low-medium\", \"quality\": [\"high\", \"Pros\", \"This is an excellent piece of applied vision research. The results on ILSVRC speak for themselves, really.\", \"It brings to light some potentially non-obvious (to a convolutional networks neophyte) advantages of convolutional nets for tasks like this, namely that dense application\", \"Details are copiously documented: this is an excellent example of authors paying serious mind to the reproducibility of their work, a tendency that is sorely lacking in computer vision and machine learning in general. Please keep up the good work in future publications.\", \"Cons\", \"There isn\\u2019t a lot of methodological novelty, although there is some in the tricks employed to deal with subsampling, etc. (to my knowledge, these are novel). That said, it is a tour-de-force application paper, so I don\\u2019t see this as a serious drawback.\", \"The only serious barrier to publication I see is clarity of exposition in certain parts.\", \"Some discussion of not just which hyperparameters were chosen but how and why, including some rationale for departures from Krizhevsky\\u2019s architecture, would be nice (e.g. the learning rate schedule, your choice to drop contrast normalization, non-overlapping pooling regions, etc.). Providing the details as statement of fact is good, but insights into how you made some of these decisions would make for more compelling reading, especially to those familiar with the Krizhevsky work.\", \"The organization of the paper could also use work: essential ideas should be distilled into an 8ish page manuscript and the details (which, as I said, are an extremely positive feature of this paper) relegated to an appendix.\"], \"detailed_comments\": [\"If you have some rough idea of the relative importance of horizontal flipping and other tricks described in 3.3, it would be useful to know. I don\\u2019t expect an exhaustive ablative analysis, but even an informal statement as to which elements seem to be the most critical would be interesting.\", \"The exposition on \\u201cdense application\\u201d and why this is a computational savings is less clear than it could be (section 3.5). Basically what you are trying to get across, I think, is that applying a convolutional net to every PxP window of an MxN image, where P << M and P << N, can be performed efficiently by convolving each layer of filters and doing the pooling and so on with the entire image at once, reusing computation for overlapping window regions, and thus it is much more efficient than if you had some arbitrary black box that you had to apply at every window location and reuse no computation whatsoever. However the text was very unclear on this point, and a reader with less background may not understand what you mean (which would be a shame, as this is a very important point, practically speaking). I\\u2019m sure I\\u2019ve heard this idea spoken of before -- is this the first time it\\u2019s appeared in print? 
If not, I\\u2019d make sure to include a citation.\"]}", "{\"review\": \"We would like to thank all the reviewers for their comments and feedback. We have integrated many of the suggestions into a new version (v4) of the paper, and are continuing to make revisions. This version has been submitted to ArXiv and will appear later today, on Tue, 25 Feb 2014 01:00:00 GMT.\", \"in_response_to_your_comments\": [\"'some of the tables and minor notes ... could be moved to an appendix';\", \"'essential ideas should be distilled ... and the details ... relegated to an appendix'\", \"Thanks for these suggestions. We are currently working to factor out the details and make the paper more succinct. Some progress on this has already been made in the newest version (v4), and we are now working on another revision with further editing.\", \"'exposition on \\u201cdense application\\u201d and why this is a computational savings is less clear than it could be'\", \"'clarity of exposition in certain parts'\", \"This section has been updated in the new version (v4), and should be clearer. Many other parts of the text have also been revised, and we are continuing to make edits for this.\", \"'Some discussion of not just which hyperparameters were chosen but how and why'\", \"'rough idea of the relative importance of horizontal flipping and other tricks described in 3.3, it would be useful to know'\", \"'more detail on computational efficiency/accuracy compromise'\", \"Thanks for the suggestions. We will try to discuss some of these questions more in the next revision (v5) of the paper (this has not yet been included in v4). We do not have systematic comparisons for many of these, though; further studies of them could make good followup work.\", \"'results on PASCAL'\", \"Thanks for the suggestion. This is likely not something we will be able to get done for this paper, but we agree it would be interesting to see, and may look at this in the future.\"]}", "{\"review\": \"It is very nice work and I enjoyed reading the paper. Now I believe the era of 'deformable part model' (by Felzenszwalb, McAllester, Ramanan et al) in CV detection will find its successor soon. We will witness another revolution in the field of object detection after talking about DPM for 5 years.\", \"one_comment_of_the_paper\": \"Is it possible to know the results of applying OverFeat to the PASCAL detection dataset? I am also interested in the comparison with NEC's Regionlets (No.2 place in ImageNet 2013 detection) on PASCAL.\\n\\nBut even without results on PASCAL, this paper still deserves an acceptance from any conference.\"}", "{\"title\": \"review of OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks\", \"review\": \"This paper demonstrates a convolutional architecture for simultaneously detecting and localizing ImageNet objects. It is the winner of the localization task of the ILSVRC2013 competition and that by itself makes it interesting.\\n\\nThey implement a combined architecture where the first 5 layers compute shared features that are used for both classification and localization tasks.\\n\\nA lot of detail goes into constructing the architecture in such a way as to make network windows aligned with the object. They use 6 scales increased in steps between 1.08 and 1.27 (why this arbitrary choice of steps?) 
along with 1-pixel offsets and skip feature connections at the top to prevent the stride from being too large.\\n\\nThis is a solid paper which summarizes a significant body of work in neural network design, which makes its relevance to ICLR high.\", \"some_suggestions\": \"-- I wish the authors gave more detail on the computational efficiency/accuracy compromise since that needs to be considered when running in an industrial setting. For instance, coarse vs fine stride seems to provide 1% absolute improvement, while requiring 9x more computation at the lowest level. How much does that affect total computation? This could be done by adding an extra column 'FLOPS' to Table 5.\"}" ] }
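The "dense application" point raised in the c233 review above (that applying a convolutional feature extractor to every window of an image can reuse the computation of overlapping regions by filtering the whole image once) can be checked on a toy example. This is a schematic sketch with a single linear filter and no pooling or nonlinearity, not OverFeat itself; all sizes and names are invented for the illustration.

```python
import numpy as np

def valid_xcorr(image, kernel):
    """Dense 'valid' cross-correlation: one output per kernel-sized window."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

rng = np.random.default_rng(0)
image = rng.normal(size=(32, 32))
kernel = rng.normal(size=(5, 5))
window = 12  # hypothetical classifier "input window" size

# Per-window application: crop every window, then filter each crop separately.
per_window = np.array([
    valid_xcorr(image[i:i + window, j:j + window], kernel)
    for i in range(image.shape[0] - window + 1)
    for j in range(image.shape[1] - window + 1)
])

# Dense application: filter the whole image once, then crop the feature maps.
dense = valid_xcorr(image, kernel)
fw = window - kernel.shape[0] + 1  # feature-map size of one window
shared = np.array([
    dense[i:i + fw, j:j + fw]
    for i in range(image.shape[0] - window + 1)
    for j in range(image.shape[1] - window + 1)
])

print(np.allclose(per_window, shared))  # True: overlapping computation is shared
```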
PtRd6ZOVAm7Lv
Sparse, complex-valued representations of natural sounds learned with phase and amplitude continuity priors
[ "Wiktor Mlynarski" ]
Complex-valued sparse coding is a data representation which employs a dictionary of two-dimensional subspaces, while imposing a sparse, factorial prior on complex amplitudes. When trained on a dataset of natural image patches, it learns phase invariant features which closely resemble receptive fields of complex cells in the visual cortex. Features trained on natural sounds, however, rarely reveal phase invariance and instead capture other aspects of the data. This observation is the starting point of the present work. As its first contribution, it provides an analysis of natural sound statistics by means of learning sparse, complex representations of short speech intervals. Secondly, it proposes priors over the basis function set, which bias them towards phase-invariant solutions. In this way, a dictionary of complex basis functions can be learned from the data statistics, while preserving the phase invariance property. Finally, representations trained on speech sounds with and without priors are compared. Prior-based basis functions reveal performance comparable to unconstrained sparse coding, while explicitly representing phase as a temporal shift. Such representations can find applications in many perceptual and machine learning tasks.
[ "representations", "sparse", "natural sounds", "phase", "amplitude continuity priors", "dictionary", "priors", "sparse coding", "data representation", "subspaces" ]
submitted, no decision
https://openreview.net/pdf?id=PtRd6ZOVAm7Lv
https://openreview.net/forum?id=PtRd6ZOVAm7Lv
ICLR.cc/2014/conference
2014
{ "note_id": [ "qq1NUprOkGxvM", "VzhSziEbS7fbY", "PgiHEe9RnQgSt", "GdpzZnQU7qG5f", "__7eb-mkwrzUv", "3aJi3mtYiya_u" ], "note_type": [ "comment", "comment", "review", "review", "comment", "review" ], "note_created": [ 1392737640000, 1392737580000, 1391478120000, 1391729460000, 1392737520000, 1392193380000 ], "note_signatures": [ [ "Wiktor Młynarski" ], [ "Wiktor Młynarski" ], [ "anonymous reviewer 69a6" ], [ "anonymous reviewer 92c8" ], [ "Wiktor Młynarski" ], [ "anonymous reviewer 01ce" ] ], "structured_content_str": [ "{\"reply\": \"Thank you for your review and comments.\\n\\nIf one thinks about the analytic form of complex valued basis functions (such as a complex Gabor, for instance) then real and imaginary vectors indeed form Hilbert pairs. However, the sparse complex valued coding model does not make this assumption in any way. It attempts to represent the data as a linear combination of pairs of vectors which span 2-dimensional subspaces while making the amplitudes sparse and independent. No prior assumptions on the form of vectors is made. In case of natural images, the phase invariance emerges 'for free', as a reflection of the data statistics. In other signal domains it is not necessarily the case, as the present work shows.\\n\\nThe optimization algorithm seems to be working correctly. Polar and cartesian coordinates are after all equivalent data descriptions. Additionally, these results are not the first ones which observe that complex valued basis functions learned on natural sounds are not phase invariant (please, see the response to the first review). As a control experiment I have learned a complex dictionary without priors from natural images (new figure 5 C). Results do not qualitatively differ from previously obtained, which suggests that the algorithm works properly.\\n\\nThe phase slowness prior is a non-dominating penalty term, scaled by the gamma parameter, which is smaller than 1. If higher frequencies are present in the data, they will be captured, and the prior will only bias basis functions towards ones with the smoothly changing phase (of constant frequency). As the results show - this is what happens. One can interpret this prior as a penalty of variance of the temporal derivative [26]. In such interpretation it becomes more clear that all frequencies are allowed. I have added an exploanatory sentence in the text.\\n\\nRegarding the time-frequency tiling. I am not fully sure what would 'make sense' - the obtained result is a representation learned from the data. The arches reflect temporal frequency variation of the basis functions (see figure 1 B, second row - phases are monotonic, but rather piecewise linear functions of time). A possible explanation is that real and imaginary vectors tend to diverge from each other (as in the unconstrained model), but the prior forces them to stay close on the time frequency plane. I have commented on that in the new version.\\n\\nAs mentioned in the text, subspace-based models (such as complex sparse coding) learn invariances, which can not be captured by the linear models you mention. Having a phase invariant representation allows to separate amplitude and phase information, which can not be done using a linear sparse coding algorithm (at least not easily). In tasks such as sound localization, separation of these parameters is crucial. As the introduction and conclusions section discuss, learned dictionaries are adapted to the data and at the same time make explicit aspects which are not captured by simpler models. 
From my point of view, this is the most important gain.\"}", "{\"reply\": \"Thank you for your comments and suggestions.\\n\\nFirstly, I agree that the paper does not introduce any fundamentally novel method. What may be considered as technical novelties are:\\n\\n- the fact that priors are placed on basis functions \\n- smoothness priors are placed on both: phases and amplitudes (Cadieu et al, penalized only amplitude dynamics, not phase)\\n- an additional term in the phase penalty, which enforces it's monotonicity.\\n\\nThe purpouse of the paper was not, however to introduce a novel method - it was to learn representations of a certain class of signals (natural sounds) and to study the properties of obtained features. Such representations may find applications in tasks which operate on sound data.\\nThat is why I do not think that the present paper should be directly compared with the hierarchical model introduced by Cadieu et al. especially in the context of a method novelty. Cadieu and Olshausen, constructed a hierarchical representation of natural videos with a purpouse of extracting motion invariances. The present paper learns single layer representations of natural sounds - this is a fundamental difference.\\n\\nIt is an interesting question, in which (statistical) sense sounds are different from images (please, see my response to the previous review). After all, physcially they are very different stimuli. Suggested by the results presented in [21, 22] I have introduced a brief analysis of harmonic relationships between real and imaginary vectors. A full answer to that problem requires extensive research.\", \"i_have_performed_two_kinds_of_quantitative_analysis\": \"denoising and comparison of coefficient entropies. The analysis was performed as in cited literature [14, 28]. Due to the space constraints I have not presented more details. Personally, I find presented results conclusive. The work by Karklin et al you cite analyzed the learned representation also by performing a denoising task. I do not fully understand, how should I compare the present study to their results, as you suggest.\\n\\nI understand that by compression experiment you mean the comparison of coefficient entropies. Of course, a trivial solution would be a basis set which yields 0 entropies, while not being able to reconstruct the data at all (an 'infinite' reconstruction error). As the denoising experiment shows, this is not the case for any of the learned bases. Entropy estimates give therefore an idea of a relative coding cost, and according to Shannon's source coding theorem, the model yielding the lowest coding cost is closest to the true data distribution. For a detailed discussion please refer to [14, 15]. The entropy values should be considered together with the denoising performance.\\n\\nBounding the phase derivative from below is also a possible way to enforce the phase monotonicity. However, it would require introduction of another parameter - the bound itself. The proposed prior does not require any additional parametrization. I have modified the description of equation 9 to address your suggestion.\\n\\nFor convenience, gamma and beta lie in the [0, 1] interval. The gradient moduli (before multiplying by gamma, beta and the step size) can be much larger than 1. In such a case, prior strength parameters affect the gradient step very weakly. 
That is why gradient terms are firstly normalized to have the same length, and then multiplied by gamma and beta.\"}", "{\"title\": \"review of Sparse, complex-valued representations of natural sounds learned with phase and amplitude continuity priors\", \"review\": \"The paper describes a sparse coding model with complex valued basis functions. For training, it proposes to minimize reconstruction error plus penalty terms that encourage the amplitudes and phases of the basis functions to be smooth.\\n\\nAt first sight the model seems reminiscent of Cadieu, Olshausen (2012) [4]. But in that work, it is the coefficients that are penalized to have smooth amplitudes over time, whereas here, it is the basis functions themselves that are penalized to be smooth.\\n\\nThe model is applied to time-domain speech signals (one-dimensional data). The paper compares the results of complex valued sparse coding with smoothness penalties versus complex valued sparse coding without. The comparison shows that with penalties, basis functions seem to be more localized and filters within a pair tend to have quadrature relations.\\n\\nWithout penalties they do not seem to. I find this somewhat surprising because I would have thought that minimizing reconstruction error (plus orthonormalizing filters within each pair as suggested) would already achieve this, like it does in the case of images. The paper does suggest that sound data is fundamentally different from image data. I am curious what it is about sound data that causes it to require this extra machinery for learning complex basis functions. It would be very good to have actual results on images as a control. This would also help disentangle two topics that are hard to separate in the paper, which are 1) fundamental differences in sound data versus image data, and 2) learning complex bases with and without smoothness penalties.\\n\\nI am wondering in what way the smoothness penalties are related to weight decay, or in what way they may just help find better local optima. It seems like this would be easy to check by initializing model A (no penalties) with model B (with penalties).\\n\\nThe title says 'natural sounds' but as far as I can tell, all experiments were done on a speech dataset. I'm not sure I completely agree with the statement that speech is a good enough proxy for natural sounds in general.\\n\\nThere are a lot of typos (e.g., 'Gramm-Schmidt', 'strucutre', 'analyzis').\"}", "{\"title\": \"review of Sparse, complex-valued representations of natural sounds learned with phase and amplitude continuity priors\", \"review\": \"A sparse coding model of natural sounds (speech) is proposed. The signal is represented by a complex sparse coding problem with smoothness priors on both amplitude and phase. Learning and inference proceeds as in standard sparse coding. The method is analyzed in terms of statistics of complex pairs filters as well as denoising.\\n\\nThe method is not very novel. Complex sparse coding was already introduced in the past and the sparsity priors on the amplitudes and coefficients are a straightforward extension (or simplification compared to the work by Cadieu et al.).\", \"pros\": [\"interesting application\", \"fairly clear written paper\"], \"cons\": [\"insights may be good but I probably did not fully understood them. Why are sounds inherently different from images? Is it an artifact of how the experimental set up? 
Without sparsity/smoothness constraints, the problem is clearly underdetermined and therefore filters do not necessarily converge to quadrature pairs.\", \"what is the contribution of this work compared to Cadieu et al? They had an extra layer, but the basic idea of smoothness of phase and amplitude is present also in that work.\", \"empirical validation is not sufficient because:\", \"more quantitative results would be beneficial to assess the benefits of this model. For instance, the authors may want to compare and cite:\", \"Y. Karklin, C. Ekanadham, and E. P. Simoncelli, Hierarchical spike coding of sound, Adv in Neural Information Processing Systems (NIPS), 2012\", \"some parts need clarification\", \"eq. 9: why is this a good choice? wouldn\\u2019t it be better to have it bounded below?\", \"sec. 2.2 why rescaling the gradients when there are beta and gamma?\", \"in the compression experiment, shouldn\\u2019t the reconstruction error be taken into account?\", \"Overall, this is interesting work. However, several clarifications are required in order to better assess novelty and to understand the method. Also, the empirical validation should be strengthened.\"]}", "{\"reply\": \"Thank you for your review and interesting comments. In a new paper version I have introduced suggested modifications and related to points you make.\\n\\nAs you mention, the results of training complex-valued sparse codes on natural sounds can be unexpected if one keeps the intuition from natural image statistics. One of the main messages of the paper is that this intuition does not necessarily translate between signal domains. This perhaps should not be very surprising, after all those signals arise as a result of fundamentally different physical processes. This is not the first paper which observes that same statistical models trained on natural sounds and images yield different results [22, 25] (please note that I use literature indexing according to the newest version of the paper). \\n\\nIt has been suggested that statistical models (such as topographic ICA, which is closely related to complex valued sparse coding) capture non-local cross-frequency correlations of natural sounds [21, 22]. Correlations of natural image patches are local, and that is why dictionaries trained on those two signals reveal very different structure. In the new version of the paper, I have included analysis of harmonic relationships between peaks of the basis function spectra. This may be an initial explanation why learned basis functions are not phase invariant and have different frequency peaks in their real and imaginary parts.\\n\\nIt would be hard to perform the comparison of results obtained on images and sounds. Priors introduced here are defined over one-dimensional temporal domain and are placed on basis functions. When learning natural image representations, basis functions capture spatial, not temporal relationships. Additionally they are two-dimensional, therefore the 'temporal slowness' penalty should be transformed into 'spatial smoothness' penalty and the relationship between those is non-obvious. Differences between images and sounds are, however not a fundamental focus of the current paper - it is the learning of sound representations. As a control experiment, I ran the unconstrained algorithm also on natural images. Resulting exemplary basis functions are now depicted on figure 5 C.\\n\\nNon-penalized basis functions form most probably a more efficient representation of the data. 
This can be inferred by looking at their performance in a denoising task and coefficient entropies. I have performed the experiment you suggested (using penalized basis as initial conditions for non-penalized learning) and included the results in the new version. After 30000 iterations of learning without smoothness priors, phase invariant basis functions deviate from the quadrature pair form (see new figure 5 A and B). This suggests that quadrature solutions do not constitute better local optima than unconstrained ones. As an additional control experiment I have learned a complex dictionary without priors from natural images (new figure 5 C). Results do not qualitatively differ from those previously obtained.\\n\\nI also understand your concern regarding the use of speech as a proxy for general natural sounds. Although speech does not include all possible acoustic structures present in the auditory environment, it contains both harmonic and non-harmonic features. Speech has been used before as a natural sound representation [1,5,13,19] and for those reasons I decided to use it here. Results obtained using different classes of natural sounds yield qualitatively similar results, and they were not included for simplicity and because of space constraints.\\n\\nI have also corrected a number of typos in the new version.\"}", "{\"title\": \"review of Sparse, complex-valued representations of natural sounds learned with phase and amplitude continuity priors\", \"review\": \"This paper shows that imposing a prior over the basis functions in a complex representation of sound results in bases that are closer to Hilbert pairs, with smooth amplitude envelope and linear phase precession.\\n\\nIt is not clear why imposing the prior directly on the basis functions is necessary. If you think of the complex pair as a phase-shiftable basis function, then it would make sense for the real and imaginary parts to be related by the Hilbert transform. It makes me wonder whether the optimization was done correctly in inferring the sparse amplitudes - i.e., the phase must be allowed to steer to the optimal position, yielding a sparse representation. It appears the gradients were computed with respect to the real and imaginary parts of the coefficients, rather than the amplitude and phase, which may be why the phase is not being properly inferred.\\n\\nThe slowness prior on the phase doesn't make sense - this would bias the bases toward low frequencies, no? Some comment seems warranted.\\n\\nThe learned tiling in time-frequency doesn't make much sense. What is causing the arching pattern? It's not clear.\\n\\nMost of all, it's not clear what we gain from this representation beyond previous attempts to learn a sparse representation of sound (Smith & Lewicki). It would have been nice to compare coding efficiency and so forth against a purely real (vs. complex) representation.\"}" ] }
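The thread above describes the training objective only in words: reconstruction error, sparse and independent amplitudes, and smoothness priors placed on the basis functions' amplitude envelope and phase, with the phase prior interpretable as a penalty on the variance of the temporal phase derivative. The Python sketch below is an illustrative reading of that description, not the author's implementation; the array shapes, exact penalty forms, and the weights lam, beta, and gamma are assumptions.

    import numpy as np

    def complex_sc_objective(X, W, A, phi, lam=0.1, beta=0.5, gamma=0.5):
        # X: (T, N) real signal windows; W: (T, J) complex basis functions
        # A: (J, N) non-negative coefficient amplitudes; phi: (J, N) coefficient phases
        Z = A * np.exp(1j * phi)                  # complex coefficients
        recon = np.real(W @ Z)                    # reconstruction uses the real part
        rec_err = 0.5 * np.sum((X - recon) ** 2)
        sparsity = lam * np.sum(A)                # sparse amplitude prior
        # smoothness priors placed on the basis functions themselves, not the coefficients
        amp = np.abs(W)
        ph = np.unwrap(np.angle(W), axis=0)
        amp_smooth = beta * np.sum(np.diff(amp, axis=0) ** 2)    # smooth amplitude envelope
        dph = np.diff(ph, axis=0)
        ph_slow = gamma * np.sum((dph - dph.mean(axis=0)) ** 2)  # variance of the phase derivative
        return rec_err + sparsity + amp_smooth + ph_slow

In this reading, setting beta and gamma to zero recovers the unconstrained complex sparse coding model that the replies use as a comparison.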
7Y52YHDS2X7ae
Zero-Shot Learning by Convex Combination of Semantic Embeddings
[ "Tomas Mikolov", "Andrea Frome", "Samy Bengio", "Jonathon Shlens", "Yoram Singer", "Greg S. Corrado", "Jeffrey Dean", "Mohammad Norouzi" ]
Several recent publications have proposed methods for mapping images into continuous semantic embedding spaces. In some cases the semantic embedding space is trained jointly with the image transformation, while in other cases the semantic embedding space is established independently by a separate natural language processing task, and then the image transformation into that space is learned in a second stage. Proponents of these image embedding systems have stressed their advantages over the traditional n-way classification framing of image understanding, particularly in terms of the promise of zero-shot learning -- the ability to correctly annotate images of previously unseen object categories. Here we propose a simple method for constructing an image embedding system from any existing n-way image classifier and any semantic word embedding model, which contains the n class labels in its vocabulary. Our method maps images into the semantic embedding space via convex combination of the class label embedding vectors, and requires no additional learning. We show that this simple and direct method confers many of the advantages associated with more complex image embedding schemes, and indeed outperforms state of the art methods on the ImageNet zero-shot learning task.
[ "learning", "convex combination", "semantic embedding space", "images", "cases", "image transformation", "image", "advantages", "simple", "semantic embeddings" ]
submitted, no decision
https://openreview.net/pdf?id=7Y52YHDS2X7ae
https://openreview.net/forum?id=7Y52YHDS2X7ae
ICLR.cc/2014/conference
2014
{ "note_id": [ "LyRby_-q2onfK", "44qhlhKQh31nZ", "R2JU265o6CRLV", "FWp1FD6f1Qa45", "KkSQkxO6x4jcH" ], "note_type": [ "review", "review", "review", "review", "review" ], "note_created": [ 1392558540000, 1391687340000, 1392853980000, 1391734380000, 1392310800000 ], "note_signatures": [ [ "anonymous reviewer 8598" ], [ "anonymous reviewer 936e" ], [ "Mohammad Norouzi" ], [ "Sumit Chopra" ], [ "anonymous reviewer 06d4" ] ], "structured_content_str": [ "{\"title\": \"review of Zero-Shot Learning by Convex Combination of Semantic Embeddings\", \"review\": \"This paper presents a simple but really neat idea of combining semantic word vectors trained on text with a softmax classifier's output.\\nInstead of taking the softmax output as is, it uses its probabilities to weigh the semantic vectors of all the classes which allows the model to assign labels that were not present in the training data.\\n\\nThe results are not always better than previous work from the group but in many settings they are.\\n\\nSimple but overall very cool idea.\"}", "{\"title\": \"review of Zero-Shot Learning by Convex Combination of Semantic Embeddings\", \"review\": \"This paper addresses the problem of zero shot learning for image classification. Like in their recent NIPS2013 work, the authors relies on an embedding representation of the classes inferred from a language model. Their prediction scheme first predicts an embedding vector which linearly combines the vectors representing the n-best prediction among the training classes and then looks for the nearest neighbors of the predicted vector among the test class embedding.\\n\\nThe paper reads well and appropriate reference to related work is given. The proposed approach is very simple and yet improve over the DEVISE classifier. I have however a concern regarding the results which either indicates (i) the implementation differs from the paper description, (ii) I missed something. It seems to me that hit@1 for ConSE(1) predicts the embedding of the best prediction of the 'Softmax baseline' over the 1,000 training classes, i.e. f(x) = s(y_0(x,t)) and outputs its nearest neighbor in the search space. When the search space include the training classes, it should output y_0(x,t). This implies that in table 4, the first column should contain hit@1 for ConSE(1) = 55.6. Similarly, the result of hit@1 for ConSE(1) in the (+1K) results should be 0. This is not the case, could you explain/correct?\\n\\nApart from this technicality, this is a good paper. The approach is simple and improves the state of the art. It might be further improved by a deeper analysis of the errors, possibly grouping them according by type of classes, looking at the accuracy of the convnet on the classes related to the labels or the quality of the corresponding text embeddings.\"}", "{\"review\": \"We thank reviewers for their valuable feedback. We will prepare a new version of the paper shortly to address your comments.\", \"r1\": \"... concern regarding the results which either indicates (i) the implementation differs from the paper description, (ii) I missed something ...\", \"r2\": \"... how significant are the results. ... perhaps ConSE outperforms DeVISE, but performance are very low ... Can we consider that such poor performance is still actually meaningful and useful in some way?\\n\\nAlthough the performances do not look great (5% on hits@10), they are somewhat much better in reality. 
Humans are robust to minor mistakes between very similar concepts (e.g., multiple types of sea lion, or types of hamster), but our flat metric is not good at rating minor mistakes higher than major errors. If the model predicts 'Australian sea lion' but the correct label is 'Steller sea lion', then we get a score of zero according to the flat metric. Figure 1 shows some actual predictions of the model, which demonstrates that even when we fail to return the right answer (most of the time), we often return very reasonable labels. So, if the question is whether this is useful for applications, the answer is clearly we are moving in that direction.\", \"sumit_chopra\": \"... simple weighted averages of the embeddings of a collection of words is a by-product of the linear model used to train the word embeddings ...\\n\\nWe do not argue that any word embedding model used within our framework is equally effective. However, our observation is that we did not fine-tune our algorithm for any specific word embedding representation. This suggests that our algorithm is relatively robust to the specific choice of the word embedding representation, and in fact, we obtained some promising results on the use of a very different word embedding model within exactly the same model. That said, we agree that the topology of the manifold of the word embedding vectors matters, and some word embedding representations might be more suitable for our framework than others. We will reword the paper to clarify that we claim some degree of robustness, and not invariance against the choice of the language model.\"}", "{\"review\": \"A very interesting paper that proposes an extremely simple technique of mapping words and images to the same latent space for facilitating zero-shot learning. The paper shows how such a simple technique beats the recently proposed and more complicated DeVISE model. A few minor points though:\\n\\n1. As discussed by the above reader, ConSE(1) is exactly the same as Softmax baseline, especially when we include the 1K classes from the training set. The results table seem to suggest something else. What's going on? \\n\\n2. The evaluation based on excluding and including the 1K classes from the training set seems to be a bit artificial. In the real deployment scenario one does not have access to this information and hence by default one should have the classes from training set as part of your evaluation classes. And that is the true performance of ConSE(*). \\n\\n3. Lastly, I don't really buy the author's argument that the proposed model is general and independent of how the word embeddings or the image features are generated. While I agree with the image part (that the model is independent of how the image features are generated), the same is not true for the word embeddings. My sense is that one is able to meaningfully take simple weighted averages of the embeddings of a collection of words is a by-product of the linear model used to train the word embeddings. If one was to use a non-linear model (NN for instance) so that the embeddings lie on a non-linear manifold, things might not work as well. Any thoughts? \\n\\nOtherwise, a very nice paper. Well written too.\"}", "{\"title\": \"review of Zero-Shot Learning by Convex Combination of Semantic Embeddings\", \"review\": \"This paper proposes a method for performing zero-shot learning of an image labeling system. 
The proposed method is very simple and yet general and efficient: it consistently outperforms the DeVISE system presented recently on the ImageNet benchmark.\\n\\nThe paper is nicely written and ConSE is actually so simple that the paper is not too complicated to understand anyway. But I have no problem accepting a paper even if the method is simple, if it proves to be efficient. ConSE appears to be, but some questions/comments remain.\\n\\nThe method is simple so I would have expected more studies/experiments/discussions to explain its good performance. A simple intuition is given in Section 4.1 with the (funny) 'liger' example. But this could perhaps be detailed more. For instance:\\n\\n- it is claimed several times that ConSE can be used with any semantic embeddings. Is this really true? According to the 'liger' example, ConSE works because s(liger) ~ 0.5*s(tiger) + 0.5*s(lion). It is true for the skip-gram embeddings, since it has been shown that such linear relationships (and translations) exist among those embeddings. I'm not sure that one can claim that all word embeddings work the same way and have such linear relationships. Without such a property within the embedding space, would ConSE still perform well?\\n\\n- how important is it to normalize the T top probabilities of the combination? Doing so, they are implicitly calibrated on the train labels, whereas one would prefer to calibrate them on train + test labels. In the conclusion, there is an interesting comment regarding the norm of the convex embedding combination, especially when probabilities are not normalized, indicating that it gives a measure of the confidence of the prediction. I feel like the main point of the paper might be there but the paper does not exploit it well. Basically, one of the most difficult problems in zero-shot learning is to detect whether to choose a label among training labels or test labels (before even trying to choose the right one). That's why for me the most interesting (and realistic) experiments of the paper are when train labels are also added to the candidate label sets (+1K setting). These show that the bias towards training labels is big, especially at top-1 (this is not surprising). The intuition about the norm of the convex combination and its connection to confidence seems to be promising to soften this bias, but this is just sketched unfortunately.\\n\\nI wonder why the performance of ConSE(1) in hits@1 is not 0 in the +1K setting. If I understand correctly, the output of ConSE(1) is simply the embedding of the top-predicted train label and hence, the closest according to the cosine distance should be this very train label. Since no test example is labeled with a train label, it should always be a mistake.\\n\\nOn a more general point, it could also be discussed how significant the results are. I mean, perhaps ConSE outperforms DeVISE, but performance is very low (5% of hits@10 in the most general setting). Can we consider that such poor performance is still actually meaningful and useful in some way?\", \"minor\": [\"Tables 1 & 4 are in %, whereas tables 2 & 3 are not. It should be consistent.\"]}" ] }
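The ConSE prediction rule debated in the reviews above (normalize the top-T softmax probabilities, take the convex combination of the corresponding label embeddings, then rank candidate labels by cosine similarity) can be written down in a few lines. The sketch below follows the description in the abstract and the review thread; the function and variable names are illustrative and are not taken from the authors' code.

    import numpy as np

    def conse_predict(probs, train_emb, cand_emb, T=10):
        # probs: (n,) softmax output over the n training classes for one image
        # train_emb: (n, d) word embeddings of the n training class labels
        # cand_emb: (m, d) embeddings of the candidate (e.g. zero-shot) labels
        top = np.argsort(probs)[::-1][:T]          # T most probable training classes
        w = probs[top] / probs[top].sum()          # renormalized top-T probabilities
        f = w @ train_emb[top]                     # convex combination of label embeddings
        f = f / (np.linalg.norm(f) + 1e-12)
        c = cand_emb / (np.linalg.norm(cand_emb, axis=1, keepdims=True) + 1e-12)
        scores = c @ f                             # cosine similarity to each candidate label
        return np.argsort(scores)[::-1]            # candidate labels ranked for hit@k metrics

With T=1 and the training labels included among the candidates, this reduces to returning the softmax classifier's own top prediction, which is exactly the ConSE(1)/+1K consistency question raised by the reviewers.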
4diyarNwq84_Q
Can recursive neural tensor networks learn logical reasoning?
[ "Samuel R. Bowman" ]
Recursive neural network models and their accompanying vector representations for words have seen success in an array of increasingly semantically sophisticated tasks, but almost nothing is known about their ability to accurately capture the aspects of linguistic meaning that are necessary for interpretation or reasoning. To evaluate this, I train a recursive model on a new corpus of constructed examples of logical reasoning in short sentences, like the inference of 'some animal walks' from 'some dog walks' or 'some cat walks,' given that dogs and cats are animals. The results are promising for the ability of these models to capture logical reasoning, but the model tested here appears to learn representations that are quite specific to the templatic structures of the problems seen in training, and that generalize beyond them only to a limited degree.
[ "logical reasoning", "neural tensor networks", "ability", "accompanying vector representations", "words", "success", "array", "sophisticated tasks", "nothing" ]
submitted, no decision
https://openreview.net/pdf?id=4diyarNwq84_Q
https://openreview.net/forum?id=4diyarNwq84_Q
ICLR.cc/2014/conference
2014
{ "note_id": [ "QQAGQEf1wi5bW", "h2PHdAgaU72jV", "O1lo-X7Di_-hu", "BBE-xcPXoDBlw", "-cDBcO5EQw7xT", "aGu4aBlHdVN-C", "XjeCUHWZr1Um5", "30Yg49nHOtQfr", "al48JYvqPDJr_" ], "note_type": [ "comment", "review", "review", "review", "review", "comment", "review", "review", "review" ], "note_created": [ 1391838540000, 1392518100000, 1391722020000, 1391555160000, 1392446640000, 1391838420000, 1391787840000, 1391234280000, 1387867380000 ], "note_signatures": [ [ "Sam Bowman" ], [ "Sam Bowman" ], [ "anonymous reviewer 7747" ], [ "Sam Bowman" ], [ "Sam Bowman" ], [ "Sam Bowman" ], [ "anonymous reviewer e76d" ], [ "anonymous reviewer 8e44" ], [ "Sam Bowman" ] ], "structured_content_str": [ "{\"reply\": \"Thanks for your comment. I absolutely intend for this paper to describe a reproducible result, and I would hope that the citations and provided code would clarify any details that were omitted in the text. I would appreciate it if you could let me know what details you found unclear. If your concerns are centered on the random noise in the results, and the issues related to early stopping, I do see that as a real issue. I am working to find a way to either encourage the model to converge more reliably, or else to at least report statistics over its behavior across runs.\\n\\nThe paper does contain some negative results as you suggest\\u2014the model was only successful at some parts of the task\\u2014and I would like to explore those results as fully as possible. Is there anything in particular about the reporting of these results that you think could be clearer or more thorough?\"}", "{\"review\": \"While the arXiv paper is being held in the queue before publication, you can view the revised paper using this temporary link:\", \"http\": \"//www.stanford.edu/~sbowman/arxiv_submission.pdf\"}", "{\"title\": \"review of Can recursive neural tensor networks learn logical reasoning?\", \"review\": \"The paper tries to determine whether representations constructed with recursive embeddings can be used to support simple reasoning operations. The essential idea is to train an additional comparison layer that takes the representations of two sentences and produces an output that describes the relation between the two sentences (entailment, equivalence, etc.) This approach is in fact closely related to the 'restricted entailment operator' suggested near the end of Bottou's white paper http://arxiv.org/pdf/1312.6192v3.pdf. Experiments are carried out using a vastly simplified language and Socher's supervised training technique. According to the author, the results are a mixed bag. On the one hand, the system can learn to reason on sentences whose structure matches that of the training sentences. On the other hand, performance quickly degrades when using sentences whose structure did not appear in the training set.\\n\\nMy reading of these results is much more pessimistic. I find completely unsurprising that the system can learn to 'reason' on sentences with known structure. On the other hand, the inability of the system to reason on sentences with new structure indicates that the recursive embedding network did not perform what was expected. The key of the recursive structure is to share weights across all applications of the grouping layer. This weight sharing was obviously insufficient to induce a bias that helps the system generalize to other structures. 
Whether this is a simple optimization issue or a more fundamental problem remains to be determined.\\n\\nMy understanding is that the author always trains the system using the correct parsing structure in a manner similar to Socher's initial work (please confirm). It would be very interesting to investigate whether one obtains substantially different results if one trains the system using incorrect parsing structures (either a random structures or a left-to-right structure). Worse results would indicate that the structure of the recursive embeddings matters. Similar results would confirm the results reported in http://arxiv.org/abs/1301.2811 and strongly suggest that recursive embeddings do not live up to expectations. This would of course be a negative results, but negative results are sometimes more informative than mixed bags (and in my opinion very worth publishing.)\"}", "{\"review\": \"Thanks for your comments. I am updating the paper now with some clarifications and typo repairs, and I'm in the process of setting up a few follow up experiments.\", \"section_2\": \"Thanks for pointing out the unclear bits here, especially that \\u201csome dogs bark\\u201d example, which I seem to have broken during some hasty final revisions. I'll post an updated version with this fixed shortly.\", \"to_clarify_some_details_here\": [\"\\u201cSome' is, in fact, upward monotone in both arguments.\", \"D is the domain of containing all possible objects of the type being compared.\", \"The \\u201c^\\u201d symbol in column three was typeset incorrectly, and is meant to represent logical AND.\"], \"section_3\": \"I did not try initializing the vectors with those used in any previous experiments (e.g. Socher\\u2019s or Mikolov\\u2019s). While that kind of initialization sounds promising in general, I think that the unambiguous fragment of English that I use is so different from ordinary English usage that it is unlikely that outside information from these sources would be helpful to the task. The pretraining settings that I experimented with involved first training the model on some or all of the pairs of individual words from Appendix B, annotated with the relations between them.\\n\\nI'm certainly sensitive to the concern that the model might be overparameterized, and I will see about getting a training curve together in the next week or two.\", \"section_4\": \"The numbers referenced in those subsections do refer to Table 2, but the '(as in 2)' reference is a mistake. Example 2 in Table 2 corresponds to 'Monotonicity with quanti\\ufb01er substitution.\\u201d Thanks for catching that, and expect a fix soon.\", \"section_5\": \"I agree that the all-split result is unsurprising, though I think it is useful as a sanity check to ensure that the model structure is usable for the task, and that the model isn't dramatically *under*parameterized.\", \"the_three_target_datasets_were_chosen_by_hand\": \"the choice of a fairly small number was necessary due to resource constraints, but the choices were arbitrary. 
I chose to focus on quantifier substitution datasets so as to render the three settings (the last three columns of Table 4) most easily comparable across the three target datasets.\n\nThe reference to 'potentially other similar datasets\u201d could have been better put, but it refers to the fact that in each of the three experimental settings reported in Table 4, different criteria are used to decide which datasets are held out in training, and that all of these criteria involve how similar a given dataset is to the target dataset.\n\nYou raise an important point about reproducibility. I would appreciate any suggestions about better ways to report results given the fluctuations during training. I may try to report statistics over the model\u2019s performance over a range of iterations, or statistics over the model\u2019s performance at a given iteration over several random re-initializations, and I am experimenting further with different ways of encouraging the model to converge.\", \"i_would_like_to_suggest_that_even_the_current_results_do_show_a_broader_reproducible_pattern\": \"high performance on SET-OUT and SUBCL.-OUT is possible but subject to instabilities in the training algorithm, whereas high performance on PAIR-OUT cannot be demonstrated with this model as configured.\"}", "{\"review\": \"I have a fairly major unexpected update to report. In attempting to respond to 8e44's concerns about using early stopping to get around convergence issues, I discovered a mistake in my implementation of AdaGrad. In short, fixing that mistake led to much more consistent convergence and better results, including strong performance on the PAIR-OUT test settings, suggesting that the model is much more capable than I had previously suggested at generalizing to unseen reasoning patterns.\\n\\nI realize that this is somewhat late in the review process to make substantial changes, but a new version of the paper is pending on ArXiv and should be live by Monday. The results table and the discussion section have been replaced. I will also be updating the source code linked to above and in the paper before Monday to reflect this bug fix, and a couple of small improvements to the way that cost and test error are reported during training.\", \"if_you_are_interested_in_what_went_wrong\": \"I accidentally set up SGD with AdaGrad in such a way that it reset the sum of squared gradients after every full pass of the data, equivalent to every few hundred gradient updates. Since this sum is used to limit the size of the gradient updates, resetting it this often prevented the model from reliably converging without hurting gradient accuracy or preventing it from converging occasionally.\"}", "{\"reply\": \"I agree that the results so far are not as strongly positive or negative as would be ideal, and I hope to be able to report somewhat more conclusive results about the behavior of the optimization techniques that I use (see comments above), but I think that the results presented so far are informative about the ability of models like this to do RTE more generally. The SET-OUT results show that the model is able to learn to identify the difference between two unseen sentences and, if that difference has been seen before, return a consistent label that corresponds to that difference. Perhaps more important is the fact that the model shows 100% accuracy on unseen examples like 'some dog bark [entails] some animal bark\u201d (seen in ALL-SPLIT for example) where lexical items differ between sides. 
Here the model is both learning to do this reasoning about differences, and learning to use information about entailment between lexical items (animal > dog) in novel environments.\\n\\nAs you suggest, I do use correct hand-assigned parses in both training and testing. I agree that it would be interesting to see what effect using randomly assigned parses instead would have, and I may be able to get those numbers at least by the conference date. It does seem worth mentioning, though, that the sentences are mostly three or four words long, so I would expect that the parse structure would be far less important in these experiments than in ones with longer sentences (and thus more deeply nested tree structures), since every word is already quite close to the top of the composition tree regardless of the structure here.\\n\\nSince you brought up the (important) Scheible and Schuetze paper, I should mention that the prior motivation for using high quality parse structures for this task is considerably stronger than the motivation for using them in binary sentiment tasks like the one reported on in that paper. In binary sentiment labeling, the label is largely (but not entirely) dependent on the presence or absence of strongly sentiment expressing words, and decent performance (~80%) can be achieved using simple regression models with bigram or even unigram features. I don\\u2019t have exactly comparable numbers for the dataset that I present in this paper, but RTE/NLI does not lend itself to comparable quality baselines with simple features. My task is deliberately easier than the RTE challenge datasets, but the average tuned model submitted to the first RTE workshop in 2005 got less than 55% accuracy on *binary* entailment classification. There is some related discussion in the review thread for the Scheible and Schuetze paper: http://openreview.net/document/e2ffbffb-ba93-43d0-9102-f3e756e3f63c\\n\\nThanks for the Bottou comparison, by the way. This does seem to me to be implementation of a slightly generalized version of his proposed restricted entailment operator, and I had not previously noticed that parallel.\"}", "{\"title\": \"review of Can recursive neural tensor networks learn logical reasoning?\", \"review\": \"This paper investigates the use of a recurrent model for logical\\nreasoning in short sentences. An important part of the paper is\\ndedicated to the description on the task and the way the author\\nsimplifies the task of MacCartney to keep only entailment relations\\nthat are non ambiguous. For the model, a simple recurrent tensor (from\\nSocher's work) network is used.\\n\\nWhile the more general task defined by MacCartney is well described,\\nthe reduced task addressed in this paper is more unclear. The\", \"motivation_stands\": \"this is a great idea to reduce the task to non\\nambiguous cases, for which we could better interpret the experimental\\nresults. However, at the end, it is difficult to draw relevant\\nconclusion from the experiments, and a lot of technical details are\\nmissing to yield the results reproducible. Maybe the author tried to\\nlessen the negative aspects of the results, but it would be really\\nmore interesting to clearly describe negative results. 
My opinion is\\nthat this paper is not well suited for the conference track, and maybe\\nit should be submitted to the workshop track.\"}", "{\"title\": \"review of Can recursive neural tensor networks learn logical reasoning?\", \"review\": \"In this work the author investigates how effective the vector representation of words is for the task of logical inference. A set of seven entailment relations from MaCartney are used, and a data set of 12,000 logical statements (of pairs of sentences) are generated from these relations and from 41 predicate tokens. The task is multiclass classification, where given two sentences, the system must output the correct relation between them. A simple recursive tensor network is used. The study is limited to considering quantifiers like 'some' and 'all', which have clear monotonicity properties. Results show that the model can learn but that generalization is limited. Unfortunately because the training process converges to an inferior model, results are given after very early stopping, in which subsequent iterations can give widely different results.\\n\\nThis is an exciting direction for research and it's great to see it being tackled. Unfortunately, however, the paper is unclear in crucial places, and the training methodology is questionable. I would encourage the author to clarify the paper (especially for the likely non-linguist audience) and strengthen the training algorithm (in order to demonstrate usefully reproducible results). Even if the results remain negative, this would then still be of significant value to the community.\", \"specific_comments\": \"Section 2\\n---------\\n\\nYour example of 'some dogs bark' seems confused. For both arguments of 'some' (not just the first), the inference works if the argument is replaced by something more general. You write that 'some' is downward monotonic in its second argument, but your examples show upward monotonicity in both. (Specifically, you write 'The quantifier 'some' is upward monotone in its first argument because it permits substitution of more general terms, and downward monotone in its second argument because it permits the substitution of more specific terms.' - but in the same paragraph you also write that 'some' is upward monotonic in both arguments.) Readers who are asked to expend mental energy on disentangling unnecessary confusions like this can quickly lose motivation.\", \"table_1\": \"this table is central to your work, but it needs more explanation. What is calligraphic D? You seem to be using the 'hat' operator in two different senses (column 2 versus column 3). What does 'else' mean in column 3 - how exactly is independence defined?\\n\\nThe whole paper rests on MacCartney's framework, so I think it's necessary to explain more about this scheme here. In particular, I do not understand your 'no animals bark | some dogs bark' example (and I fear most others won't, too).\", \"typo\": \"'the it is'\\n\\n'I choose one of three target datasets' - how did you choose the three? (From the 200?)\\n\\n'potentially other similar datasets...' is imprecise. How did you choose?\\n\\n'The model did not converge well for any of these experiments: convergence can take hundreds or thousands of passes through the data, and the performance of the model at convergence on test data was generally worse than its performance during the first hundred or so iterations. To sidestep this problem somewhat, I report results here for the models learned after 64 passes through the data.' 
I'm afraid that this greatly reduces the value of these results (they are close to being irreproducible). The training algorithm should at least converge, or be more reproducible than this. (If the test error is still fluctuating wildly on the stopping iteration, other small changes, e.g. in the data, may give completely different results).\\n\\nSection 6\\n---------\\n\\n'Pessimistically... Optimistically... ' this is speculation (neither is supported by the experiments) and so I don't think it adds much value.\"}", "{\"review\": \"Source code and data are available here: http://goo.gl/PSyF5u\\n\\nI'll be updating the paper shortly to add a link to the text.\"}" ] }
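The AdaGrad mistake described in the thread above (the accumulated sum of squared gradients being cleared after every full pass through the data) is easy to reproduce and contrast with the standard update. The sketch below is illustrative only; the learning rate, epsilon, and function names are assumptions and are not taken from the linked source code.

    import numpy as np

    def adagrad_sgd(grad_fn, theta, data, epochs=50, lr=0.05, eps=1e-8,
                    reset_each_epoch=False):
        # Standard SGD with AdaGrad step-size scaling. With reset_each_epoch=True the
        # accumulator is wiped every pass, as in the bug described above: the denominator
        # stays small, step sizes never anneal, and convergence becomes unreliable.
        g2 = np.zeros_like(theta)              # running sum of squared gradients
        for _ in range(epochs):
            if reset_each_epoch:
                g2 = np.zeros_like(theta)      # the accidental per-epoch reset
            for x in data:
                g = grad_fn(theta, x)
                g2 += g * g
                theta = theta - lr * g / (np.sqrt(g2) + eps)
        return theta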
pAi8PkmKuJPvU
Nonparametric Weight Initialization of Neural Networks via Integral Representation
[ "Sho Sonoda", "Noboru Murata" ]
A new initialization method for hidden parameters in a neural network is proposed. Derived from the integral representation of the neural network, a nonparametric probability distribution of hidden parameters is introduced. In this proposal, hidden parameters are initialized by samples drawn from this distribution, and output parameters are fitted by ordinary linear regression. Numerical experiments show that backpropagation with proposed initialization converges faster than uniformly random initialization. Also it is shown that the proposed method achieves enough accuracy by itself without backpropagation in some cases.
[ "hidden parameters", "neural networks", "integral representation", "neural network", "backpropagation", "nonparametric weight initialization", "new initialization", "nonparametric probability distribution", "proposal" ]
submitted, no decision
https://openreview.net/pdf?id=pAi8PkmKuJPvU
https://openreview.net/forum?id=pAi8PkmKuJPvU
ICLR.cc/2014/conference
2014
{ "note_id": [ "fMpXedxjhIfFo", "W_KH_Oaqtx_iI", "VVPzN959oZ6un", "vf_Vs2hRaxUEP", "07ToXaIwBiYKp", "FNE-gqxSuzFWo", "lFiWFX2hzO3DM" ], "note_type": [ "review", "comment", "comment", "comment", "review", "review", "review" ], "note_created": [ 1391460420000, 1392718380000, 1392720120000, 1392700920000, 1391926320000, 1392700980000, 1391811900000 ], "note_signatures": [ [ "anonymous reviewer 0a0f" ], [ "園田翔" ], [ "園田翔" ], [ "園田翔" ], [ "anonymous reviewer a97a" ], [ "園田翔" ], [ "anonymous reviewer 2ac8" ] ], "structured_content_str": [ "{\"title\": \"review of Nonparametric Weight Initialization of Neural Networks via Integral Representation\", \"review\": \"The paper proposes a new pre-training scheme for neural networks which are to be fine-tuned later with back propagation.\\n\\nThis pre-training scheme is done in two steps\\n 1 - sampling the parameters of the first layer using Importance sampling or an accept-reject MCMC method (both methods are apparently confused by the authors) in a data dependent way.\\n 2 - train the parameters of the output layer using linear regression.\\n\\nThe experiments compare the test RMSE/error-rate obtained using traditional back propagation and that obtained using the proposed method, on three datasets: a 1D function, a Boolean function and the Mnist dataset.\\n\\nThe proposed pre-training scheme is new but the scientific quality of the paper is questionable. First the proposed method is given a misleading name since it proposes to do the initialization in a data dependent way (with a linear regression step). This may be understood as a 'pre-training scheme', not as an 'initialization'. Second, the paper is very misleading in its report of previous work, for instance stating that (Efficient Backprop, Le Cun 1998) proposes to initialize neural networks by sampling from a uniform distribution [-1/sqrt(fan-in);1/sqrt(fan-in)] when it suggests in fact to sample from a normal distribution of mean zero and standard deviation sigma=1/sqrt(fan-in).\\n\\nAdditionally, the experiments are again very misleading. First, the main claim of the paper is that using the proposed pre-training scheme, BP will converge faster. However, the time to convergence is reported in terms of the number of BP iterations and does not take the pre-training time into account. This is especially worrisome since the pre-training scheme relies on MCMC sampling which is usually very computationally expensive compared to back propagation. Finally, the results reported on the Mnist dataset are inconsistent with previous work when then give a test error rate for back-propagation and 300 hidden units around 90% when it should be around 1.6% (cf. 
Mnist dataset website).\", \"pros\": \"\", \"cons\": [\"Misleading summary of previous work.\", \"Misleading reference to an initialization strategy which is in fact a data dependent pre-training step.\", \"Experiments do not report the pre-training time and are therefore strongly biased in favor of the proposed method.\", \"Results on Mnist are inconsistent with previous work.\"]}", "{\"reply\": \"Dear Reviewer 2ac8\\nThanks for your constructive comments.\\nI am really glad for your reading of our paper.\\n\\nWe have rearranged MNIST experiment in a closer setting to LeCun et al.1998,\\nand improved both error rates and training speeds.\\nThe paper with this results will be soon appeared in a few days.\\nIn addition, we supplemented a detailed explanation of sampling procedures in section A.\\n\\nWhile LeCun et al.1998 achieved 4.7% error rates, our latest test error rates improved as follows:\", \"sr\": \"23.0%(right after initialization) -> 9.94%(after BP training)\", \"sbp\": \"90.0%(right after initialization) -> 8.30%(after BP training)\", \"bp\": \"90.0%(right after initialization) -> 8.77%(after BP training)\\nAnd our further experiments showed that SR with 6,000 hidden units marked 3.66% test error rates right after initialization.\\n\\n> It would be really helpful to have a notion of how expensive it is compute the approximation of the parameter density and to sample from it. Judging from the formulas this does not seem cheap. \\n\\nAs you expected, rigorous calculation/sampling of the oracle distribution is difficult, especially in a high dimensional input space. This sampling difficulty is discussed in a supplemental section A.1.\\nIn order to draw samples from a high dimensional oracle distribution, therefore, we have developed and used an annealed sampling technique, which is described in a supplemental section A.2.\\nAlso, in the rearranged MNIST experiment section, the empirical sampling time was listed.\\n\\n> The paper studies networks with sigmoid pairs. What can the authors say about sigmoid units? \\n\\nAs our method is derived from an analysis of sigmoid pairs networks (SPNs),\\nless study is done for completely discrete sigmoid units networks (SUNs).\\nDirect derivation is that an SPN with J sigmoid pairs might have an equivalent representation ability to an SUN with 2J sigmoid units. Our preliminary experiments empirically supports this hypothesis, that is, SR initialized SPN with J sigmoid pairs sometimes scores almost equivalent error rate with BP trained SUN with 2J sigmoid pairs. 
However the precise comparison is not conducted.\\nObviously SPN is always SUN, however the converse that an well trained SUN forms SPN, is doubtful.\\nIn relation to other integral representation study, some authors( Carroll and Dickinson1989; Barron1993; Kurkova2009) published on the integral representation of SUN and they would be help.\\nIn our authority paper Murata1996, the SPN requirement comes from the integrability of the composing kernel, and the author suggests that the derivative of sigmoid units (which is bell shaped and integrable) is also eligible, in which case a SUN is interpreted as approximating a derivative of target function.\\n\\n> In Figure 1 left, the figure does not show that the support is non-convex, as claimed in the caption.\\n> The axes labels in Figure 1 are too small.\\n\\nI am sorry for my lack of attention, I have replaced Figure 1 as correct version.\\n\\nI am looking forward to your reply, thanks.\\nSonoda\"}", "{\"reply\": \"Dear Reviewer a97a\\nThanks for your constructive comments.\\nI am really glad for your reading of our paper.\\n\\nWe have rearranged the MNIST experiment, and replaced the results with new ones.\\nIn addition, we supplemented a detailed explanation of sampling procedures in section A.\\nThe renewed version of the paper will be soon appeared online in a few days.\\n\\n> It would be useful to have more information on the order of magnitude by which the method is slower/faster compared to training a classically initialized neural network, and how does the method scale with the number of data points and the dimensions of the input space.\\n\\nTheoretical considerations on sampling cost is discussed in the supplemental section A, and the empirical measurement of computation time is listed in the rearranged MNIST experiment section.\\n\\nWe have introduced a drastically annealed sampling technique in section A.2, which is as fast as sampling from normal distribution.\", \"here_is_a_snippet_of_time_comparison\": \"- sampling time of SR: 0.0115 [sec.]\\n- regression time of SR: 2.60 [sec.]\\n- 45,000 iterations of BP training of SR: 2000 [sec.] (0.05 [sec.] per one itr.)\\n\\nTheoretically the annealed sampling scales linearly with the number of required hidden parameters and the dimensionality of the input space respectively.\\nIn particular it scales constantly with the number of training examples because it conducts sampling with one particular example.\\n\\n> I am concerned about the validity of the MNIST experiment where a baseline error of >0.8 (80%?) 
is obtained with 1000 samples while other papers typically report 10% error for similar amount of data.\\n\\nWe have continued further investigations on MNIST dataset in a closer setting to LeCun et al.1998, and improved both error rates and training speeds.\\n\\nWhile LeCun et al.1998 achieved 4.7% error rates in a similar setting to ours,\", \"our_latest_test_error_rates_improved_as_follows\": \"\", \"sr\": \"23.0%(right after initialization) -> 9.94%(after BP training)\", \"sbp\": \"90.0%(right after initialization) -> 8.30%(after BP training)\", \"bp\": \"90.0%(right after initialization) -> 8.77%(after BP training)\\nAnd our further experiments showed that SR with 6,000 hidden units marked 3.66% test error rates right after initialization.\\n\\nI am looking forward to your reply, thanks.\\nSonoda\"}", "{\"reply\": \"Dear Reviewer 0a0f\\nThanks for your detailed and helpful comments.\\nI apologize for my ambiguous description,\\nI am really glad for your reading of our paper.\\n\\nWe have supplemented a detailed explanation of sampling procedure,\\nand replaced the MNIST experiment with further investigated version.\\nThe sampling algorithm runs as quick as sampling from ordinary distributions such as normal distribution.\\nThe updated paper is to be online in a few days.\\n\\n> 1 - sampling the parameters of the first layer using Importance sampling or an accept-reject MCMC method (both methods are apparently confused by the authors) in a data dependent way.\\n\\nCertainly I misused 'importance sampling' without following explanations, they are replaced in the modified paper.\\nIt is also explained in the supplementary section A that we just used acceptance-rejection sampling method, and not MCMC.\\nAs the oracle distribution usually has an extremely multimodal shape, MCMC could not perform sufficiently. One of our preliminary experiment showed that they often fail to find some of modes.\\n\\n> This may be understood as a 'pre-training scheme', not as an 'initialization'.\", \"i_have_taken_you_meant_that\": \"1) both 'pre-training' and 'initialization' are preceding processes to the real training,\\n2) 'pre-training' is associated with data and 'initialization' is not,\\n3) the proposed method contains data dependent sampling and regression, which obviously use the data,\\n4) therefore, we should call it 'pre-training' instead of 'initialization'.\\n\\nI am sorry if I am misunderstanding.\\nI completely agree to 1), whereas for 2), I think there would be indefiniteness.\\nI recognize that 'pre-training' has narrow meaning, which typically reminds people 'unsupervised' such as RBMs and Stacking AEs (and their variations). While our oracle distribution contains the information of both input and output vectors (see, for instance, Eq.3 and 8).\\nOn the other hand I think that 'initialization' has broader meaning, independent of using given data or not. As I surveyed in Section 1, many types of 'initialization's have been proposed and some of them contains data dependent way such as linear regression (Yam and Chow) and prototypes (Denoeux and Lengelle). 
Therefore I feel less necessity for calling it 'pre-training'.\\n\\n> for instance stating that (Efficient Backprop, Le Cun 1998) proposes to initialize neural networks by sampling from a uniform distribution [-1/sqrt(fan-in);1/sqrt(fan-in)] when it suggests in fact to sample from a normal distribution of mean zero and standard deviation sigma=1/sqrt(fan-in).\\n\\nThanks again for your precise correction.\\nI have corrected the description from range to standard deviation of the distribution.\\nI am afraid, however, in Efficient Backprop, 'normal' distribution is not necessarily required.\\n\\n> does not take the pre-training time into account.\\n> This is especially worrisome since the pre-training scheme relies on MCMC sampling which is usually very computationally expensive compared to back propagation\\n\\nSorry for my lack of attention since we did not use MCMC and the sampling time was enough quicker than BP iterations.\\nWe added a list of time comparison in the renewed MNIST experimental section.\", \"here_is_a_snippet_of_time_comparison\": [\"sampling time of SR: 0.0115 [sec.]\", \"regression time of SR: 2.60 [sec.]\", \"45,000 iterations of BP training of SR: 2000 [sec.] (0.05 [sec.] per one itr.)\", \"> inconsistent with previous work when then give a test error rate for back-propagation and 300 hidden units around 90% when it should be around 1.6% (cf. Mnist dataset website).\"], \"the_inconsistency_was_caused_by_the_difference_between_experimental_settings\": \"1) the number of hidden units: same (300 units)\\n2) the number of hidden layers: same (1 layer)\\n3) scaling of input vectors: NOT same\\n - In LeCun et al.1998 (the website setting) the input vectors were scaled, while ours not.\\n -> We scaled them in the renewed setting\\n4) preprocessing of input vectors: NOT same\\n - In LeCun et al.1998, 1.6% marking '2-layer NN, 300 HU' used 'deskew'ed image, while we do not.\\n Therefore, 4.7% marking '2-layer NN, 300 hidden units, mean square error' should be the closest setting. It still differs in that they used mean square error, while we used cross-entropy loss.\\n -> We set our goal around 4.7%\\n5) the representation of output labels: NOT same\\n - In our previous setting we used 'One-of-k' coding, while LeCun et al.1998 not.\\n -> We rearranged vectors as 'random coding' it still differs but more standard and efficient setting.\\n6) the number of training examples: NOT same\\n - In LeCun et al.1998, they used 15,000 and more, while we used just 1,000.\\n -> In our renewed setting, we used 15,000 examples for training.\", \"our_latest_test_error_rates_improved_as_follows\": \"\", \"sr\": \"23.0%(right after initialization) -> 9.94%(after BP training)\", \"sbp\": \"90.0%(right after initialization) -> 8.30%(after BP training)\", \"bp\": \"90.0%(right after initialization) -> 8.77%(after BP training)\\n\\nAlso, SR with 6,000 hidden units marked 3.66% test error rates right after initialization.\\n\\nI am looking forward to your reply, thanks.\\nSonoda\"}", "{\"title\": \"review of Nonparametric Weight Initialization of Neural Networks via Integral Representation\", \"review\": \"This paper introduces a new method for initializing the weights of a neural network. The technique is based on integral transforms. The function to learn f is represented as an infinite combination of basis functions weighted by some distribution. 
Conversely, this distribution can be obtained by projecting the function f onto another (related) set of basis functions evaluated at every point x in the input space.\\n\\nThe powerful analytic framework yields a probability distribution from which initial parameters of the neural network can be sampled. This is done using an acceptance-rejection sampling method. In order to overcome the computational inefficiency of the basic procedure, the authors propose a coordinate transform method that reduces the rejection rate. It would be useful to have more information on the order of magnitude by which the method is slower/faster compared to training a classically initialized neural network, and how does the method scale with the number of data points and the dimensions of the input space.\\n\\nThe experimental section consists of three experiments measuring the convergence of learning for various datasets (two low-dimensional toy examples and MNIST). On the low-dimensional toy examples, the proposed initialization is shown to be superior to uniform. However, these two datasets are to a certain extent already well modeled by local methods, for which good initialization heuristics are readily available (e.g. RBF networks + k-means). I am concerned about the validity of the MNIST experiment where a baseline error of >0.8 (80%?) is obtained with 1000 samples while other papers typically report 10% error for similar amount of data.\"}", "{\"review\": \"In responce to our reviewer's comments, we have supplemented a detailed explanation of sampling procedure, and replaced the MNIST experiment with further investigated version.\\n\\nThe sampling algorithm runs as quick as sampling from ordinary distributions such as normal distribution.\\nThe updated paper is to be online in a few days.\"}", "{\"title\": \"review of Nonparametric Weight Initialization of Neural Networks via Integral Representation\", \"review\": \"This paper presents a new method for initializing the parameters of a feedforward neural network with a single hidden layer. The idea is to sample the parameters from a data-dependent distribution computed as an approximation of a kernel transformation of the target distribution.\\n\\n* The parameter initialization problem is important and the main idea of the paper is interesting. \\n\\nNow, computing the transformation of the target distribution is the same as solving an equation for the parameters of the network, analytically, assuming an unlimited number of hidden units. As this is a difficult problem, the method relies on an approximation of the parameter density and sampling therefrom N times when the actual network is assumed to have N hidden units. \\n\\n* It would be really helpful to have a notion of how expensive it is compute the approximation of the parameter density and to sample from it. Judging from the formulas this does not seem cheap. \\n\\n* The paper studies networks with sigmoid pairs. What can the authors say about sigmoid units? \\n\\nIn Figure 1 left, the figure does not show that the support is non-convex, as claimed in the caption. \\n\\nThe axes labels in Figure 1 are too small.\"}" ] }
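The back-and-forth above hinges on two concrete procedures: plain acceptance-rejection sampling (explicitly not MCMC) from a data-dependent, unnormalized "oracle" density over first-layer weights, and fan-in-scaled random initialization as the baseline it is compared against. The sketch below is a minimal illustration of both mechanics only — the toy tilted target density, the bound, and every name and constant here are assumptions made for the example, not the oracle density or sampler derived in the paper.

```python
import numpy as np

def rejection_sample(unnorm_pdf, proposal_sample, proposal_pdf, bound, n_samples, rng):
    """Plain acceptance-rejection sampling (no MCMC): draw from a density known only up
    to a normalizing constant, assuming unnorm_pdf(w) <= bound * proposal_pdf(w) for all w."""
    out = []
    while len(out) < n_samples:
        w = proposal_sample()
        if rng.uniform() * bound * proposal_pdf(w) <= unnorm_pdf(w):
            out.append(w)
    return np.stack(out)

# Toy data-dependent target: a standard normal over weight vectors, tilted toward a
# direction computed from inputs X and targets y. The tilt is an illustrative stand-in.
rng = np.random.default_rng(0)
d, n_hidden = 8, 50
X = rng.normal(size=(200, d))
y = (X[:, 0] > 0).astype(float)
v = X.T @ (y - y.mean()) / len(X)                  # direction that depends on data and targets

proposal_pdf = lambda w: np.exp(-0.5 * w @ w) / (2 * np.pi) ** (d / 2)
proposal_sample = lambda: rng.normal(size=d)
unnorm_pdf = lambda w: proposal_pdf(w) / (1.0 + np.exp(-w @ v))   # tilt factor lies in (0, 1)

W_sampled = rejection_sample(unnorm_pdf, proposal_sample, proposal_pdf,
                             bound=1.0, n_samples=n_hidden, rng=rng)

# Fan-in-scaled random initialization (the baseline discussed above), for comparison.
W_random = rng.normal(scale=1.0 / np.sqrt(d), size=(n_hidden, d))
```

Because the tilt factor is bounded by 1, the bound constant 1.0 is valid and roughly every second proposal is accepted, which is why this kind of sampler can be as cheap as drawing from an ordinary distribution.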
DETu4zMyQH4kV
Semistochastic Quadratic Bound Methods
[ "Aleksandr Y. Aravkin", "Anna Choromanska", "Tony Jebara", "Dimitri Kanevsky" ]
Partition functions arise in a variety of settings, including conditional random fields, logistic regression, and latent Gaussian models. In this paper, we consider semistochastic quadratic bound (SQB) methods for maximum likelihood inference based on partition function optimization. Batch methods based on the quadratic bound were recently proposed for this class of problems, and performed favorably in comparison to state-of-the-art techniques. Semistochastic methods fall in between batch algorithms, which use all the data, and stochastic gradient-type methods, which use small random selections at each iteration. We build semistochastic quadratic bound-based methods, and prove both global convergence (to a stationary point) under very weak assumptions, and a linear convergence rate under stronger assumptions on the objective. To make the proposed methods faster and more stable, we consider inexact subproblem minimization and batch-size selection schemes. The efficacy of SQB methods is demonstrated via comparison with several state-of-the-art techniques on commonly used datasets.
[ "methods", "comparison", "techniques", "variety", "settings", "conditional random fields", "logistic regression", "latent gaussian models", "semistochastic quadratic bound" ]
submitted, no decision
https://openreview.net/pdf?id=DETu4zMyQH4kV
https://openreview.net/forum?id=DETu4zMyQH4kV
ICLR.cc/2014/conference
2014
{ "note_id": [ "MJ29KaZP96KsB", "f6UJcohNWS-vg", "WpCbeNsJXqeuj", "GXPfmkNE8um1e", "Jxjf_jmDfngE0", "dd1X31kbEJ3zP" ], "note_type": [ "review", "review", "review", "review", "review", "review" ], "note_created": [ 1392710280000, 1392137580000, 1390940460000, 1391723820000, 1391906880000, 1390940460000 ], "note_signatures": [ [ "Anna Choromanska" ], [ "anonymous reviewer 7b7e" ], [ "Anna Choromanska" ], [ "anonymous reviewer 7e18" ], [ "anonymous reviewer a474" ], [ "Anna Choromanska" ] ], "structured_content_str": [ "{\"review\": \"We would like to thank all the reviewers for their comments. The paper after revisions was posted on arxiv today and will be soon publicly visible. Below we enclose answers to reviewers comments:\", \"reviewer_1\": \"We thank the reviewer for the comments. We have highlighted representation learning as an important application in the introduction, and furthermore pointed out that our majorization methods have been applied to this area in previous work in the context of batch learning. Since the main contribution of this paper is to propose a semi-stochastic extension, the results are directly applicable to representation learning. Specific extensions for deep learning are noted as future work in the conclusions.\", \"reviewer_2\": \"We thank the reviewer for the comments.\\n1) Hessian-free optimization (also known as truncated Newton methods) solve Newton systems (and damped/inexact variants) inexactly to obtain descent directions. While classic methods use full gradients (and so are not competitive in the stochastic setting, w.r.t passes through the data), recent work also uses subsampling for gradient and Hessian approximations. Depending on the accuracy to which second order terms are approximated, these methods may require more or less storage than the method we propose. \\n\\nWe emphasize that our main contribution in this paper is to show majorization methods can be used in a semi-stochastic setting to compete with state-of-the-art fully stochastic methods, like SGD, ASGD or SAG, with respect to passes through the data.\\n\\n2) We presented the results on six datasets, three of them are sparse and the remaining ones are dense. The number of examples in these datasets are between 12K and 580K. Among these datasets we have some with 4932 dimensions and 46236 dimensions. These are not small datasets, and in fact commonly used to test new algorithms in the community (see e.g. Le Roux, Schmidt, Bach 2012). \\n\\nWith regard to Hessian-free optimization, we leave it to future work to compare majorization-based schemes with truncated Newton-based schemes. As far as we know, Hessian-free optimization methods have not been compared with state-of-the-art stochastic methods with respect to passes through the data. Also, in the batch setting, majorization methods have recently performed favorably in comparison to full Newton methods and quasi-Newton methods.\\n\\n3) Theorem 1 part 1 is correct - it neither relies on E[Sigma_S^{-1}] = Sigma^{-1}, nor claims the average direction is unbiased. The point is that the average direction (s^k in the paper), although biased, is still gradient related, which we show in part 2. Note that in the definition of s^k = E[(Sigma_S^k + eta I)^{-1}] g^k, the expectation is taken after the inverse. \\n\\nUsing the framework of Bertsekas and Tsitsiklis, it is enough to have the average direction be gradient related 'enough', which is shown in (7). 
\\n\\n4) Majorization methods have been shown to be competitive with second order optimization methods in prior work, in the batch setting. The main contribution of this paper is to make these methods competitive in the stochastic setting. While the number of samples used to approximate the gradient increases across iterations until the entire training data is eventually used (we control this rate of increase as explained in the experimental section), the number of samples used to obtain the second order bound-based approximation remains capped at a small number, as also explained in the experimental section. Growing the mini-batch is essential to achieve linear convergence rate as captured in Theorem 2; otherwise the rate of convergence would be sublinear, which is the typical convergence rate of state-of-the-art fully stochastic methods, like e.g. SGD. \\n\\nIt is not clear what is meant by 'if other methods grow their minibatch'. It is always possible that other methods can be improved, either using some ideas in this paper or other ideas. Our main contribution is to show that majorization-based methods are competitive with state-of-the-art in the stochastic setting.\", \"reviewer_3\": \"We thank the reviewer for the comments.\\n1) The summary of the paper provided by the reviewer is a nice; however `quadratic bound' is a better title, since the quadratic approximation is based on the bound for the partition function, rather than the Hessian or a standard Hessian approximation. \\n\\n2) We leave comparisons with Byrd et al. to future work as their setting differs from ours. In particular Byrd et. al. do not compare to state-of-the-art stochastic methods in their 2012 paper; their experiments show that dynamically growing the batch size is faster than a full batch method of the same type, and more accurate than a Newton-type method that uses a fixed sample size; they also compare with OWL. In their 2011 paper, they consider a batch-gradient, sampled Hessian approach. \\n\\n3) We refer the reviewer to Jebara & Choromanska 2012, where the bound method was shown to outperform BFGS and Newton methods in the batch setting (for convex problems), intuitively due to adapting not only locally, like most methods including gradient methods and Newton-like methods, but also globally, to the underlying optimization problem. The main motivation is that when a global tight bound is used, optimization methods based on majorization are competitive with standard optimization methods for problems involving the partition function. \\n\\n4) We used the phrase 'maximum likelihood inference' in the context of \\nmaximum likelihood estimation. Thanks for pointing out this issue, the text has now been corrected. \\n\\n5) Different batching strategies indeed have a long history. We found that the theoretical arguments in Friedlander & Schmidt (2011) strongly motivate a batch growing scheme, which we call semi stochastic, and which they also call 'hybrid' (on the other hand, by stochastic we mean having fixed size mini-batch). They show that the convergence rate is a function of the conditioning of the problem and the degree of error in the gradient approximation; therefore, as long as the latter is dominated by the former, we can recover a linear rate in the stochastic setting. \\n\\n6) We indeed used word 'iterations' to refer to passes through the data in the phrase pointed by the reviewer. Thanks for pointing out this issue - the text has now been corrected. 
\\n\\n7) We have corrected the introduction according to reviewers suggestions. We have added the comment that we focus on partition functions, which are of central interests in many learning problems, like training CRFs or log-linear models. We also indicated early in the introduction that we will be extending the work of Jebara & Choromanska (2012) to the semi stochastic setting. \\n\\n8) The Bound Computation subroutine is directly taken from the previous work of Jebara & Choromanska (2012) (see Algorithm 1). The order indeed matters and slightly affects S, but not z or r. Jebara & Choromanska (2012) investigated various ordering schemes and noted no significant difference in performance (see page 2).\\n\\n9) The reviewer is correct,n was indeed undefined. n is now defined when Omega is introduced. \\n\\n10) Omega is indeed the set of values that y can take. We thank the reviewer, this issue has been fixed. \\n\\n11) We agree with the reviewer - current results show that curvature information can be incorporated in a way that guarantees convergence to stationarity under weak assumptions (Theorem 1) and the recovery of a linear rate provided an aggressive batch growing scheme for logistic regression (Theorem 2); in both cases the use curvature approximations from samples is not proven to help. We are also interested in finding a stronger result of the type that the reviewer is suggesting, perhaps under suitable assumptions on the problem class, but we leave this to future work. \\n\\n12) Note that additional requirements are required to ensure a uniform lower bound on the smallest eigenvalue in the absence of regularization. When eta is present, it provides such a lower bound; so we have removed the corollary and simply noted this. \\n\\n13) The problem with missing rho was corrected. We thank the reviewer for pointing this out.\\n\\n14) We agree with the point made by the reviewer - one recovers the linear rate by essentially controlling the error from sampling to be bounded by the geometric terms from the deterministic setting. The reviewer's point is well taken - the theorem does not show that inverse of the curvature matrix is actually helping, as the empirical results suggest. \\n\\n15) The fact that inexact solutions to subproblems can be interpreted as regularization is often used in the inverse problems community, and explained in some detail in Vogel's book. This is in fact the reference we wanted to cite, and we are very grateful that the reviewer caught this error! We have also added the reference that the reviewer suggested. \\n\\n16) We thank the reviewer for the remark about Page 7, section 12.1, Lemma 4 - this has been noted prior to the presentation of the lemma.\"}", "{\"title\": \"review of Semistochastic Quadratic Bound Methods\", \"review\": \"This paper looks at performing a stochastic truncated Newton method to general linear models (GLMs), utilizing a bound to the partition function from Jebra & Choromanska (2012). Some basic theory is given, along with some experiments with logistic regression.\\n\\nStochastic or semi-stochastic truncated Newton methods such as Hessian-free optimization, and the work of Byrd et al. have already been applied to learning neural networks whose objective functions correspond to the negative LL of neural networks prediction under cross entropy error, which is like taking a GLM replacing theta in the definition expression with g(theta), where g is the neural network function. 
In the special case that g=I this correspond exactly to logistic regression. \\n\\nOne thing I'm very confused about is what the bounding scheme of Jebra & Choromanska (2012) that is applied in this paper actually does. Since it involves summing over all possible states, it can't be more efficient than just computing the partition function, and its various derivatives and second-directives directly. Why use it then? The Hessian of the negative LL of a general linear model will already be PSD, so that can't be the reason.\", \"detailed_comments\": \"\", \"abs\": \"What do you mean by 'maximum likelihood inference'? Do you mean estimation? Learning?\", \"page_1\": \"When you say that stochastic methods converge in less iterations that batch ones, this makes no sense. Perhaps you meant to say passes over the training set, not iterations.\", \"page_2\": \"The abstract made prominent mention of partition functions. Yet, the introduction doesn't make any mention of them, and seems to be describing a new optimization method for standard tractable objective functions.\\n\\nYour intro should mention that you will be focusing on generalized linear models (what you are calling generalized linear model) and extending the work of Jebra & Choromanska (2012) to the stochastic case. This becomes clear only once the author has read well passed the intro.\", \"page_3\": \"'dataset Omega'? I thought Omega was the set of values that y can take.\", \"page_4\": \"The statement and proof of Theorem 1 seems similar to one of the theorems from the Byrd et al (2011) paper you cite. And like that result, it is extremely weak. Basically all it says is that as long as the curvature matrices are not so badly behaved that the size of their inverses grow arbitrarily, multiplying the gradient by their inverses won't prevent gradient descent from converging, provided the learning rate becomes small enough to combat the finite amount of blowing up that there is. It says nothing about why you might actually *want* to multiply by the inverse of this matrix. But I guess in this general setting there is nothing stronger that can be shown, since in general, multiplying the the inverse curvature matrix computing on only a subset of the data may sometimes do a lot more harm than good.\", \"page_6\": \"Could you elaborate more on the point 'further regularizes the sub-problems'? There is a detailed discussion of this kind effect in 'Training Deep and Recurrent Neural Networks with Hessian-Free Optimization', section 8.7. What in particular does [38] say about this?\", \"page_7\": \"In section 12.1 from the above mentioned article, a similar result to Lemma 4 is proved. It is shown that the quadratic associated with the CG optimization is bounded, which implies the range result (since if the vector is not in the range, the optimization must be unbounded). They look at the Gauss-Newton matrix, but for general linear models, the Hessian has the same structure, or the matrix from the bound from Jebra & Choromanska (2012) has the same basic structure as this matrix.\"}", "{\"review\": \"Dear readers and reviewers, we have just updated the paper on arxiv. 
In particular, we simplified Inequality (9) and Proof 6, clarified the statement of Theorem 2, and fixed accidental typos.\"}", "{\"title\": \"review of Semistochastic Quadratic Bound Methods\", \"review\": \"The paper describes a second order stochastic optimization method where the gradient is computed on mini-batches of increasing size and the curvature is estimated using a bound computed on a possibly separate mini-batch of possibly constant size. This is clearly a state-of-the-art method. The authors derive a rather complete ensemble of theoretical guarantees, including the guarantee of converging with a nice linear rate. This is a strong paper about stochastic optimization. In the specific context of ICLR, I regret that the authors did not explain why this technique is useful to learn representations. The basic setup is that of maximum likelihood training of an exponential family model. In practice, there are many reasons to believe that such a technique would work on mixture models or models that induce representations (although the theory might not be as simple.) I believe that this paper should be accepted provided that the author pay at least some lip service to 'representation learning'.\"}", "{\"title\": \"review of Semistochastic Quadratic Bound Methods\", \"review\": \"The paper introduces a certain second-order method that is based on quadratic upper-bounds to convex functions and\\non slowly increasing the size of the batch. A few results show that the method is well-behaved and has reasonable\\nconvergence rates on logistic regression. \\n\\nThis work is very similar to Hessian-free optimization, because it also uses CG to invert low-rank approximations \\nto the curvature matrix, and it has comparable cost but greater memory complexity due to its need to store many\\nparameter vectors. Likewise, it builds up on previous work that finds quadratic upper bounds to convex functions,\\nbut a quadratic upper bound seems restrictive, and perhaps a quadratic approximation would be more appropriate.\", \"pros\": \"Method is somewhat novel.\", \"cons\": [\"Experiments very small and unrealistic (sometimes tens of dimensions), and there is no comparison with Hessian-free optimization\", \"Theorem 1 part 1 is wrong: the method is biased, because while E[Sigma_S] = Sigma, E[Sigma_S^{-1}] != Sigma^{-1}. In general,\", \"second order methods that use modest numbers of samples for the curvature matrix are necessarily biased.\", \"The paper has two ideas: a certain second order method, and a simple scheme for growing the minibatch. But which of these is essential? Would we get similar results if we didn't grow the minibtach? How large would the minibatch end up being? What if the other methods grow their minibatch as well, do\", \"they become competitive?\"]}", "{\"review\": \"Dear readers and reviewers, we have just updated the paper on arxiv. In particular, we simplified Inequality (9) and Proof 6, clarified the statement of Theorem 2, and fixed accidental typos.\"}" ] }
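Much of the discussion above (growing the mini-batch, linear rates, comparison with fully stochastic methods) refers to the generic semistochastic template in which the gradient is estimated on a sample that grows geometrically until it covers the training set. The sketch below shows only that first-order skeleton for binary logistic regression; the quadratic-bound curvature matrix and inexact subproblem solves from the paper are deliberately omitted, and the step size, growth factor, and regularization constant are illustrative assumptions, not values from the paper.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def growing_batch_logreg(X, y, step=0.1, growth=1.1, batch0=32, n_iters=300,
                         reg=1e-3, seed=0):
    """First-order semistochastic descent: the gradient of L2-regularized logistic
    regression is estimated on a mini-batch whose size grows geometrically each
    iteration until the full dataset is used (labels y must be in {0, 1})."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    batch = float(batch0)
    for _ in range(n_iters):
        m = min(n, int(round(batch)))
        idx = rng.choice(n, size=m, replace=False)
        p = sigmoid(X[idx] @ w)
        grad = X[idx].T @ (p - y[idx]) / m + reg * w
        w -= step * grad
        batch *= growth          # the growth rate sets the stochastic-to-batch schedule
    return w

# Tiny synthetic check: recover a separating direction on toy data.
rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 10))
w_true = rng.normal(size=10)
y = (X @ w_true + 0.5 * rng.normal(size=5000) > 0).astype(float)
w_hat = growing_batch_logreg(X, y)
print(np.corrcoef(w_hat, w_true)[0, 1])
```

The point of the schedule is the one made in the rebuttal: early iterations are cheap like SGD, while later iterations approach full-batch gradients, which is what allows a linear rate to be recovered under suitable assumptions.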
O_cyOSWv8TrlS
Neuronal Synchrony in Complex-Valued Deep Networks
[ "David Reichert", "Thomas Serre" ]
Deep learning has recently led to great successes in tasks such as image recognition (e.g. Krizhevsky et al., 2012). However, deep networks are still outmatched by the power and versatility of the brain, perhaps in part due to the richer neuronal computations available to real cortical circuits. The challenge is to identify which neural mechanisms are relevant, and to find suitable abstractions to model them. Here, we show how aspects of spike timing, long hypothesized to play a crucial role in cortical information processing, could be incorporated into deep networks to build richer, versatile deep representations. We introduce a neural network formulation based on complex-valued neuronal units that is not only biologically meaningful but also amenable to a variety of deep learning frameworks. Here, units are attributed both a firing rate and a phase, the latter indicating properties of spike timing. We show how this formulation qualitatively captures several aspects thought to be related to neuronal synchrony, including gating of information processing and dynamic binding of distributed object representations. Focusing on the latter aspect, we demonstrate the potential of the approach in several simple experiments. Thus, synchrony could implement a flexible mechanism that fulfills multiple functional roles in deep networks.
[ "deep networks", "neuronal synchrony", "spike timing", "great successes", "tasks", "image recognition", "krizhevsky et", "power" ]
submitted, no decision
https://openreview.net/pdf?id=O_cyOSWv8TrlS
https://openreview.net/forum?id=O_cyOSWv8TrlS
ICLR.cc/2014/conference
2014
{ "note_id": [ "HkA9HPn1mY7Ol", "ykGzyhr0mas8E", "au6kl4HEAJuS5", "QuNxu31HJPlct", "0oL8oJVsYd0Xl", "AAJOKj49HYATj", "9VimVyzHOKV8h", "bX1QeFiTtMb-r" ], "note_type": [ "review", "review", "review", "review", "review", "review", "review", "review" ], "note_created": [ 1392059880000, 1392694980000, 1390026840000, 1391115900000, 1389649440000, 1389121680000, 1392377100000, 1390026840000 ], "note_signatures": [ [ "anonymous reviewer 0ae5" ], [ "David Reichert" ], [ "Sainbayar Sukhbaatar" ], [ "anonymous reviewer 4c84" ], [ "David Reichert" ], [ "Tapani Raiko" ], [ "anonymous reviewer ec9e" ], [ "Sainbayar Sukhbaatar" ] ], "structured_content_str": [ "{\"title\": \"review of Neuronal Synchrony in Complex-Valued Deep Networks\", \"review\": \"SUMMARY\\n \\nThe paper 'Neural Synchrony in Complex-Valued Deep Networks' tackles the important question of how it is that neural representations encode information about multiple objects independently and simultaneously.\\nThe idea that the relative phase of periodic neural responses has been in circulation for some time (and this paper provides a good overview of relevant literature), but to date the principle has not gained traction in the pattern recognition side of neural modeling. This paper aims to change that by showing that a complex-valued Deep Boltzmann Machine can naturally segment images (in some simple synthetic cases) according to the various visible objects, through the phase of latent responses. \\n \\nThe basis for the technical contribution of this paper is a novel response function. The [complex-valued] response function described by equations 1 and 2 is a function of a complex-valued weight vector applied to a complex-valued feature vector. Output z_j = r_j e^{i \\theta_j} of each model neuron is determined by arg(z_j) = arg(w . x)\\n \\n f( alpha |sum_j w_j . x_j | + \\beta sum_j (w_j |x_j|) )\", \"comment\": \"the authos mention that the two terms in f() can be weighted, but don't include those weights in Eq. 2, as I have done above (alpha, \\beta).\\n \\nWhy use this function? The authors make an intuitive argument in the text that these two terms capture salient aspects of a more detailed spiking network based on Hodgkin-Huxley neurons, and Figure 1 illutrates the effect quantitatively for a specific, simple 2-to-1 feedforward network of rhythmic neurons. The value of this transfer function as a surrogate for detailed compartmental models is interesting, but is not the focus of the remainder of the paper.\\n \\n \\nThe paper's section 3 'Experiments: the case of binding by synchrony' was somewhat difficult for me to understand. A [conventional, real-valued] DBM was trained on small pictures with horizontal and vertical bars, and then 'converted' to a complex-valued network (and was the activation function changed to the one from Eq. 2? What does that mean in terms of inference in the DBM?) It was found that when clamping the visible-unit magnitudes to a particular picture, and 'sampling' (is this actually sampling from a probability distribution?) their phase and the hidden units' magnitude and phase, that there were groups of hidden units with phases that lined up with particular bars. This is good because it suggests a means of teasing apart the DBM's latent representation into groups that are 'working together' to represent something independently from other groups. 
(I wanted to see some sort of control trial, showing that a plain old real-valued DBM could not achieve the same thing, but I can't really think of the right thing to try.)\\n \\nThe demonstration in Figure 3 shows that already with this set of bars, that there is an issue of phase resolution: it appears that four different bars are all coded the same shade of green. Is this a problem? A readout mechanism might be confused and judge all these bars to be one object, even though bars occur independently in the training data. Figure 3 illustrates what happens after 100 iterations of sampling, what happens after more iterations? Do the co-incidentally green bars change color independently of one another?\\n\\nOverall, this research is highly relevant to the aims of the ICLR conference. It is at an early stage of development, in that no learning algorithm has been adapted to work with these complex-valued neurons (although the authors might consider adapting the ssRBM), the images used in the experiments are simple and synthetic, and the authors themselves lament that 'conversion' of a DBM is unreliable. Still, the idea of phase-based coding has a lot of potential, and it is worth exploring. This paper would be an important step in that process. I would strongly suggest that the authors upload their Pylearn2 code so that others can reproduce the effects presented in this paper, especially if training and conversion of DBMs is as unreliable as they suggest.\\n \\n \\nNOVELTY AND QUALITY\\n \\n- the use of complex-valued phase to perform segmentation in a DBM is novel\\n\\n- quality of presentation is very good\\n \\n \\nPRO & CON\", \"pro\": \"phase-based segmentation is an intriguing idea from theoretical neuroscience, it's great to see it put to the test in engineering terms\", \"con\": \"no natural learning algorithm yet for the model\"}", "{\"review\": \"We thank the reviewers for the fair reviews. For simplicity, below we refer to 'Anonymous 4c84', 'Anonymous ec9e', and 'Anonymous 0ae5' as reviewer 1-3, respectively.\\n\\nWe agree with the overall assessment of the reviewers. This is early work that intends to communicate a potentially powerful idea, backed up by simple experiments. There are several avenues for extending it towards principled theoretical frameworks and to address e.g. the need for learning, and we were hoping for feedback from the community to that end.\\n\\nPerhaps the main issue, as raised by reviewer 1, is whether this work would be better suited to a short workshop paper, given its early stage. This is a fair point. At the same time, given the amount of material we had, and given the solid 'story' that we wanted to communicate, it really didn't make sense to us not to write a full-length paper. For a conference track submission, we think the strengths of the work, such as its originality (to this audience) and quality of presentation, could overcome its shortcomings (but of course that's ultimately up to the reviewers and the chair to decide). We can also address several of the concrete concerns raised by the reviewers, and will do so in the following.\\n\\n\\n****** Changes to paper ******\\n\\nWe are in the process of uploading an updated paper (v4, ETA Feb 18 7pm EST) to the arXiv, which takes the reviewers' comments into account. We clarified notation in the main text (now slightly breaking the 9 pages limit, which should be acceptable at this point) and wording in appendix B. 
We also added some of the main issues raised by the reviewers and our discussion thereof as appendix C, whenever it made sense to expand on points that were only briefly touched on in the main text.\\n\\n\\n****** Reviewer 1 & Reviewer 2 ******\\n\\n'It is not clear that the segmentation is working for the bars experiment because multiple bars are colored by the same phase. What is the goal here? that each bar has a different, unique phase value? Does the underlying phase distribution effectively partition the bars? '\\n\\n'there is an issue of phase resolution: it appears that four different bars are all coded the same shade of green. Is this a problem? A readout mechanism might be confused and judge all these bars to be one object, even though bars occur independently in the training data.'\\n\\nThis is a very important issue that we are still considering. It is perhaps an issue more generally with the underlying biological theories rather than just our specific approach. As we noted in the paper, some theories pose that a limit on how many discrete objects can be represented in an oscillation cycle, without interference, explains certain capacity limits in cognition. The references we cited (Jensen & Lisman, 2005; Fell & Axmacher, 2011) refer to working memory as an example (often 4-7 items; note the number of peaks in Figure 3c -- obviously this needs more quantitative analysis). We would posit that, more generally, analysis of visual scenes requiring the concurrent separation of multiple objects is limited accordingly (one might call this a prediction -- or a `postdiction'? -- of our model). The question is then, how does the brain cope with this limitation? As usual in the face of perceptual capacity limits, the solution likely would involve attentional mechanisms. Such mechanisms might dynamically change the grouping of sensory inputs depending on task and context, such as whether questions are asked about individual parts and fine detail, or object groups and larger patterns. In the bars example, one might perceive the bars as a single group or texture, or focus on individual bars as capacity allows, perhaps relegating the rest of the image to a general background. \\n\\nDynamically changing phase assignments according to context, through top-down attentional input, should, in principle, be possible within the proposed framework: this is similar to grouping according to parts or wholes with top-down input, as in the experiment of Section 3.2.\\n\\n\\n****** Reviewer 1 ******\", \"regarding_the_similarity_to_rao_et_al\": \"as we've acknowledged in the paper, the work is similar in several points (we arrived at our framework and results independently and were not aware of Rao et al.'s work initially -- we do not think the latter is particularly well known in this community). However, we would want to counter the impression that our work does not provide additional contributions. First of all, to clarify the issue of training on multiple objects: in Rao et al.'s work, the training data consists of a small number of fixed 8x8 images (N <= 16 images *in total* for a dataset), containing simple patterns (one example has 4 small images with two faces instead). To demonstrate binding by synchrony, two of these patterns are superimposed during test time. 
We believe that going beyond this extremely constrained task, in particular showing that the binding can work when trained and tested on multiple objects, on multiple datasets including MNIST containing thousands of (if simple) images, is a valid contribution from our side, which is not diminished by the fact that this result relies on the capability of the DBM (indeed, showing that this works with the DBM is itself a contribution as it might tell us something interesting about the kind of representations learned by a DBM that is not usually made explicit).\\n\\nSimilarly, as far as we can see, Rao et al. do not discuss the gating aspect at all (as we mentioned in our paper), nor the specific issues with excitation and inhibition (Section 2.1) that we pointed out as motivation for using both classic and synchrony terms. Lastly, the following issues are addressed in our experiments only: network behavior on more than two objects; synchronization for objects that are not contiguous in the input images, as well as part vs. whole effects (Section 3.2); decoding distributed hidden representations according to phase (Section 3.3; in particular, it seems to be the case that Rao et al.s networks had a localist (single object<->single unit) representation in the top hidden layer in the majority of cases).\\n\\n'the introduction of phase is done in an ad-hoc way, without real justification from probabilistic goals [...]'\\n\\nWe agree that framing our approach as a proper probabilistic model would be helpful and perhaps more convincing to this audience (e.g. using an extension of the DUBM of Zemel et al., 1995, as discussed in the paper). At the same time, we think there is value to presenting the heuristic as is, based on a specific neuronal activation function, to emphasize that this idea could find application in neural networks more generally, not only those with a probabilistic interpretation/Boltzmann machines (that our approach is divorced from any one particular model is another difference when compared to Rao et al.'s work). In particular, we have performed exploratory experiments with networks trained (pretrained as real-valued nets or trained as complex-valued nets) with backprop, including (convolutional) feed-forward neural networks, autoencoders, or recurrent networks, as well as a biological model of lateral interactions in V1. We agree with the reviewer that a more rigorous mathematical and quantitative analysis is needed in any case.\\n\\n'the results in the appendix appear to indicate that the approach is not working very well in general, and the best results are the ones shown in the main text.' \\n\\nWe are not sure what exactly the reviewer is referring to here. If it is our statement that our approach of using pretrained real-valued networks does not always work, then yes, that is an issue that needs to be addressed. However, we should perhaps clarify that what we meant is: for some datasets and training parameters, models did not perform well; in cases where they did perform reasonably well however, that performance was relatively consistent across images, and the results show representative examples from those models. 
If, on the other hand, the reviewer is referring to results in the supplementary figure supposedly looking different from the figures in the main text, then no, other than perhaps with the schematic overview in Figure 6, we did not purposefully cherry-pick nicer looking results to display (most of the figures were actually cropped from the same larger figures simply for space reasons).\\n\\n'How is the phase distribution segmented? Phase is a continuous variable, and the segmentation/partitioning seems to be done by hand for the examples. This needs to be addressed.'\\n\\nPartitioning was done with k-means, not by hand...? Other options are possible (also depending on whether the aim is a principled machine learning framework or addressing questions about the brain with a biological model). It is true that phase is a continuous variable, however, our results indicate that there is a tendency to form discrete phase clusters, in line with biological models (e.g. the one of Miconi & VanRullen).\\n\\n'Also, what about the overlaps of the bars? these areas seem to be mis- or ambiguously labeled. is this a bug or a feature?' \\n\\nThis is more of a problem with the task itself being ill-defined on binary images, where an overlapping pixel cannot really be meaningfully said to belong to either object alone (as there is no occlusion as such). We plan to use (representations of) real-valued images in the future.\\n\\n\\n****** Reviewer 2 ******\\n\\n'Comment: The text leading up to equation 1 is confusing regarding z. Is it an output or an input? It doesn't seem like we're dealing with a dynamical system, and the input was called x in the paragraph above. '\\n\\nWith x and z we just refer to, respectively, real-valued and complex-valued states in general. There are several notions of input here: the input (image) to the overall network, the units/states providing input to a specific unit, the total 'post-synaptic' input w . z, and the term that is ultimately used as input to the activation function with a real-valued domain (e.g. |w . z|). We have attempted to clarify this in this revision of the paper by introducing some additional variables (perhaps the reviewer could check whether Section 2.1 is clearer now).\\n\\n'Comment: the use of |x| in equation 2 is confusing because presumably |w . x| is a vector norm whereas in w . |x| it denotes elementwise magnitude of the complex elements of x. Right?'\", \"assuming_the_reviewer_meant_to_write_z_not_x\": \"No, |w . z| is also the magnitude, and w . z happens to be a complex scalar; this is the input to a single unit, thus both w and z are vectors and this is a dot product. This should be clearer with the new notation.\\n\\n'Comment: the authors mention that the two terms in f() can be weighted, but don't include those weights in Eq. 2, as I have done above (alpha, \\beta).'\\n\\nWe simply left this out for simplicity, because we do not actually explore unbalanced weightings in this paper and didn't want to introduce unnecessary notation.\\n\\n'A [conventional, real-valued] DBM was trained on small pictures with horizontal and vertical bars, and then 'converted' to a complex-valued network (and was the activation function changed to the one from Eq. 2? What does that mean in terms of inference in the DBM?)'\\n\\nYes the activation function changed; we essentially use the normal DBM training as a form of pretraining for the final, complex-valued architecture. The resulting neural network is likely not exactly to be interpreted as a probabilistic model. 
However, if such an interpretation is desired, our understanding is that running the network could be seen as an approximation of inference in a suitably extended DUBM (by adding an off state and a classic term; refer to Zemel et al., 1995, for comparison). For our experiments, we used two procedures (with similar outcomes) in analogy to inference in a DBM: either sampling a binary output magnitude from f(), or letting f() determine the output magnitude deterministically; the output phase was always set to the phase of the total input. The first procedure is similar to inference in such an extended DUBM, but, rather than sampling from a circular normal distribution on the unit circle when the unit is on, we simply take the mode of that distribution. The second procedure should qualitatively correspond to mean-field inference in an extended DUBM (see Eqs. 9 and 10 in the DUBM paper), using a slightly different output function.\", \"by_the_way\": \"perhaps we could have framed our work in such terms to begin with, but in a way that obscures what our original line of thinking was.\\n\\n'[...] 'sampling' (is this actually sampling from a probability distribution?)'\\n\\nNo, not exactly. We actually only used the term 'sampling' in the standard, real-valued case, other than in the caption of Figure 3. We will fix the latter.\\n\\n'Figure 3 illustrates what happens after 100 iterations of sampling, what happens after more iterations? Do the co-incidentally green bars change color independently of one another?'\\n\\nPhase assignments appear to be stable (see the supplementary movies), though we did not analyze this in detail. It should also be noted that the overall network is invariant to absolute phase, so only the relative phases matter.\\n\\n'I would strongly suggest that the authors upload their Pylearn2 code so that others can reproduce the effects presented in this paper [...]'\\n\\nWe are happy to publish the code either way, but it would unfortunately take some extra work to put it into a form that is accessible to others. We will do so if the paper gets accepted as a proper conference paper.\\n\\n\\n****** Reviewer 3 ******\\n\\nUnfortunately, we certainly can't lay claim to being the first to explore this idea in a computational framework (see the references cited), though we are perhaps the first to make a connection to the types of deep networks that have recently been employed in the deep learning community (DBMs in this case; also, as stated, the framework could in principle be applied to other deep nets, such as ConvNets). 
Apart from that, we are of course happy to agree that this a fascinating idea and that it seems worthwhile to bring it to the attention of the ICLR community.\"}", "{\"review\": \"I found this paper very interesting and inspiring.\"}", "{\"title\": \"review of Neuronal Synchrony in Complex-Valued Deep Networks\", \"review\": \"The paper describes a method to augment pre-trained DBMs with phase variables and shows some demonstrations of binding, segmentation, and partitioning (based on latent variables).\", \"pros\": \"The paper is very well written and introduces a number of concepts clearly.\\nPhase is a curious neurophysiological phenomena and is deserving of modeling that addresses the representational consequences/implications.\\nI could see how this ad-hoc approach could be used to understand DNNs (like extended Zeiler and Fergus' visualization work).\\nThe paper may educate the ICLR community on the binding problem and proposals from the neuroscience community that argue for phase as a solution to this problem.\", \"cons\": \"A major issue with the described work is its similarity to the work for Rao and colleagues. The authors provide some comments about how their work is distinguished. However these are merely rhetorical (your approach is general and theirs is not) or actually contributions that are not made by this paper but by previous work (DBMs can be trained to learn representations of multiple-simultaneously presented object/patterns).\", \"the_two_major_limitations_of_the_paper_in_its_current_form\": \"1. the introduction of phase is done in an ad-hoc way, without real justification from probabilistic goals. The authors appear surprised that their hack worked at all. It seems the more rigorous approach would be to either introduce phase as a proper latent variable and train the network to optimize the distribution to match the data distribution (the usual approach to modeling), or to explain more rigorously why this ad hoc extension does not interfere with the network (however it is not even clear from the experiments that the ad-hoc model preserves the properties of the original network). A more rigorous mathematical approach might reveal that the basins of attraction are preserved with the introduction of phase, or that the phase variables are independent of the amplitude variables (I believe they are not).\\n2. The results of the experiments are mostly just pictures and lack quantitative assessment or any controls. The resulting work provides a demonstration of the phase idea for binding/grouping/segmentation, which are not new ideas (although they are probably new ideas to the ICLR community). Furthermore, the results in the appendix appear to indicate that the approach is not working very well in general, and the best results are the ones shown in the main text.\", \"the_procedures_avoid_some_obvious_issues\": \"How is the phase distribution segmented? Phase is a continuous variable, and the segmentation/partitioning seems to be done by hand for the examples. This needs to be addressed.\\nIt is not clear that the segmentation is working for the bars experiment because multiple bars are colored by the same phase. What is the goal here? that each bar has a different, unique phase value? Does the underlying phase distribution effectively partition the bars? Also, what about the overlaps of the bars? these areas seem to be mis- or ambiguously labeled. 
is this a bug or a feature?\\n\\nOverall, I think this is an interesting direction of research and the exposition is top notch, however the underlying work falls short of some obvious extensions and methodological rigor. I think this would make for a nice workshop paper, so that it could receive some feedback from the community and educate the community of the phase binding idea, but it lacks some ingredients for a conference paper.\", \"some_other_relevant_references_you_might_want_to_include\": \"S. Jankowski, et al. 1996. Complex-valued multistate neural associative memory.\\nT. Nitta. 2009. Complex-Valued Neural Networks: Utilizing High-Dimensional Parameters.\\nC. Cadieu & K. Koepsell 2010. Modeling Image Structure with Factorized Phase-Coupled Boltzmann Machines.\"}", "{\"review\": \"Thanks for the comment.\\n\\nJust to clarify, in the broader context there is plenty of relevant work that we did not discuss, due to limited space (we only discussed closely related work based on complex-valued nets). This includes models using coupled oscillators for segmentation. In particular, see also (and references therein):\\n\\nYu, G., & Slotine, J.-J. (2009). Visual Grouping by Neural Oscillator Networks. IEEE Transactions on Neural Networks, 20(12), 1871\\u20131884. doi:10.1109/TNN.2009.2031678\"}", "{\"review\": \"Thanks for a very interesting paper! I think this approach will have significant impact in future, although it is not ripe for even quantitative analysis yet.\\n\\nI wanted to point out our related work, that I have been planning to continue in a similar direction (as mentioned in our Discussion section):\\nT. Raiko and H. Valpola. Chapter 7: Oscillatory Neural Network for Image Segmentation with Biased Competition for Attention. In From Brains to Systems: Brain-Inspired Cognitive Systems 2010 (ISBN 978-1-4614-0163-6), Advances in Experimental Medicine and Biology, Volume 718, pages 75-86, Springer New York, 2011. http://users.ics.aalto.fi/praiko/papers/bics_chapter.pdf\"}", "{\"title\": \"review of Neuronal Synchrony in Complex-Valued Deep Networks\", \"review\": \"I find this paper deeply fascinating. It illustrates - I believe for the first time - the utility of binding by synchrony in a deep network architecture. Although the general idea of binding by synchrony is an old one, it is mostly a vague idea that has never, to my mind, been put into a concrete computational framework. Here, the authors propose that the phase of oscillation in neural ensembles acts as a kind of 'label' for objects being represented in a distributed fashion. That is, no single unit represents an object, but when an object is presented to the network it activates features at each level which synchronize via two-way communication between levels. The result is a kind of segmentation of objects within the image. This could be useful for example in separating objects from clutter, or possibly in resolving occlusion (although that has not been demonstrated here).\\n\\nAlthough mostly toy examples of patterns are used, I believe that this paper taps into a powerful idea, and that it will be of high interesting to the ICLR community, and certainly to the neuroscience community.\"}", "{\"review\": \"I found this paper very interesting and inspiring.\"}" ] }
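Since several of the questions above concern what the complex-valued activation actually computes, here is a single-unit sketch of the rate/phase update described in the reviews and rebuttal: a synchrony-sensitive term |w·z| is mixed with a phase-blind classic term w·|z|, and the output phase follows the phase of the summed input. The logistic choice of f, the explicit alpha/beta/bias values, and the use of real-valued weights (as in the pretrained-and-converted networks described in the rebuttal) are assumptions for illustration; the sketch does not reproduce the DBM pretraining, conversion, or inference procedure used in the experiments.

```python
import numpy as np

def complex_unit(z_in, w, alpha=0.5, beta=0.5, bias=-1.0):
    """One complex-valued unit: the firing rate mixes a synchrony-sensitive drive
    |w . z| with a classic, phase-blind drive w . |z|; the output phase is the phase
    of the summed input. f is taken to be logistic here (an assumed choice)."""
    total_input = np.dot(w, z_in)                  # complex "post-synaptic" input
    synchrony_drive = np.abs(total_input)          # shrinks when inputs are out of phase
    classic_drive = np.dot(w, np.abs(z_in))        # ignores phase entirely
    rate = 1.0 / (1.0 + np.exp(-(alpha * synchrony_drive + beta * classic_drive + bias)))
    return rate * np.exp(1j * np.angle(total_input))

# Gating intuition: two equal-rate inputs drive the unit harder in phase than anti-phase.
w = np.array([1.0, 1.0])
in_phase = np.array([1.0 * np.exp(0j), 1.0 * np.exp(0j)])
anti_phase = np.array([1.0 * np.exp(0j), 1.0 * np.exp(1j * np.pi)])
print(abs(complex_unit(in_phase, w)), abs(complex_unit(anti_phase, w)))
```

With these assumed constants the in-phase pair yields a higher output rate than the anti-phase pair, which is the gating behavior the reviews refer to; the binding experiments additionally rely on the phase output, which here simply inherits the phase of the summed input.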
BBxkB2w0I_OjZ
Multi-digit Number Recognition from Street View Imagery using Deep Convolutional Neural Networks
[ "Julian Ibarz", "Ian Goodfellow", "Sacha Arnoud", "Vinay Shet", "Yaroslav Bulatov" ]
Recognizing arbitrary multi-character text in unconstrained natural photographs is a hard problem. In this paper, we address an equally hard sub-problem in this domain, viz. recognizing arbitrary multi-digit numbers from Street View imagery. Traditional approaches to solve this problem typically separate out the localization, segmentation, and recognition steps. In this paper we propose a unified approach that integrates these three steps via the use of a deep convolutional neural network that operates directly on the image pixels. This model is configured with 11 hidden layers, all with feedforward connections. We employ the DistBelief implementation of deep neural networks to scale our computations over this network. We have evaluated this approach on the publicly available SVHN dataset and achieve over 96% accuracy in recognizing street numbers. We show that on a per-digit recognition task, we improve upon the state-of-the-art and achieve 97.84% accuracy. We also evaluated this approach on an even more challenging dataset generated from Street View imagery containing several tens of millions of street number annotations and achieve over 90% accuracy. Our evaluations further indicate that at specific operating thresholds, the performance of the proposed system is comparable to that of human operators and has to date helped us extract close to 100 million street numbers from Street View imagery worldwide.
[ "street view imagery", "accuracy", "number recognition", "arbitrary", "street numbers", "text", "unconstrained natural photographs", "hard problem", "hard" ]
submitted, no decision
https://openreview.net/pdf?id=BBxkB2w0I_OjZ
https://openreview.net/forum?id=BBxkB2w0I_OjZ
ICLR.cc/2014/conference
2014
{ "note_id": [ "lyepyotVxkKoF", "p9jY1AuSYy9M-", "YBcwB2mgG9d1f", "6bml6ARnFNWcA", "cHoRcxbFCpcMx" ], "note_type": [ "review", "review", "review", "review", "review" ], "note_created": [ 1394073180000, 1391897280000, 1391811000000, 1391827860000, 1392802200000 ], "note_signatures": [ [ "Ian Goodfellow" ], [ "anonymous reviewer dee9" ], [ "anonymous reviewer 88a9" ], [ "anonymous reviewer ac02" ], [ "Ian Goodfellow" ] ], "structured_content_str": [ "{\"review\": \"We ran some more experiments to determine the effect of increasing the number of parameters in shallow models. Specifically, we used a model with 3 hidden layers: two convolutional layers followed by a fully connected layer. If we increase the size of the fully connected layer it rapidly starts to overfit. If we increase the size of the convolutional layers, the accuracy increases, but with diminishing marginal utility. We have launched a second round of experiments with even larger convolutional layers. We are hoping to find the point at which they start to overfit so that we can include this result in the final version of the paper. So far we have not been able to match the accuracy of our 11-layer model with this approach.\"}", "{\"title\": \"review of Multi-digit Number Recognition from Street View Imagery using Deep Convolutional Neural Networks\", \"review\": \"The authors propose an integrated approach to sequence recognition in the case of limited number of characters (house numbers with 5 characters at most). Avoiding separating localization, segmentation and recognition is novel in this context. This is the right approach and results are good but the model is not well explained (see below).\", \"pros\": [\"integrated sequence recognition rather than traditional localization/segmentation/recognition step.\", \"new record on single digit\", \"accuracy high enough for real world deployment (although the ability to pay human operators for remaining errors in the 98% regime means that real world deployment is possible even at very low accuracy, depending how much money the company is willing to spend).\"], \"cons\": [\"'and has special code for handling much of the mechanics such as proposing candidate object regions' and 'we take a similar approach, but with less post-processing': this is misleading and debatable that there is less post-processing because the authors methods relies on a pre-detection step that gives relatively tight bounding boxes around the number, as indicated here: 'Beyond cropping the image close to the street number'. One could argue that there is as much post-processing in the other cited work once the detection is performed.\", \"what happens at the top of the network is not clear at all to me. Are they using an HMM or not? Does the 64x64 input image yield a grid of probabilities? then what is the size of that grid? Or is the network directly predicting N outputs, and based on the value of L, uses only the first L values out of N?\", \"'a softmax classifier that is attached to intermediate features': digit softmax are located on the intermediate hidden layers? 
which ones?\", \"'On this task, due to the larger amount of training data, we did not need to train with dropout': it would have been nice to see the numbers with and without dropout instead of just relying on this claim.\"]}", "{\"title\": \"review of Multi-digit Number Recognition from Street View Imagery using Deep Convolutional Neural Networks\", \"review\": \"This submission describes an approach for digit and sequence recognition that results in improved performance on the StreetView house number dataset using a simple structured output to recognize the entire sequence as an ordered set of detections and a very deep convolutional network (11 layers). The approach is end-to-end, without requiring multiple networks or a composite of techniques.\\n\\nThe approach has very high accuracy and is attractive in its simplicity, although the authors make clear that it could not be extended to, for instance, general text recognition in images. The relevance for ICLR is high, although the paper has some significant omissions that keep it from being a very strong candidate for acceptance.\\n\\nFirst, the method is not clear, in particular the interface between the varia softmax digit classifiers and the output of the convnet. It is unclear whether there is any representation of locality in this interface beyond the assignment of different classifiers for each position. In the introduction, it is stated that the subtasks of localization, segmentation, and recognition are solved in an integrated way, but section 3 introduces the dataset, which has localized inputs- numbers which fill at least \\u2153 of the image. This needs to be clarified.\\n\\nSecond, to further the contribution of the paper, additional experiments or analysis could have been performed to understand the features, the architecture, the loss function, or other aspects of the approach. With these missing, the submission is somewhat thin and the contribution lessened.\", \"smaller_issues\": \"the variables used in the plate model in Fig. 1 need to be defined in the caption and supporting text as well as later when the method is explained, and DistBelief should not be used in the abstract and intro without citation or footnote.\"}", "{\"title\": \"review of Multi-digit Number Recognition from Street View Imagery using Deep Convolutional Neural Networks\", \"review\": \"The paper presents an application of deep neural networks to the problem of reading multi-digit housenumbers from StreetView images. The basic architecture is essentially standard (maxout units, ReLu units, convolution, and several dense layers), but is unusually deep (11 layers). The output of the detector is a softmax predictor for the length of a sequence, as well as softmax predictors for each digit in the sequence. This simple output encoding is sufficient to achieve a high recall at very high precision that is competitive with human labelers. The authors conclude that this particular OCR application may regarded as solved at this stage.\\n\\nThere is relatively little new in terms of algorithms here, but the results are excellent. \\n\\nThe paper is clearly written, though the prose could be tightened up a bit. If room can be made, I think a deeper analysis of the method\\u2019s success would be useful. For example, is it possible that the \\u201cdeeper\\u201d networks are fitting the training set better as a result of more model parameters? 
Or is the depth truly the deciding factor?\", \"most_surprising_to_me_is_the_fact_that_there_is_no_explicit_need_to_model_the_label_structure_beyond_the_obvious\": \"a detector for sequence length and softmax outputs for digit classes, with a small tweak to choose the most likely length of the sequence during test time. This follows along with recent work on detection systems that suggests sophisticated regressors are able to do something similar for object classes, so I think this otherwise simple component is a useful datapoint for that conversation.\", \"pros\": \"Simple off-the-shelf application with excellent performance; perhaps high enough to count this task as \\u201csolved\\u201d.\\n\\nA useful reference point for work on predicting structured outputs like character sequences.\", \"cons\": \"Essentially boiler-plate neural network. A bit more detailed analysis of the result would be useful.\"}", "{\"review\": \"In order to avoid the ArXiv approval delay, we have posted a revised copy of our paper directly here:\", \"https\": \"//drive.google.com/file/d/0B64011x02sIkd3RwSDRpTXlKSzQ/edit?usp=sharing\\n\\n'Anonymous 88a9', 'Anonymous ac02' and 'Anonymous dee9' have raised concerns\\nabout the clarity of the description of the architecture and inference process.\\nWe have updated the paper to add Appendix A, a worked example showing exactly\\nhow the inference process works step by step. We hope this resolves the ambiguities\\nin the main text.\\n\\n'Anonymous ac02' suggests to use larger shallow networks to have another\\ndatapoint on the importance of a deep network. This is a great idea and we\\nhave started experiments with models of varying sizes using 3 hidden layers to see\\ntheir improvements based on the number of parameters alone. We will post another\\nupdate when these experiments are complete but didn't manage to finish them during\\nthe recommended rebuttal period.\\n\\n'Anonymous dee9' suggested that we were doing heavy post-processing of the image\\nbecause we were doing tight crops around the street number. We have five responses\", \"to_this_criticism\": \"1) We mistakenly omitted some details about the limitations of our preprocessing\\nof the larger internal dataset. These details make it clear that the preprocessing\\nis not especially useful. Specifically, the centroid is not well-known, the scale\\nis not known at all, and the crop size we use for that dataset is 128x128 compared\\nto the 54x54 we used on the public SVHN. Our results on this dataset therefore\\nindicate that the network is able to localize the house number itself as well as localizing the digits\\nwithin the number. They also demonstrate the network is able to handle wide variations\\nin the scale of the house number.\\n2) It is not computationally practical to run a convolutional network on a\\ncompletely uncropped Street View panorama, so some degree of preprocessing is\\ninevitable.\\n3) On the public SVHN, all pre-existing methods make use of more ground truth\\nlocalization information than our method does. All previous authors who have\\npublished on this dataset use the version that is tightly cropped per-digit.\\nWe use the much looser crop that identifies only the region in which the multi-digit\\nnumber occurs, but we still improve upon the state of the art for single digit\\nrecognition. This demonstrates that the system is able to localize individual\\ndigits on this dataset.\\n4) Other systems for transcription represent the concept of the sequence external\\nto the neural net, i.e. 
the sequence parsing is handled by post-processing techniques\\nsuch as HMM inference, non-maxima suppression, etc. Only our approach trains a neural\\nnet with an internal concept of a sequence.\\n5) Our system uses *no post-processing at all* but only pre-processing, while other\\nsystems use both pre-processing and post-processing.\\n\\n\\n\\n'Anonymous dee9' suggests that we should have given the accuracy number with and\\nwithout dropout for our internal dataset to prove that 'we did not need to train\\nwith dropout'. We agree that this claim is overreaching, we changed the paper to\\nsay that we were not seeing large overfitting and decided to not use dropout primarily\\nin order to speed up the training, which is time-consuming for the larger models.\"}" ] }
CRge-EDLedRUr
Efficient Visual Coding: From Retina To V2
[ "Honghao Shan", "Garrison Cottrell" ]
The human visual system has a hierarchical structure consisting of layers of processing, such as the retina, V1, V2, etc. Understanding the functional roles of these visual processing layers would help to integrate the psychophysiological and neurophysiological models into a consistent theory of human vision, and would also provide insights to computer vision research. One classical theory of the early visual pathway hypothesizes that it serves to capture the statistical structure of the visual inputs by efficiently coding the visual information in its outputs. Until recently, most computational models following this theory have focused upon explaining the receptive field properties of one or two visual layers. Recent work in deep networks has eliminated this concern; however, there is still the retinal layer to consider. Here we improve on a previously-described hierarchical model Recursive ICA (RICA) [1] which starts with PCA, followed by a layer of sparse coding or ICA, followed by a component-wise nonlinearity derived from considerations of the variable distributions expected by ICA. This process is then repeated. In this work, we improve on this model by using a new version of sparse PCA (sPCA), which results in biologically-plausible receptive fields for both the sPCA and ICA/sparse coding. When applied to natural image patches, our model learns visual features exhibiting the receptive field properties of retinal ganglion cells/lateral geniculate nucleus (LGN) cells, V1 simple cells, V1 complex cells, and V2 cells. Our work provides predictions for experimental neuroscience studies. For example, our result suggests that a previous neurophysiological study improperly discarded some of their recorded neurons; we predict that their discarded neurons capture the shape contour of objects.
[ "retina", "efficient visual coding", "receptive field properties", "ica", "work", "model", "spca", "human visual system", "hierarchical structure", "layers" ]
submitted, no decision
https://openreview.net/pdf?id=CRge-EDLedRUr
https://openreview.net/forum?id=CRge-EDLedRUr
ICLR.cc/2014/conference
2014
{ "note_id": [ "3klAk7rVWj9ot", "vQ3TQASdEivJY", "KdwAKEmGTQb25" ], "note_type": [ "review", "review", "review" ], "note_created": [ 1391821200000, 1392380220000, 1391734560000 ], "note_signatures": [ [ "anonymous reviewer 0e4f" ], [ "anonymous reviewer 912a" ], [ "anonymous reviewer 0ba9" ] ], "structured_content_str": [ "{\"title\": \"review of Efficient Visual Coding: From Retina To V2\", \"review\": \"Review Efficient visual coding: from retina to V2:\\n\\nThis paper tweaks the recursive ICA model (RICA). RICA made ICA stackable by combining PCA and ICA, and applying a component-wise nonlinearity to the outputs of each layer. This version replaces PCA by sparse PCA.\\n\\nWhen trained on natural images, the updated pipeline produces standard images of local center-surround receptive fields, and oriented edge detectors, and also complex-cell-like units on the second layer.\\u00a0\\n\\nsPCA is not new, nor is RICA, so there isn't much of a technical contribution. The results look good and the overall system is simple and stackable. There isn't much new but other researchers might want to see what representation this system learns.\\n\\nHowever, I would like to see:\\u00a0\\n\\n- a discussion of how the results after training on natural images differ from RICA: is this only that the lower layer now has local-center-surround receptive fields?\\n- many more references to work by other people that also used deep architectures to create V2-like cells; the 6 references given here are woefully insufficient and this work cannot be presented in a vacuum and cite only the authors' own RICA. e.g.:\\n\\nKarklin and Lewicki,\\u00a0Learning higher-order structures in natural images.\", \"network\": \"Computation in Neural Systems, 14:483\\u2013499, 2003\\n\\nKevin Jarrett, Koray Kavukcuoglu, Marc'Aurelio Ranzato and Yann LeCun: What is the Best Multi-Stage Architecture for Object Recognition?, Proc. International Conference on Computer Vision (ICCV'09), IEEE, 2009\\n\\nH Lee, C Ekanadham, A Ng\\nSparse deep belief net model for visual area V2\\nAdvances in neural information processing systems 20, 873-880\"}", "{\"title\": \"review of Efficient Visual Coding: From Retina To V2\", \"review\": \"This paper expands upon the authors previous work on recursive ICA. Here they explore the application if sparse PCA (where the weights have a sparsity constraint), followed by ICA. The results seem to similar to what they have shown before, and also to what was learned by Karklin & Lewicki's model - i.e., grouping of oriented units by orientation and position. I find the previous work on RICA very interesting, as is this paper, but it is not entirely clear what is learned here beyond the previous work.\"}", "{\"title\": \"review of Efficient Visual Coding: From Retina To V2\", \"review\": \"The paper applies the recursive ICA algorithm to visual data and describes the connections between the layers in the model to neural representations. The RICA algorithm is modified from its original form to utilize the author's previous innovation of 'sparse PCA.' 
The results are conveyed through a few depictions of the receptive fields and some qualitative links to neurophysiology.\", \"contributions\": \"> The authors have updated the RICA algorithm to utilize SPCA instead of PCA.\\n> The authors claim that sPCA on the outputs of the first-layer ICA, produces V1-complex cells.\\n> The authors suggest that some unoriented neurons in a V2 experiment correspond to a subset of their second ICA layer.\\n\\nOverall, this is an interesting direction and there are signs of promising results. However, the poor exposition, preliminary nature of the result, and lack of quantitative matching to neurophysiology (or other strong metrics for differentiation/novelty), indicate that this work is not yet ready for publication/acceptance. The update of RICA to use SPCA is a minor innovation, and the claims about V1 complex cells and V2 are not well supported by experiments (matching to neural findings). In the following I provide some feedback that I hope will assist the authors in this work and a future publication of this work.\\n\\nIn the background, more attention should be paid to similar techniques to RICA. For example:\\nChen and Gopinath 2001 Gaussianization. NIPS.\\nLyu and Simoncelli 2009, Nonlinear extraction of 'Independent Components' of natural images using radial Gaussianization. Neural Comp.\", \"and_there_are_a_number_of_works_that_produce_findings_similar_to_your_second_ica_layer\": \"Y. Karklin and M. S. Lewicki, A hierarchical Bayesian model for learning non-linear statistical regularities in non-stationary natural signals, Neural Computation, 2005.\\nA. Hyv\\u00e4rinen, M. Gutmann and P.O. Hoyer. Statistical model of natural stimuli predicts edge-like pooling of spatial frequency channels in V2. BMC Neuroscience, 6:12, 2005. \\nHonglak Lee, Chaitu Ekanadham, and Andrew Y. Ng. Sparse deep belief net model for visual area V2. NIPS 20, 2008.\\nCF Cadieu, BA Olshausen, Learning intermediate-level representations of form and motion from natural movies. Neural computation 24 (4), 827-866\\nThese should be discussed.\", \"some_points_on_exposition\": \"> be careful when referring between human visual system and then relying on non-human primate data. You seem to use these different species interchangeably\\n> your statement that 'a previous neurophysiological study improperly discarded some of their recorded neurons' could be rephrased. I think you can be more fair in discussing the actions of our colleagues.\\n> You repeated state 1st layer ICA algorithms (including yours) learn edge/bar shaped receptive fields. This statement is false. The generative fields (and receptive fields) resemble Gabor functions. Gabor functions are not edge or bar shaped. They are localized in space, orientation and position. While edges/bars are broadband. Don't fall into this sloppy language. Note also that results in neurophys. indicate that V1 are not edge/bar selective, but selective for stimuli localized in space, orientation and position (just like Gabors, and unlike edges/bars).\\n> you use both 'sPCA' and 'SPCA'.\\n> you are missing citations that you refer to in the text.\\n> the term 'autoencoder' refers to a specific class of models and I do not believe your optimization falls into this class.\\n\\nMy major suggestion is to spend more time on the link between the properties of the intermediate layers of your model and the neurophysiology literature. 
There is a tremendous amount of data on V1-complex cells and you can produce non-trivial (meaning you need a control) links between your model and these quantitative findings. While there are fewer results on V2, there are quantitative experiments to run on your model. Your current results are too qualitative and require the reader to make leaps from the receptive field depictions you use to an experimental result using certain stimuli, plotted in a completely different methodology (Figures 5 and 6). The experiment in Figure 4 is a step in the right direction, but showing one cell isn't sufficient, there is no control, and there are quantitative distribution metrics that are relevant in the Chen, Han, Poo and Dan PNAS 2007 paper (e.g. Figure 5 in that paper).\"}" ] }
00Rp6XTNJq0GY
Adaptive Feature Ranking for Unsupervised Transfer Learning
[ "Son N. Tran", "Artur d'Avila Garcez" ]
Transfer Learning is concerned with the application of knowledge gained from solving a problem to a different but related problem domain. In this paper, we propose a method and efficient algorithm for ranking and selecting representations from a Restricted Boltzmann Machine trained on a source domain to be transferred onto a target domain. Experiments carried out using the MNIST, ICDAR and TiCC image datasets show that the proposed adaptive feature ranking and transfer learning method offers statistically significant improvements on the training of RBMs. Our method is general in that the knowledge chosen by the ranking function does not depend on its relation to any specific target domain, and it works with unsupervised learning and knowledge-based transfer.
[ "adaptive feature", "unsupervised transfer", "knowledge", "transfer learning", "application", "problem", "different", "related problem domain", "efficient algorithm", "representations" ]
submitted, no decision
https://openreview.net/pdf?id=00Rp6XTNJq0GY
https://openreview.net/forum?id=00Rp6XTNJq0GY
ICLR.cc/2014/conference
2014
{ "note_id": [ "Z_q4Z3Gi4SywC", "DEpwCBOMgsKJr", "GgAKg-XrKUaHf", "zz89zXqT0UBuO", "vVxEQxc6YsvBo", "DDmlwSIalchBA", "GGSwoM3J_-WZg", "aj9769n_gHUqJ" ], "note_type": [ "review", "comment", "comment", "review", "comment", "comment", "review", "comment" ], "note_created": [ 1393001040000, 1393001040000, 1393988160000, 1391719740000, 1394418000000, 1394417280000, 1391662980000, 1393000980000 ], "note_signatures": [ [ "Son Tran" ], [ "Son Tran" ], [ "anonymous reviewer 656d" ], [ "anonymous reviewer a8e0" ], [ "Son Tran" ], [ "Son Tran" ], [ "anonymous reviewer 656d" ], [ "Son Tran" ] ], "structured_content_str": [ "{\"review\": \"We thank the reviewer for the comments, and we have carried out further experiments as suggested, as follows: we\\u2019ve updated Figures 4 and 5 to include a comparison between pruning of low-scoring units and pruning of high-scoring units.\\n \\nThe results seem to confirm that high-scored units capture more relevant information. Furthermore, Figure 7 indicates that high-scored units are more significant for transfer learning. Therefore, the role of high-scored units seems relevant for transfer learning. \\n\\nFor transfer low-scored features, we performed experiments follows what we have done with transferring high-scored features (we use the validation sets to select the features for transfer, but in this case the features are ranked from low-scores to high-scores). The results show that the highest accuracies were achieved only when a large number of high-scored features are among those which have been transferred.\", \"mnist_30k\": \"TiCC_w_A : 76.99% \\u00b1 0.58 (when transfer 2000/2000 units)\", \"mnist_5k\": \"TiCC_d : 65.19% \\u00b1 0.15 (when transfer 500/500 units)\\n\\nTo the best of our knowledge, most connectionist transfer learning use labels in the source domain or self-taught learning [1 2 3 4]. For the task of unsupervised representation transfer learning, as proposed in this paper, we have therefore chosen to focus on comparisons with our closest competitor: self-taught learning [1]. In addition, we propose to study how much knowledge should be transferred to the target domain, which has not been studied yet in self-taught mode.\\n \\n[1] Rajat Raina, Alexis Battle, Honglak Lee, Benjamin Packer, and Andrew Y. Ng. Self-taught learning: transfer learning from unlabeled data. In Proceedings of the 24th international conference on Machine learning, ICML \\u201907, page 759766, New York, NY, USA, 2007. ACM\\n[2] Gr\\u00e9goire Mesnil, Yann Dauphin, Xavier Glorot, Salah Rifai, Yoshua Bengio, Ian J. Goodfellow, Erick Lavoie, Xavier Muller, Guillaume Desjardins, David Warde-Farley, Pascal Vincent, Aaron C. Courville, James Bergstra: Unsupervised and Transfer Learning Challenge: a Deep Learning Approach. ICML Unsupervised and Transfer Learning 2012: 97-110\\n[3] WEI, B.; PAL, C.. Heterogeneous Transfer Learning with RBMs. AAAI Conference on Artificial Intelligence, North America, aug. 2011\\n[4] Honglak Lee, Chaitanya Ekanadham, and Andrew Y. Ng. Sparse deep belief net model for visual area v2. In Advances in Neural Information Processing Systems. MIT Press, 2008.\"}", "{\"reply\": \"[1] Rajat Raina, Alexis Battle, Honglak Lee, Benjamin Packer, and Andrew Y. Ng. Self-taught learning: transfer learning from unlabeled data. In Proceedings of the 24th international conference on Machine learning, ICML \\u201907, page 759766, New York, NY, USA, 2007. ACM\\n[2] Gr\\u00e9goire Mesnil, Yann Dauphin, Xavier Glorot, Salah Rifai, Yoshua Bengio, Ian J. 
Goodfellow, Erick Lavoie, Xavier Muller, Guillaume Desjardins, David Warde-Farley, Pascal Vincent, Aaron C. Courville, James Bergstra: Unsupervised and Transfer Learning Challenge: a Deep Learning Approach. ICML Unsupervised and Transfer Learning 2012: 97-110\\n[3] WEI, B.; PAL, C.. Heterogeneous Transfer Learning with RBMs. AAAI Conference on Artificial Intelligence, North America, aug. 2011\\n[4] Honglak Lee, Chaitanya Ekanadham, and Andrew Y. Ng. Sparse deep belief net model for visual area v2. In Advances in Neural Information Processing Systems. MIT Press, 2008.\"}", "{\"reply\": \"The complete reference is: Yoshua Bengio, 'Deep Learning of Representations for Unsupervised and Transfer Learning', JMLR: Workshop and Conference Proceedings 7 (2011) 1--20, available at http://www.iro.umontreal.ca/~lisa/pointeurs/DL_tutorial.pdf\\n\\nI found these new results using PCA convincing that the proposed ranking function has an advantage for transfer learning.\\n\\nIf the filter bases were normalized to 0-1 using min-max in Figure 2b, each filter should have at least one perfectly black and one perfectly white pixel, which I can not see in the figure. Is there a reason for this?\"}", "{\"title\": \"review of Adaptive Feature Ranking for Unsupervised Transfer Learning\", \"review\": \"The submission proposes a scoring method for RBMs to rank hidden units according to their information content, and to use that ranking to prune networks and to do transfer learning. The transfer learning mechanism uses the ranking in the original domain to choose which hidden units to use when training in the target domain, and then uses the output of those transferred units to affect the training of the added units.\\n\\nThe submission is clearly written, has relevance and interest to the ICLR community, and a number of experiments are described that validate the method. The results are promising, however there are a number of missing experiments that would have helped to validate the method. An important baseline experiment would be to compare the pruning of low-scoring units with the pruning of randomly selected, or even high-scoring units. Similarly, with the transfer learning experiments, it would be important to verify that the author's metric for replacement of sub-networks is optimal, or at least better than random choice. The submission also suffers from not making comparisons to other approaches to transfer learning in neural networks.\", \"pros\": \"well-written, relevant and interesting\", \"cons\": \"simple approach, not adequately validated, not adequately compared against other transfer learning approaches.\"}", "{\"reply\": \"The visualisation is the same if we use logistic function to normalize the filter bases.\"}", "{\"reply\": \"In order to treat all filter bases equally we used the min,max from all of them. The visualization makes sense because the high-scored bases seem to capture more information about the domain while leaving the low-scored ones almost empty.\"}", "{\"title\": \"review of Adaptive Feature Ranking for Unsupervised Transfer Learning\", \"review\": [\"This paper presents a feature ranking method for RBMs and a method to transfer RBM representations from a source domain to a target domain in an adaptive manner. 
While the method achieves good performance when transfering between very similar domains, the paper suffers from the following problems:\", \"The proposed ranking function is equivalent to the L1 norm of the weight vector associated with each hidden unit, which makes the method qualitatively similar to PCA in that it prioritizes the components that explain the most variance in the data (Bengio, JMLR 7 (2011) 1--20). However, it is not obvious from the experiments that the proposed ranking has an advantage for transfer learning. A connection with the existing literature also seems lacking.\", \"To the best of the reviewer's knowledge, the adaptive learning method for RBMs (with theta=1) is novel, but many very similar approaches to transfer learning using RBMs already exist and, once again, a review and comparison with competing approaches would be crucial.\", \"Despite the simplicity of the concepts presented, I have found the paper confusing at times. For example, what exact setup does each row correspond to in Tables 3 and 4? What is the difference between the last and second to last rows?\", \"Finally, the experimental section considers only the task of character recognition.\"], \"other_questions\": [\"In Figure 2(b), would we get a different interpretation if the fitler bases were normalized? Using the L1 norm of the bases for scoring, it is hardly surprising that the low-score bases have a lower magnitude when plotted on the same scale.\", \"In Figure 3 (sparse RBM), are the bases normalized by the L2 norm?\"]}", "{\"reply\": \"We thank the reviewer for the suggestion of the JMLR paper. Unfortunately we could not find the paper as referred to by the reviewer; please could you point us to the complete reference?\\n \\nIn what concerns comparisons with competing approaches, we discuss a number of approaches in our literature review. In particular, [2, 3] transfer the knowledge selected specifically based on the target, and (in some cases) with the provision of labels in the source domain. Instead, in this paper we are interested in selecting the representations that can be transferred in general to the target without the provision of labels in the source domain. This is similar to self-taught learning [1, 4], and therefore, we have focused on comparisons with self-taught learning, showing that improvements can be achieved.\\n \\nWe hope that this paper will trigger further study and experimental evaluations of such an unsupervised transfer learning method.\\n \\nWe have run experiments using PCA, as suggested. The results, reported below, are similar to those obtained by sparse coding [1], as reported in Figure 7. This seems to confirm our analysis.\\n \\npca icdar 46.23% compared to 40.43% \\u00b1 0.328 from our approach\\npca ticc 72.88% compared to 77.56 \\u00b1 0.564 from our approach\\npca tialp 58.99% compared to 63.00% \\u00b1 0.160 from our approach\\npca tidig 57.85% compared to 65.82% \\u00b1 0.262 from our approach\\n \\nWe have used character recognition problems in our experiments, but also writer recognition, using the TiCC_w dataset.\\n \\nIn Tables 3 and 4, each row shows the classification accuracies of a transfer method on a number of target domains. The last and second to last rows contain the accuracies on target domains for RBMs trained using adaptive learning, as proposed in Section 3. 
The difference is that in the second to last row only features from the additional hidden units are taken into account (see Figure 1), while in the last row, the entire hidden layer is used.\\n \\nNotice that the filter bases are scored and ranked before being visualized. In Figures 2a and 2b, we use min-max to normalize the bases to 0-1. In Figures 3a and 3b there is no direct link between the visualization and the scores since, as discussed, PCA is used for preprocessing the input images in this experiment. After scoring and inverse transforming the bases to the pixel space, the min-max norm is applied to scale the data to 0-1.\"}" ] }
NR4KjDE0w9RXD
Improving Deep Neural Networks with Probabilistic Maxout Units
[ "Jost Tobias Springenberg", "Martin Riedmiller" ]
We present a probabilistic variant of the recently introduced maxout unit. The success of deep neural networks utilizing maxout can partly be attributed to favorable performance under dropout, when compared to rectified linear units. It however also depends on the fact that each maxout unit performs a pooling operation over a group of linear transformations and is thus partially invariant to changes in its input. Starting from this observation we ask the question: Can the desirable properties of maxout units be preserved while improving their invariance properties? We argue that our probabilistic maxout (probout) units successfully achieve this balance. We quantitatively verify this claim and report classification performance matching or exceeding the current state of the art on three challenging image classification benchmarks (CIFAR-10, CIFAR-100 and SVHN).
[ "deep neural networks", "probabilistic maxout units", "maxout unit", "probabilistic variant", "success", "maxout", "favorable performance", "dropout", "linear units", "fact" ]
submitted, no decision
https://openreview.net/pdf?id=NR4KjDE0w9RXD
https://openreview.net/forum?id=NR4KjDE0w9RXD
ICLR.cc/2014/conference
2014
{ "note_id": [ "ttg76KBxN16Pj", "EEYZlvk8Zyz5h", "QTz-TsRb-_Qds", "gu83dns9QXd1k", "SSRsSpe-wg_G6", "aaq01-SNy58mX", "FWcRWk_B136Of", "D7M6i8b56ziN8", "wH1_HGT0Enlq-", "r5RFq2vpx8rfK", "wg4jUf9Kjiw0E" ], "note_type": [ "comment", "review", "review", "review", "review", "review", "review", "review", "review", "review", "comment" ], "note_created": [ 1393513260000, 1392997560000, 1392035940000, 1393474680000, 1392738660000, 1392859380000, 1393552740000, 1393514100000, 1393474020000, 1391830920000, 1393889580000 ], "note_signatures": [ [ "Jost Tobias Springenberg" ], [ "Jost Tobias Springenberg" ], [ "anonymous reviewer 4eb4" ], [ "Ian Goodfellow" ], [ "Jost Tobias Springenberg" ], [ "anonymous reviewer 2618" ], [ "Jost Tobias Springenberg" ], [ "Jost Tobias Springenberg" ], [ "Ian Goodfellow" ], [ "anonymous reviewer f3f1" ], [ "Ian Goodfellow" ] ], "structured_content_str": [ "{\"reply\": \"Hi Ian,\\nthe maxout baseline is the maxout network without any sampling, i.e. a\\nsimple forward pass through the net.\\n\\nEquation 3 was used for the two other curves (maxout + sampling,\\nprobout) for both networks the dropout effect was removed by the\\n`halving the weights trick`. I agree that it is not surprising that\\nmaxout performs worse in combination with the sampling procedure as it\\nwas never trained to compensate for/utilize the stochastic sampling\\nprocedure. This experiment was meant as a simple control experiment and not\\nmuch more. I did not mean to convey the idea that the result is\\nsurprising or is a disatvantage of the maxout model. \\n\\nThe idea of also investigating different dropout masks was not really\\nthe scope of this experiment, however it is a good idea and I will try\\nto setup such an additional experiment. \\n\\nBtw, I will later today also report back with a few results from my\\nlarge hyperparameter search which are somewhat interesting. I would\\nbe very happy if you could comment on them as well.\"}", "{\"review\": [\"We want to point out that an updated version of the paper is available on arXiv. The main changes are:\", \"Most minor comments of reviewers Anonymous f3f1 and Anonymous 4eb4 are adressed in the new version\", \"The experiments section has been changed to more clearly reflect statistically ties between maxout and probout in the results\", \"An additional experiment on invariance properties was added\"]}", "{\"title\": \"review of Improving Deep Neural Networks with Probabilistic Maxout Units\", \"review\": \"This manuscript extends the recently proposed \\u201cmaxout\\u201d scheme for neural networks by making the linear subspace pooling stochastic, i.e. by parameterizing the probability of activating each filter as the softmax of the filter responses and then sampling from the resulting discrete distribution. 
Experimental results are presented on CIFAR10/CIFAR100/SVHN and contrasted with the original maxout work.\", \"novelty\": \"low\", \"quality\": \"low to medium\", \"pros\": [\"The idea is somewhat interesting and worth trying, and the manuscript itself is generally well-written\", \"Experiments and hyperparameter choices are generally described in detail, including software packages used\", \"Attempts a fair-handed comparison between the method and the one they are building off (though see below)\"], \"cons\": [\"The benchmark results are quite lackluster: the improvements are of questionable statistical significance (see below)\", \"This questionable gain comes at a >=10x increase in computational cost.\", \"The experimental comparisons have several shortcomings (not all of them strictly advantageous to the proposed method, however -- see below).\", \"The abstract mentions enhancing invariance properties as the goal but there are no attempts to quantify the degree of learned invariance as has been examined in the literature. See, for example, Goodfellow et al\\u2019s \\u201cMeasuring Invariances in Deep Networks\\u201d from NIPS 2009 for one such attempt.\"], \"detailed_comments\": \"The procedure doesn\\u2019t yield a single deterministic model at test time, which may be a significant practical drawback as compared with conventional maxout or relu networks.\\n\\nStill, a more significant drawback to the proposed method is that the test time computation involves several forward propagations through the entire network in order to have a noticeable (but statistically negligible) advantage over maxout. Figure 3b suggests that, on a relatively simple dataset, 10 or more fprops per example may be necessary to yield a significant advantage over the maxout baseline. This is in addition to the cost additional cost incurred by sampling pseudorandom variates as part of the inference process.\\n\\nA quick computation of a confidence interval (based on the confidence interval of a Bernoulli parameter, i.e. the probability of classifying an example incorrectly) for both the results reported in Goodfellow et al (2013) and this work reveals that the confidence intervals can be seen to overlap significantly:\\n\\n- CIFAR10 Maxout (no augmentation): 11.68% +/- 0.63%, Probout: 11.35% +/- 0.62%\\n- CIFAR100 Maxout: 38.57% +/- 0.95%, Probout: 38.14% +/- 0.95%\\n- SVHN Maxout: 2.47% +/- 0.19%, Probout: 2.39% +/- 0.19%\\n\\nThe same comparison between maxout and the competitors reported in the original work yields non-overlapping confidence intervals for all tasks above. The original maxout work does not achieve a statistically significant improvement over the existing state of the art for CIFAR10, and these authors report no improvement, suggesting the task is sufficiently well regularized by the data augmentation that their complementary regularization does not help.\", \"hyperparameter_search\": \"while the authors reused the exact same architectures and other hyperparameters employed by the original maxout manuscript in an attempt to be fair-handed, I believe this is, in truth, a mistake that disadvantages their method in the comparison. Certain hyperparameters, critically the learning rate and momentum, will be very sensitive to changes in the learning dynamics such as those introduced by the stochastic generalization of the activation function. Even the optimal network architecture may not be the same. 
The way I would suggest approaching this is with a randomized hyperparameter search (even one in the neighbourhood of the original settings) wherein the same hyperparameters are tried for both methods, and each point in the shared hyperparameter space is further optimized via randomized search over the hyperparameters specific to probout. This gives the method in question a fairer shot by not insisting that the optimal hyperparameters for maxout be the same as those for probout (there is no a priori reason that they should be).\\n\\nThe choice of temperature schedules also seems like an area that should be further explored. It seems odd that increasing the temperature during training should help (the paper does not specify the length of the linear (inverse) decrease period, this should be noted). What is the intuition for why this helps? How could one validate these intuitions experimentally?\\n\\nThe claim that the stochastic procedure \\u201cprevents [filters] from being unused by the network\\u201d is dubious. Section 8.2 of the original maxout paper suggests that dropout SGD training alone is remarkably effective in this respect. This claim should be quantitatively investigated and verified if it is to be made at all.\\n\\nFinally, the paper motivates around probout units learning better invariances, but no attempt is made at quantitatively validating this claim. As it stands, there is a qualitative assessment of the first layer convolutional filters learned, noting that they appear to resemble easily recognizable transformations & quadrature pairs moreso than those learned by vanilla maxout, but I find this somewhat unconvincing. Just because the invariances learned by vanilla maxout are not always obvious to the human practitioner does not mean they are not useful invariances for the model to encode.\"}", "{\"review\": \"For figure 4, I think it's important to note that your evaluation method only returns a low number if *all* of the units in a layer are invariant to the studied transformation. If the layer has a factored representation that represents the studied transformation with one set of units and represents other properties of the input with another, disjoint set of units, there will still be a large cosine difference between the representation of two transformed versions of the input, because the change in the portion of the representation that corresponds to the studied property will result in a change of the normalization constant and thus a change in all of the code elements. I think it would make more sense to normalize each unit separately based on a statistic such as its standard deviation across the training set, and then plot histograms showing how much each normalized unit changes as you vary the input. This way you still control for the possibility that the models operate on different scales, but you can also tell if the representation is factored or not.\"}", "{\"review\": \"First of all we want to thank both reviewers for their detailed comments.\\nBefore giving a more elaborate response we want to mention that we have incorporated your suggestions in a new version of the paper which should appear on arxiv as of Feb 19.\", \"to_both_reviewers\": \"As already acknowledged in the paper we agree that our proposed method comes with an attached high computational cost during inference. 
While this cost, as you mentioned, can be seen as an important practical drawback we believe that explorative research towards novel stochastic regularization techniques (for which an efficient inference procedure is not immediately available) still constitutes a worthwhile research direction from which new, more efficient, regularization techniques could be obtained in the future.\\n\\nIn addition to this increase in computational cost you point out that the improvement over maxout achieved by our approach is minor/non-significant. Although this is acknowledged at the end of the introduction and in the discussion, we have now additionally rephrased the experiments section to reflect this fact more clearly. We want to reiterate that since both methods are tightly coupled - in both their motivation and computational properties as well as our parameter choices - we believe that our results still provide an interesting contribution and can serve as a starting point for future research on the impact of including subspace pooling in deep neural networks. \\n\\nTo reviewer 2 (Anonymous 4eb4):\\n\\nWe agree that the best hyperparameters for both maxout and probout cannot generally be assumed to coincide. However, as a full hyperparameter search for both methods on all datasets requires significant computational resources we decided to stick to the original parameters in an attempt to make a comparison that is at worst biased towards the maxout results (as we were concerned to not make an unfair comparison to previous results). We are currently running a parameter search on both CIFAR-10 and SVHN in a similar manner as you suggested and will include the results in an additional updated version in the coming days. \\n\\nAs you point out, our investigation of the invariance properties of the network in the original paper is only a qualitative one. We performed an additional quantitative analysis in the new version of the paper, comparing invariance properties of maxout and probout networks in a manner similar to [1,2].\\n\\n[1] Koray Kavukcuoglu, Marc'Aurelio Ranzato, Rob Fergus, and Yann LeCun, 'Learning Invariant Features through Topographic Filter Maps', in Proc. International Conference on Computer Vision and Pattern Recognition (CVPR'09), 2009.\\n[2] Visualizing and Understanding Convolutional Networks M.D. Zeiler, R. 
Fergus Arxiv 1311.2901 (Nov 28, 2013)\"}", "{\"title\": \"review of Improving Deep Neural Networks with Probabilistic Maxout Units\", \"review\": \"Authors propose replacing max operation of maxout with probabilistic version, same what Zeiler did for spatial max-pooling in 'stochastic pooling' paper.\\n\\nFor inference, they run the network 50 times using the same sampling procedure, and average the outputs to get probabilities.\\n\\nThey add a 'temperature' parameter to allow to interpolate between maxout and 'uniform random' sampling, and find that annealing the temperature helps.\\n\\nAlso the analyze the per-layer optimal setting of the temperature and found that stochasticity is most important in first 2 convolutional layers, whereas in the last 2 layers, using probout did not give any advantage over maxout.\\n\\nThere are a few minor corrections, but overall this is a solid submission with high relevance for ICLR.\", \"issues\": [\"Minor style issue: use \\text or mbox for 'softmax' and 'multinomial' inside formula\", \"Table 3: could add 2.16% error rate obtained by another ICLR submission -- 'Multi-digit Number Recognition from Street View\", \"Imagery using Deep Convolutional Neural Networks'\", \"There's an explanation for why first layers benefit the most from stochasticity -- how stochasticity 'pulls the units in the subspace closer together.' This is unclear to me, and I would recommend expanding/explaining this view.\"]}", "{\"review\": \"As mentioned in a previous comment I want to report back with\\npreliminary results from the hyperparameter search I am conducting. \\nEssentially I am optimizing over all notable parameters except the\\nnumber of units in each layer - in order to fix the model size \\n(More specifically this includes: learning rate,\\nmomentum, pooling shape, pooling stride, size of the convolutional\\nkernels). \\n\\nMy preliminary results suggest that on CIFAR-10 the best probout\\nnetwork trained without data augmentation can achieve an error rate of <= 10.7% .\\nDuring the hyperparameter search I however also became aware that the\\nparameter search carried out for the original maxout paper was\\nprobably not very exhaustive, as improved performance can also\\nbe achieved with a vanilla maxout model. \\nWhile the difference in performance between probout and maxout appears\\nto be observable for all hyperparameter settings, the best maxout model \\nI obtained so far achieves 10.92% error on CIFAR-10 without data\\naugmentation.\\n\\nInterestingly, the best hyperparameter settings found so far\\nare much closer to the parameters that seem to be used in the 'Network\\nin Network' paper (also submitted to ICLR) than the original settings\\nused in the maxout paper (and maxout seems to perform on par\\nwith 'Network in Network' when a fully connected layer is used after the convolutional\\nlayers). I will cross-link these results in the discussion to that paper. 
\\n\\nI will include the results into the paper as soon as the full\\nparameter search is finished and disclose the parameters found.\\n\\nIt would be interesting if Ian could jump in with a short comment on\\nhow exactly he fixed hyperparameters for his original work.\"}", "{\"review\": \"'For figure 4, I think it's important to note that your evaluation\\nmethod only returns a low number if *all* of the units in a layer are\\ninvariant to the studied transformation.'\\n\\nI agree and will add a sentence to the paper noting this.\\nI actually thought about whether there is a better way to produce such\\nplots than what was done in prior work as well. Your suggestions\\nseems solid and would be a nice addition to the whole layer analysis\\nthat is depicted in Figure 4. I will try to get around to implementing it\\nthis way (although this might have to wait a few days).\"}", "{\"review\": \"For fig 3b, what sampling distribution did you use for the maxout baseline? Are you sampling from the distribution defined in eqn 3? Why is this a meaningful thing to evaluate for maxout, which hasn't been trained to know you're going to sample in that particular way? It seems like it would be more fair to sample different dropout masks, since each of the maxout subnetworks have actually been trained to do the classification task. It's not very surprising to find that a neural net that has been trained to do task X performs better than a neural net that has not been trained to do task X.\"}", "{\"title\": \"review of Improving Deep Neural Networks with Probabilistic Maxout Units\", \"review\": \"The paper introduces a generalization of the maxout unit, called probout. The output of a maxout unit is defined as the maximum of a set of linear filter responses. The output of a probout unit is sampled from a softmax defined on the linear responses. For vanishing temperature this turns into the maxout response.\\n\\nWhile the idea is probably not revolutionary, it seems reasonable and it seems to work fairly well on the datasets tried in the paper. \\n\\nIt is a bit unfortunate that unlike for dropout/maxout, there does not seem to be a closed-form, deterministic activation function at test time that works well. At least the authors did not find any.\\n\\nInstead they propose to average multiple outputs. This makes probout networks much slower at test time than a maxout network. It also puts into perspective the improved classification results over maxout. It seems unlikely that the common practice of halving weights at test time is exactly the optimal way of making predictions for a model trained with dropout. And it is conceivable that some kind of model averaging will be able to improve performance for those networks, too.\\n\\nIt is interesting that probout units with group size two tend to yield filter pairs in quadrature relationship, and much more clearly so than maxout. In that respect they behave similar to average pooling units. It would be interesting to investigate this further in future work.\"}", "{\"reply\": \"My co-author David Warde-Farley chose the final hyperparameters for CIFAR-10. Our goal was simply to improve upon the state of the art with the limited computational resources that were available to us, so we did not run an exhaustive, automated search. Instead, both David and I guessed a small number of hyperparameter settings by hand, and we stopped working on that particular task after we started to get diminishing marginal utility from our time spent on it. 
The hyperparameters for the case with no data augmentation are probably particularly poor. We just used the best hyperparameters from the data augmentation case.\\n\\nI think if you want to do an explicit comparison between two methods, like maxout and probabilistic maxout, it's best to do an automated search, like we did for our comparison between maxout and rectifiers. Unfortunately, when we did this automated search, we had to do it for a small maxout model, since we wanted to compare maxout against a significantly larger rectifier model.\"}" ] }
3RMnfrH_Fi8eU
Fast Training of Convolutional Networks through FFTs
[ "Michael Mathieu", "Mikael Henaff", "Yann LeCun" ]
Convolutional networks are one of the most widely employed architectures in computer vision and machine learning. In order to leverage their ability to learn complex functions, large amounts of data are required for training. Training a large convolutional network to produce state-of-the-art results can take weeks, even when using modern GPUs. Producing labels using a trained network can also be costly when dealing with web-scale datasets. In this work, we present a simple algorithm which accelerates training and inference by a significant factor, and can yield improvements of over an order of magnitude compared to existing state-of-the-art implementations. This is done by computing convolutions as pointwise products in the Fourier domain while reusing the same transformed feature map many times. The algorithm is implemented on a GPU architecture and addresses a number of related challenges.
[ "training", "convolutional networks", "order", "ffts fast training", "ffts convolutional networks", "employed architectures", "computer vision", "machine learning", "ability", "complex functions" ]
submitted, no decision
https://openreview.net/pdf?id=3RMnfrH_Fi8eU
https://openreview.net/forum?id=3RMnfrH_Fi8eU
ICLR.cc/2014/conference
2014
{ "note_id": [ "kiozwKfQtAiCE", "mqDyt0Csiue0G", "rYndr5WNSIv0s", "jtW_tR3l_mtRJ", "CJ8OCn4eysJBk", "EEW9_MBSp3EMG", "hh1zh7rGzdh7x", "xxSIU3JuIPx-V", "wwk0Lbjg6BOSV", "_1Zn_ktRn7elP", "6Tyk61XEml3QB", "lw7vlouJvPwxn", "GGCLrfwpU8FfW" ], "note_type": [ "comment", "review", "review", "review", "review", "review", "review", "review", "review", "review", "comment", "review", "review" ], "note_created": [ 1392792600000, 1390502640000, 1388894400000, 1424830320000, 1392061980000, 1388895300000, 1424830320000, 1391311620000, 1390978260000, 1390978320000, 1392792900000, 1391820480000, 1391311620000 ], "note_signatures": [ [ "Mikael Henaff" ], [ "Rodrigo Benenson" ], [ "Soumith Chintala" ], [ "victor liparsov" ], [ "anonymous reviewer 9161" ], [ "Soumith Chintala" ], [ "victor liparsov" ], [ "Mikael Henaff" ], [ "anonymous reviewer c809" ], [ "anonymous reviewer c809" ], [ "Mikael Henaff" ], [ "anonymous reviewer 3b1a" ], [ "Mikael Henaff" ] ], "structured_content_str": [ "{\"reply\": \"Thank you for the feedback. To answer your comments:\\n\\n -The paper does not explain when spatial-domain calculations would be faster\\n Our analysis in Section 2.2 compares the theoretical complexity of spatial-domain calculations to the Fourier-based method, and our empirical results in Section 3 compare the performance of two spatial-domain implementations (CudaConv and Torch7 (custom)) to the Fourier-based method. We added a sentence clarifying that these two implementations use the direct method in the spatial domain.\\n\\n - The paper does not discuss how the trade-offs would be different on single-core or multi-core CPUs, or on different GPUs.\\n\\n The main point is that for modern ConvNets with large numbers of input and output feature maps, we can significantly reduce the number of operations required by using the FFT-based method. This result (explained in Section 2.2) holds regardless of the architecture on which the algorithm is implemented. We performed experiments with a GPU implementation because this is the most widely used, but the general result holds regardless of whether we use a CPU or GPU.\\n\\n - Details of the Cooley-Tukey implementation are not given / No mention is made of downloadable source code, this work might be hard to reproduce\\n\\n We added a reference to the Cooley-Tukey algorithm. We will eventually make the source code available.\\n\\n - What about non-square images?\\n\\n We added a footnote explaining that the results also apply to non-square images.\\n\\n - Why use big-O notation in 2.2 when the approximate number of FLOP/s is easy to compute? Asymptotic performance isn't really the issue at hand, the relevant values of n and k are not very large. Consider falling back on big-O notation only after making it clear that the main analysis will be done on more precise runtime expressions.\\n\\n Done.\\n\\n - The phrase 'Our Approach' is surprising on page 3, because it does not seem like you are inventing a new Fourier-domain approach to convolution. Isn't the spatial domain faster sometimes, Fourier faster sometimes, and you're writing a paper about how to know which is which? \\n\\nWe are aware that the idea of performing a convolution through a Fourier transform is not new. The speedup occurs when we are doing many pairwise convolutions between two sets of matrices, so the analysis is not on the level of a single convolution but for sets of convolutions. 
As pointed out by another reviewer, a related idea has been explored in the 90's for accelerating inference in previously-trained models. We added a mention of this work. However, our work differs from theirs in the following ways:\\n(1) They use FFTs for inference (i.e. the fprop method only), whereas we show it can be used for all 3 training operations (fprop, backprop and gradient accumulation) as well.\\n(2) They only use FFTs for inference using a previously-trained network (i.e, they do not use it to compute the fprop during training). One reason might be that the number of feature maps used at the time was much smaller (they use 25), and the method was not effective if the filters were not precomputed offline. We use FFTs for the fprop during training and show that it yields a substantial acceleration, even when the FFTs of the filters are not precomputed. This is due to the fact that modern ConvNets have a much larger number of feature maps, which is when the FFT-based method pays off.\\n(3) They use FFTs for the first layer only (all other layers are fully connected), whereas we show that it provides acceleration at all levels.\\n\\nWe agree that the main idea is quite simple, but to our knowledge no-one in the machine learning community currently uses this method for training/inference with convnets, which makes it a new approach in our opinion.\\n\\n \\n\\n - The last paragraph of section 3 is confusing: which of your experiments use and do not use the memory-intensive FFT-reuse trick? The following sentence in particular makes the reader feel he is being duped 'All of the analysis in the previous section assumes we are using this memory-efficient approach [which you now tell is is infeasible in important practical applications]; if memory is not a constraint, our algorithm becomes faster.' Faster than what? Faster than the thing that doesn't require prohibitive amounts of memory?\\n\\n This sentence seems clear to us. By 'memory-efficient', we mean the approach that does *not* require very much extra memory. By 'All the analysis in the previous section', we are referring to the analysis in the previous section (2.2), which assumes we are using the memory-efficient approach which recomputes the FFTs at each iteration rather than storing them. Nevertheless, we reworded this and hope it is now crystal clear.\\n\\n - Page 4: when you speak of 'another means to save memory' what was the first way? (Was the first way to recompute things on demand?)\\n\\n Yes, which is described in the preceding paragraph.\\n\\n - Page 5: Figure 3: This figure is hard to understand. The axes should be labeled on the axes, and the title should contain the contents of the current caption (not the names of the axes), and the caption should help the reader to understand the significance of what is being shown.\\n\\n We fixed this.\\n\\n - Why is the Torch7 implementation called Torch7(custom), and not just Torch7?\\n\\n We mention at the beginning of Section 3 that this is a custom implementation using Torch7. This is different than the version that is shipped with Torch.\\n\\n - The memory access patterns entailed by an algorithm is at least as important for GPU performance as the number of FLOP/s. How does the Cooley-Tukey FFT algorithm work, and how did you parallelize it? These implementation details are really important for anyone trying to reproduce your experiments. / What memory layout do you recommend for feature maps and filters? 
This ties in with a request for more detail on the algorithm you used.\\n\\n This will be clear when we make the source code available.\"}", "{\"review\": \"Speeding up convolutional networks is a quite interesting topic. The presented method seems effective and principled.\\nThe paper reads easily however the experimental results seems somewhat lacking.\\nThe whole premise of the work is to speed-up training and testing, yet there is not even anecdotal results regarding total training time. This matters because, at the end of the day, it is the system wide performance that matters. \\n\\nA theoretical paper that only shows improvement on the number of operations, might hide the fact that the memory access is now more convolved, and thus the overall speed goes down (despite having reduced the number of operations). In the context of this paper, showing some evidence of the overall speed would be welcome.\\n\\nAt the same time, full training experiments would show that indeed, in practice, there is no degradation of the learnt model or the predicted scores. There is all kind of implementation issues that can convert theoretically identical results into different outcomes (because in a computer a + (b + c) != (a + b) + c).\\n\\nWhen using methods based on Fourier transform border effects enter into play. Could you mention how you handle these ?\\nAlso section 3 mentions an 'additional heuristic', could you detail why this is an heuristic, and what are the consequences of using it ?\\nFinally, are there plan of releasing this new training code ? Open source releases tend to increase the impact of a paper.\\n\\nOverall quite interesting work, looking forward for the results of training filters directly in the Fourier domain.\"}", "{\"review\": \"Could you please update your paper with labels for the graphs in Figure 3? It is unclear from the figures what the axes are.\"}", "{\"review\": \"Kita coba bersama\"}", "{\"title\": \"review of Fast Training of Convolutional Networks through FFTs\", \"review\": \"'Fast Training of Convolutional Networks through FFTs' compares Fourier-domain vs. spatial-domain convolutions in terms of speed in convnet applications.\\n \\nThe question of the relative speed of Fourier vs. Spatial convolutions is common among engineers and researchers, and to my knowledge no one has attempted to characterize (at least in convnet-specific terms) the settings when each approach is preferred. Spatial domain convolutions have been the standard in multiple implementations over 30 years of research by scores of researchers. This paper claims, surprisingly, that FFTs are nearly always better in modern convnets. At the same time, the authors of the paper introduce a strategy for FFT parallelization on GPUs that is somewhat particular to the sorts of bulk FFTs that arise in convnet training, and the conclusions are based on that implementation running on GPU hardware.\\n \\n \\nCONTRIBUTIONS\\n \\n1. Empirical comparison of spatial and Fourier convolutions for convnets\\n \\n2. A fast Cooley-Tukey FFT implementation for GPU that's well-suited to convnet application \\n \\n \\nQUALITY \\n \\nThe figures and formatting are not very polished. \\n \\n \\nPRO \\n \\n1. The paper aims at an important issue for convnet researchers \\n \\n2. The claim that FFT-based convolutions are better will be broadly interesting \\n \\n \\nCON \\n \\n1. The paper does not explain when spatial-domain calculations would be faster \\n \\n2. 
The paper does not discuss how the trade-offs would be different on single-core or multi-core CPUs, or on different GPUs.\\n \\n3. Details of the Cooley-Tukey implementation are not given\\n \\n4. No mention is made of downloadable source code, this work might be hard to reproduce\\n\\n\\nCOMMENTS\\n \\n- What about non-square images?\\n \\n- Why use big-O notation in 2.2 when the approximate number of FLOP/s is easy to compute? Asymptotic performance isn't really the issue at hand, the relevant values of n and k are not very large. Consider falling back on big-O notation only after making it clear that the main analysis will be done on more precise runtime expressions.\\n \\n- The phrase 'Our Approach' is surprising on page 3, because it does not seem like you are inventing a new Fourier-domain approach to convolution. Isn't the spatial domain faster sometimes, Fourier faster sometimes, and you're writing a paper about how to know which is which?\\n \\n- The last paragraph of section 3 is confusing: which of your experiments use and do not use the memory-intensive FFT-reuse trick? The following sentence in particular makes the reader feel he is being duped 'All of the analysis in the previous section assumes we are using this memory-efficient approach [which you now tell is is infeasible in important practical applications]; if memory is not a constraint, our algorithm becomes faster.' Faster than what? Faster than the thing that doesn't require prohibitive amounts of memory?\\n \\n- Page 4: when you speak of 'another means to save memory' what was the first way? (Was the first way to recompute things on demand?)\\n \\n- Page 5: Figure 3: This figure is hard to understand. The axes should be labeled on the axes, and the title should contain the contents of the current caption (not the names of the axes), and the caption should help the reader to understand the significance of what is being shown.\\n \\n- Why is the Torch7 implementation called Torch7(custom), and not just Torch7?\\n \\n- The memory access patterns entailed by an algorithm is at least as important for GPU performance as the number of FLOP/s. How does the Cooley-Tukey FFT algorithm work, and how did you parallelize it? These implementation details are really important for anyone trying to reproduce your experiments.\\n \\n- What memory layout do you recommend for feature maps and filters? 
This ties in with a request for more detail on the algorithm you used.\"}", "{\"review\": \"Training convnets on GPUs has been a challenge in terms of both time and memory constraints, more often constraining the size of the models by memory requirements per GPU rather than processing time.\\n\\nIt would be helpful to add a comparison of memory requirements for your method compared to purely spatial convolution implementations.\", \"you_mention\": \"'Also note that our method performs the same regardless of kernel size, since we pad the kernel to be the\\nsame size as the input image before applying the FFT'\\n\\nFrom my understanding without looking at your implementation, I would assume that because of this particular operation, the memory requirements would balloon, making this method impractical to even implement current state-of-the-art models for Image and audio recognition problems.\\nIt would be good to have a section talking about practical constraints, issues and how you guys think they should be handled.\"}", "{\"review\": \"Kita coba bersama\"}", "{\"review\": \"Thank you all for the constructive comments.\\n\\nSoumith,\\nWe added details as to how we addressed the memory issues. It's true that the method requires some extra memory, but you can preallocate a block of memory once and re-use it to store the frequency representations at each layer. So the amount of extra memory needed is equal to the maximum amount of memory required to store the frequency representations of a single layer. This is small compared to the amount of memory needed to store a large network.\\n\\nRodrigo and Anonymous c809,\\nWe added a number of results reporting the running times for several different configurations of image and kernel sizes, as well as different numbers of input and output feature maps. We also added the running times for a training iteration of a whole network (not just a single layer). This is to account for memory accesses, padding and other implementation details. We also mention the results of our unit tests which compare the outputs of the FFT-based convolution and the direct method (the differences are very small). Concerning the 'additional heuristic', this case is actually included in the analysis section so we removed it to avoid confusion.\\n\\nConcerning the border effects, we simply computed the circular convolution using the product of frequency representations and crop the output to only include coefficients for which the weight filter is contained within the image. We will clarify this in the paper.\\n\\nWe will also edit the graphs in Figure 3 to make the axis labels clearer.\"}", "{\"title\": \"review of Fast Training of Convolutional Networks through FFTs\", \"review\": \"The paper presents a technique for accelerating the processing of CNNs by performing training and inference in the frequency (Fourier) domain. The work argues that at a certain scale, the overhead of applying FFT and inverse-FFT is marginal relative to the overall speed gain.\\n\\nAs noted by the previous reviewers, the speedup is presented simply as that obtained for three functions that lie at the heart of each convolutional layer. It would be valuable if the speedup could also be presented in the context of a comparison to the overall training time of a CNN on a standard dataset.\\n\\nAlso noted is the lack of reference to the fact that Convolution Theorem refers to circular convolution and not linear (i.e. non-circular) convolution. 
It is assumed inconsequential since CNNs use neither circular convolution (weight filters do not wrap around images) nor linear convolution (weight filters are always fully contained within the image and do not 'hang off' the edges). Thus, the resulting differences between circular and linear convolution would not impact the feature map y_f. This seems to be hinted at by the n' term in section 2.2, but is not obvious.\\n\\nThe future work seems logical and would be interesting to pursue. One other direction to consider is approximations to the FFT (which there are many) that could retain most of the information needed in context of CNNs at a fraction of the computational cost.\", \"minor_editorial_issue\": \"in figure 3 the axes are noted in the title of the figure rather than as labels for the x and y axis.\"}", "{\"title\": \"review of Fast Training of Convolutional Networks through FFTs\", \"review\": \"The paper presents a technique for accelerating the processing of CNNs by performing training and inference in the frequency (Fourier) domain. The work argues that at a certain scale, the overhead of applying FFT and inverse-FFT is marginal relative to the overall speed gain.\\n\\nAs noted by the previous reviewers, the speedup is presented simply as that obtained for three functions that lie at the heart of each convolutional layer. It would be valuable if the speedup could also be presented in the context of a comparison to the overall training time of a CNN on a standard dataset.\\n\\nAlso noted is the lack of reference to the fact that Convolution Theorem refers to circular convolution and not linear (i.e. non-circular) convolution. It is assumed inconsequential since CNNs use neither circular convolution (weight filters do not wrap around images) nor linear convolution (weight filters are always fully contained within the image and do not 'hang off' the edges). Thus, the resulting differences between circular and linear convolution would not impact the feature map y_f. This seems to be hinted at by the n' term in section 2.2, but is not obvious.\\n\\nThe future work seems logical and would be interesting to pursue. One other direction to consider is approximations to the FFT (which there are many) that could retain most of the information needed in context of CNNs at a fraction of the computational cost.\", \"minor_editorial_issue\": \"in figure 3 the axes are noted in the title of the figure rather than as labels for the x and y axis.\"}", "{\"reply\": \"Thank you for the feedback. We posted an updated version of the paper which incorporates these changes.\"}", "{\"title\": \"review of Fast Training of Convolutional Networks through FFTs\", \"review\": \"The paper describes the use of FFTs to speed-up the computation during training for convolutional neural networks working on images.\\nEssentially this is presented as a pure speed-up technique and doesn't change the learning algorithm, or (in an interesting way) the representation. \\n\\nThe idea of applying FFTs to speed up image processing systems, particularly 'sliding windows' systems, is far from new and there is a large literature on this.In particular combining FFTs with Neural networks is not new,\\ne.g. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.36.1967\\nSome of this prior literature should be cited. \\n\\nI am not aware of any work that applys the back-propagation in the Fourier domain too. 
\\n\\nThe resulting speed-ups are significant for the scenario the authors are considering, and it is useful to know that the practical implementation delivers these gains. As they conclude, these results may change the way such problems are formulated by removing the pressure to use small kernels.\\n\\nExpand the caption for Figure 2. Total number of operations for what? \\n\\nFigure 3 needs units for the y axis (text says seconds?), and for the x axes - ie areal or linear pixels?\\nAlso for each of the 3 sets of graphs, there needs to be an indication of what are the values of the parameters which are held constant.\\n\\nPlease say in the text that all 3 systems (Torch, Authors' and Krizhevsky) were running on the same (which?) GPU. \\n\\nCitation for Cooley-Tukey FFT?\"}", "{\"review\": \"Thank you all for the constructive comments.\\n\\nSoumith,\\nWe added details as to how we addressed the memory issues. It's true that the method requires some extra memory, but you can preallocate a block of memory once and re-use it to store the frequency representations at each layer. So the amount of extra memory needed is equal to the maximum amount of memory required to store the frequency representations of a single layer. This is small compared to the amount of memory needed to store a large network.\\n\\nRodrigo and Anonymous c809,\\nWe added a number of results reporting the running times for several different configurations of image and kernel sizes, as well as different numbers of input and output feature maps. We also added the running times for a training iteration of a whole network (not just a single layer). This is to account for memory accesses, padding and other implementation details. We also mention the results of our unit tests which compare the outputs of the FFT-based convolution and the direct method (the differences are very small). Concerning the 'additional heuristic', this case is actually included in the analysis section so we removed it to avoid confusion.\\n\\nConcerning the border effects, we simply computed the circular convolution using the product of frequency representations and crop the output to only include coefficients for which the weight filter is contained within the image. We will clarify this in the paper.\\n\\nWe will also edit the graphs in Figure 3 to make the axis labels clearer.\"}" ] }
IrVvIL2BaXrg4
Reference Distance Estimator
[ "Yanpeng Li" ]
A theoretical study is presented for a simple linear classifier called reference distance estimator (RDE), which assigns the weight of each feature j as P(r|j)-P(r), where r is a reference feature relevant to the target class y. The analysis shows that if r performs better than a random guess in predicting y and is conditionally independent of each feature j, the RDE will have the same classification performance as that from P(y|j)-P(y), a classifier trained with the gold standard y. Since the estimation of P(r|j)-P(r) does not require labeled data, under the assumption above, an RDE trained with a large number of unlabeled examples would be close to one trained with infinite labeled examples. For the case where the assumption does not hold, we theoretically analyze the factors that influence the closeness of the RDE to the perfect one under the assumption, and present an algorithm to select reference features and combine multiple RDEs from different reference features using both labeled and unlabeled data. The experimental results on 10 text classification tasks show that the semi-supervised learning method improves on supervised methods using 5,000 labeled examples and 13 million unlabeled ones, and in many tasks, its performance is even close to a classifier trained with 13 million labeled examples. In addition, the bounds in the theorems provide a good estimate of the classification performance and can be useful for new algorithm design.
[ "rde", "assumption", "feature j", "classification performance", "classifier", "data", "labeled examples" ]
submitted, no decision
https://openreview.net/pdf?id=IrVvIL2BaXrg4
https://openreview.net/forum?id=IrVvIL2BaXrg4
ICLR.cc/2014/conference
2014
{ "note_id": [ "Vx9IwXzYeBRDk", "J9nSyRXqSM9Tj", "-t2JA1q4M5znO", "vtDg5wREyrtEf", "eexW18S30F1K6", "qLyDi4fHDriEo" ], "note_type": [ "review", "review", "review", "review", "comment", "review" ], "note_created": [ 1391902020000, 1391918640000, 1391889240000, 1391864040000, 1392165900000, 1391695980000 ], "note_signatures": [ [ "anonymous reviewer 2bb3" ], [ "Yanpeng Li" ], [ "Yanpeng Li" ], [ "anonymous reviewer 7977" ], [ "Yanpeng Li" ], [ "anonymous reviewer 5b4f" ] ], "structured_content_str": [ "{\"title\": \"review of Reference Distance Estimator\", \"review\": \"This paper proposes an interesting approach: identify features which are likely to correlate with a class predictor on a small set and learn to predict the presence of this binary features on a large unlabeled dataset. The paper is however not very clear and its experimental section is far from thorough.\\n\\nThe paper is dense but misses important points. In particular,\\n - what are the modeling assumptions of RDEs?\\n - what types of data are likely to succeed?\\n\\nIt could be more fluent and better structured. In particular, the theorems follow each other without much transitions. The reader could be guided more telling what you are going to show and why, on results are going to be linked toward the final goal. I also feel that the results of supervised RDE on their own seems very good, better than SVM and logistic regression. It might be good to invest more work to propose RDEs in a supervised setting with extensive empirical comparison on reference dataset, before then extending them to semi-supervised learning. A paper with a single message is likely to be clearer. \\n\\nThe quality of its experimental section could greatly be improved. The experiments are performed on a single dataset. Reporting results on standard text classification datasets like RCV1 and other types of data would greatly improve the paper. I feel this is necessary since your results indicate that SVM and logistic regression, which have been state-of-the-art text classification algorithms for many years seem significantly outperformed by the simple supervised RDE. If this is confirmed over benchmark dataset this could be a game changer. However, this single result in non-standard setting is likely to rise doubts. I would also appreciate an analysis of the different step of the algorithm: \\n- is the reference feature selection step important (e.g. you could compare with other feature selection strategies like information gain) ?\\n- are the individual RDE good predictors of their reference features ? is that important or it does not matter for the final task?\\n- what is the impact of k on the generalization performance?\\n\\nAlso, the baseline algorithm SVM and logistic regression are not trained on the full dataset (this is easily feasible with an efficient SGD implementation like http://leon.bottou.org/projects/sgd). Moreover, I do not understand why no learning curve is reported for semi-supervised RDEs (only the point at 5K labeled samples is reported). Finally, you mention that the hyperparameters of the baselines have been tuned on the test set, which is usually discouraged as it could a very optimistic estimate of the generalization performance. It also raises the question of the methodology used for tuning of k, t and the regularization parameter 'ridge' for your technique. 
Could you detail your procedure and provide more detail on hyper-parameter sensitivity?\"}", "{\"review\": \"Thank you for your comments.\\n\\nSome of your suggestions are what we are currently working on, e.g., the theoretical analysis about supervised/semi-supervised RDE, the relation with other classifiers and much more experiments on various challenge evaluation data sets. I must say it is a very promising direction technically no matter people like or not the description of the manuscript. \\n\\nHere we just aim to report the preliminary findings in theory and experiment. Since there is page limitation in this conference, we are not able to include a lot of details in this one.\"}", "{\"review\": \"To Anonymous 7977:\\n\\n- '..just selects features are (on average) as correlated to other features as possible...'\\n\\n1) You could have missed the I(r) in the formula, an important measure of the discriminating ability of reference feature, which is also obtained from the theory. \\n\\n2) When you look at the Step 2 of Section 2.2, you will know it is not a feature selection method, because each RDE (Formula 1) incorporates new information (beyond labeled data) from unlabeled data via P(r|j) and P(r). In other words, this method learns new features from the co-occurrence of existing features in huge unlabeled data. \\n\\n3) For example, we have the following Boolean features: f1, f2, f3, f4, f5... In the instance xi, the features f1, f2, and f3 appear. We select f4 and f5 as reference features. According to the first two steps of Section 2.2, the new representation for xi has two features. The feature values are RDE(xi, f4) and RDE(xi, f5), which are not the simple weighted combination of the old features in labeled data. For example, if f3, a highly indicative feature, does not appear in labeled data but in unlabeled data, it cannot be utilized in supervised learning method but can be learnt from RDE (xi, f4) if f4 is a good reference feature. Also it will be the similar case if f2 is an extremely low-frequency in labeled data. It provides a powerful strategy to overcome the data sparseness problem in machine learning, and also makes it feasible to design high-order features from the combination of existing features, e.g., n-gram features (Figure 1b). \\n\\n\\n-\\t'There is little motivation on using a reference feature --- which is not available in practice'\\n\\nCan you prove that it is not available in practice? Actually, the assumptions in Theorem 1 are much weaker than those in Naive Bayes and Co-training. Also, most machine learning method aims to find a prediction label as close as the gold standard label. RDE aims to find a good reference feature which is not necessarily the gold standard. In this sense, it should be more practical. \\n\\n\\n-\\t\\u201cMoreover, how Theorem 3's condition is generally true?\\u201d \\n\\nIt is a much weaker assumption than that in Theorem 1, and found generally true in the text classification experiments. Due to the page limitation, I am not able to give a detailed analysis in this paper. Theoretical study needs to make assumptions at first even if there could be a gap with the practice such as the IID assumption and conditional independence assumption. At least they can inspire people to move towards the correct direction. 
\\n\\n Looking forward to your response.\"}", "{\"title\": \"review of Reference Distance Estimator\", \"review\": \"The paper proposes a feature selection algorithm and learn a new representation based on the selected features. The new representation is then used for constructing classifiers.\\n\\nThis is a very difficult paper to read, due to the exposition style. As far as I can gather, the feature selection algorithm just selects features are (on average) as correlated to other features as possible, cf. step 1 in section 2.2. The new representation is then weighted combination of the old features. \\n\\nI have difficulty in understanding either contributions. There is little motivation on using a reference feature --- which is not available in practice. Moreover, how Theorem 3's condition is generally true?\"}", "{\"reply\": \"Thanks for your comments. Your suggestions are helpful.\", \"but_i_would_like_to_discuss_with_you_about_the_following_comments\": \"- ' it is hard to understand why it should perform better than state of the art linear methods. The proposed heuristic could be interpreted as a prior on classifier weights, but again in the context of large training sets like the ones used in the experiments, this should not be a real advantage'\\n\\n1. Compared to supervised learning methods, the proposed method can be interpreted as feature weight learning for linear classifier only in some special cases where a) the feature vocabularies of training and test/unlabeled data are the same, and b) the classifier for the learned RDE features is a linear classifier.\\n\\nHowever, in practice there are a number of tasks that have 'out-of-vocabulary' (OOV) features, such as the word/n-gram/phrase based features in IR and NLP, sparse visual word features obtained by clustering or pooling in image representation, or other sparse signal features learned from universal data... That is, there must be a lot of features in the unlabeled data but not in the training data, since they usually follow a power law distribution. These ignored features cannot be utilized well in supervised learning but can be activated in the RDE based semi-supervised learning. See the following example:\", \"we_have_the_following_boolean_features\": \"f1, f2, f3, f4, f5... In the instance xi, the features f1, f2, and f3 appear. We select f4 and f5 as reference features. According to the first two steps of Section 2.2, the new representation for xi has two features. The feature values are RDE(xi, f4) and RDE(xi, f5), which are not the simple weighted combination of the old features in labeled data. For example, if f3, a highly indicative feature, does not appear in labeled data but in unlabeled data, it cannot be utilized in supervised learning method but can be learnt from RDE (xi, f4) if f4 is a good reference feature. Also it will be the similar case if f2 is an extremely low-frequency in labeled data. It provides a powerful strategy to overcome the data sparseness problem in machine learning, and also makes it feasible to design high-order features from the combination of existing features, e.g., n-gram features (Figure 1b).\\n\\n2. Even if in some cases it is \\u201ca linear combination of the initial data features\\u201d or feature reweighting method, is it \\u201chard to understand why it should perform better than state of the art linear methods\\u201d? 
It is easy to understand that a Na\\u00efve Bayes classifier trained with 13 million labeled examples could be better than a linear SVM trained with 5000 labeled ones. The Na\\u00efve Bayes classifier can be seen as feature reweighting of the SVM classifier (where the OOV features can be viewed as zero weights), which improves the performance greatly (Figure 1a and 1b). The reason is the law of large number. RDE based semi-supervised learning just aims to move towards this goal (Figure 1 and Table 1). \\n\\n3. \\u201c\\u2026in the context of large training sets like the ones used in the experiments, this should not be a real advantage\\u2026\\u201d \\nIt is against all the semi-supervised learning methods but not just for the RDE based method. My opinion is that \\u201clarge\\u201d or \\u201csmall\\u201d is just a relative concept, so if you have large labeled data there tends to be much larger unlabeled data (waiting to be labeled). Otherwise, the task is not very suitable for machine learning since most of the work has already been done by humans, and manual annotation of the rest seems not too difficult. Also, the \\u201clarge\\u201d or \\u201csmall\\u201d data depends on features. For example, for word features 10 million documents should be a large set, since most words appears at least several times. But for high order n-grams it appears to be a small set and we need to learn n-gram features from much more unlabeled data. In this experiment if we do not use the huge labeled MEDLINE dataset, we cannot get a good estimation of semi-perfect classifier, which is important to justify the theorems. \\n\\n\\n-\\t\\u201cThe development of these features is neither intuitive nor theoretically founded.\\u201d\\n\\nIt is well known that the performance of individual features is a very important factor to determine the quality of a set of features. All the theorems and the experiment (Figure2) are concerned with the performance of individual RDEs. The ensemble algorithm is to select the best RDEs as individual features and combine them together for better performance. Although we could ignore the diversity of the RDE features (our future study), the development of these features are supported by strong intuition and at least partially by theory and experiments.\"}", "{\"title\": \"review of Reference Distance Estimator\", \"review\": \"The paper describes a heuristic algorithm for learning representation features from labeled and unlabeled data. These features are then used as inputs to a classifier trained on labeled data. The main motivation of the algorithm is semi-supervised learning when unlabeled data can be used for estimating and selecting the representation features. Properties of these new features are analyzed under different hypothesis and in particular, a bound is provided for selecting the new features among a set of candidates. Experiments are then performed on a large dataset of Medline abstract and the proposed approach is compared to supervised and semi-supervised classifiers.\\nThe paper introduces new ideas for learning intermediate representations. They appear to be quite effective in the experimental comparison and the proposed method performs better than alternative semi-supervised baselines. The author also develops a series of properties of the learned features. On the other hand, the paper presents different weaknesses. The development of these features is neither intuitive nor theoretically founded. 
The final classifier performs a linear combination of the initial data features, and it is hard to understand why it should perform better than state of the art linear methods. The proposed heuristic could be interpreted as a prior on classifier weights, but again in the context of large training sets like the ones used in the experiments, this should not be a real advantage. Some of the experimental results are surprising. It is said that logistic regression and linear SVMs cannot be trained on large amounts of data and the author does not present results with these classifiers beyond 200 k training data. This should be reconsidered. Results of semi-supervised classifiers are also doubtful. The baselines used for comparison are not able to leverage the use of unlabeled data when usually for text classification, semi-supervised learning is quite effective.\\nThe form of the paper should also be improved. In particular the experimental illustration of the distance and bound statistics is not clearly explained.\\nGlobally, there are interesting ideas in this method, but also conceptual and experimental weaknesses that should be improved upon.\"}" ] }
mQPhQwYHsGQ31
Learned versus Hand-Designed Feature Representations for 3d Agglomeration
[ "John A. Bogovic", "Gary B. Huang", "Viren Jain" ]
For image recognition and labeling tasks, recent results suggest that machine learning methods that rely on manually specified feature representations may be outperformed by methods that automatically derive feature representations based on the data. Yet for problems that involve analysis of 3d objects, such as mesh segmentation, shape retrieval, or neuron fragment agglomeration, there remains a strong reliance on hand-designed feature descriptors. In this paper, we evaluate a large set of hand-designed 3d feature descriptors alongside features learned from the raw data using both end-to-end and unsupervised learning techniques, in the context of agglomeration of 3d neuron fragments. By combining unsupervised learning techniques with a novel dynamic pooling scheme, we show how pure learning-based methods are for the first time competitive with hand-designed 3d shape descriptors. We investigate data augmentation strategies for dramatically increasing the size of the training set, and show how combining both learned and hand-designed features leads to the highest accuracy.
[ "feature representations", "agglomeration", "methods", "feature descriptors", "features", "image recognition", "labeling tasks", "recent results", "machine", "derive feature representations" ]
submitted, no decision
https://openreview.net/pdf?id=mQPhQwYHsGQ31
https://openreview.net/forum?id=mQPhQwYHsGQ31
ICLR.cc/2014/conference
2014
{ "note_id": [ "ly7JyuzPfvlLO", "bao4aztRuB_OE", "ZKhkZLXxPpZQr", "ggu9yksk4PhqM", "aa9nae8inUo3d", "9k6RkJPpnVgOB" ], "note_type": [ "comment", "review", "review", "comment", "review", "review" ], "note_created": [ 1392766320000, 1392766440000, 1391833560000, 1392766260000, 1391830260000, 1391479560000 ], "note_signatures": [ [ "John Bogovic" ], [ "John Bogovic" ], [ "anonymous reviewer b9e1" ], [ "John Bogovic" ], [ "anonymous reviewer 7c2a" ], [ "anonymous reviewer db5f" ] ], "structured_content_str": [ "{\"reply\": \"Thank you for your comments.\\n\\nTo our knowledge, this work is indeed the first to use representation learning in the context of 3d shape analysis, and we state this at the end of the introduction.\"}", "{\"review\": \"Thank you for the comments and suggestions.\\n\\nWe have avoided using the word 'significant' where inappropriate in our revision.\\n\\nAt this stage we have decided to leave the figures colored as they are, especially since only a subset of accepted papers will be published in the JMLR special topics issue. If this work is among those selected, we will revisit the figure appearance. Thank you once again for your suggestions.\"}", "{\"title\": \"review of Learned versus Hand-Designed Feature Representations for 3d Agglomeration\", \"review\": \"This works presents experiments on different features for the task of classifying agglomeration of 3d neuron fragments. Results of experiments using a wide range hand-crafted features and their combinations, as well as learned features are presented. The results shows that learned features can obtain performance similar to hand-crafted features, and that their combination can yield even better performance.\", \"novelty\": \"I am not very familiar with the field of 3d imaging, so it is difficult for me to asses the novelty of this work. But, I trust the authors when they claim that showing that learn features can perform as well as hand-crafted feature for analysis or classification of 3d shapes is a new result. Other contributions include evaluating performance of a large set of hand-crafted features, proposing an end-to-end approach to derive features, and augmenting the 3d images dataset with transformations.\", \"quality\": \"I found that the paper is well written and well structured. The experimental method is sound and the conclusions are in line with the results.\\n\\nI recommend to accept this paper.\", \"pros\": [\"Experiments are well constructed.\", \"Results are well presented are are relevant to the field of representation learning.\"], \"cons\": [\"The figures are not grayscale friendly, which made it harder to understand some important points. Once I saw the color figures, it made things much clearer.\"], \"small_nit\": \"In section 3.2, the authors use the term 'significantly' when there are no uncertainties on the performance measure. Perhaps using another term would be more appropriate.\"}", "{\"reply\": \"Thank you for your comments.\\n\\nIndeed, the 'boundary map features' perform well on this task, in part due to the domain knowledge introduced in the selection of these features (by us and other researchers working on this task), and the importance of the boundary map in the generation of the superpixels themselves. Another contributing factor could involve the design of the boundary detection and superpixel generation procedure that favors over- vs under-segmentation. 
The high-performance of the boundary prediction methods results in negative and positive examples being mostly distinguishable from the boundary-prediction alone.\\n\\nWe hypothesize that many of the object/image features would perform relatively poorly in the absence of the boundary map features. In practice, other work in this area (Andres et al. and Nunez-Iglesias et al., for example) used boundary map-like features. These two factors led us to explore performance gains that could be achieved with additional features, rather than performance using those features in isolation.\\n\\nWhile your concerns regarding the domain specific nature of this work are well taken, we note that 3d shape analysis in general is a broad and varied field including tasks such as mesh segmentation, shape retrieval, etc. As a result, we expect that the methods could impact these applications as well, though verification will be needed of course.\\n\\nWe appreciate the difficulties in understanding the nature of the task. We considered including summaries of the results of similar work on 3d agglomeration, but fear that these could result in confusion, as other methods differ in their heuristics, model design, and optimization methods. As our goal was specifically to compare hand-designed to learned features, we felt that these additional comparisons could cloud the main point to readers.\"}", "{\"title\": \"review of Learned versus Hand-Designed Feature Representations for 3d Agglomeration\", \"review\": \"This paper compares the use of hand-designed and learned features for the analysis of 3d objects. The authors use an extensive set of 25 hand-designed features to obtain 92.33% accuracy for the task of agglomerating neuron fragments. They then explored fully supervised end to end learning of features from raw inputs, but only obtain 85.54% accuracy. However, because the data is small compared to its dimensionality, unsupervised learning provides some improvement. Finally, a dynamic pooling method allows them to match the hand-designed features score. Data augmentation however brings fully supervised and unsupervised approaches on par.\", \"novelty_and_quality\": \"I am not very familiar with the literature in 3d analysis but the introduction suggests that feature learning (fully supervised and unsupervised) has not been used before or is not common. If so, the methods themselves are not novel but their application to this particular field is. It would be good if the authors could state clearly if this has ever been tried before or not. The quality of the work is good and thorough.\", \"pros\": [\"directly comparing hand-designed and learned features is good.\", \"learned features are shown to be slightly superior in accuracy.\"]}", "{\"title\": \"review of Learned versus Hand-Designed Feature Representations for 3d Agglomeration\", \"review\": \"Very interesting paper on different kinds of feature representations for agglomeration of 3D neuron fragments. The paper demonstrates how the performance using a set of hand designed features can be further improved with representations derived using both end-to-end and unsupervised training techniques on this task. 
Several interesting machine learning techniques for feature discovery and training, adopted in other fields have been successfully deployed for this task.\\n\\n- Well written paper with solid results evaluating different novel features and their combinations, for agglomeration of 3D neuron fragments.\\n - Several powerful techniques - end-to-end learning/unsupervised training/dynamic pooling, have been successfully used for this task.\\n\\nThe paper is quite domain specific although novel in its domain. It does not seem to prescribe any new take-home techniques for other tasks/domains.\\n\\nThe features and machine learning techniques introduced in the paper produce remarkably high performance results. It would be useful to have more insights on the nature of the task and how challenging it is in general. The proposed 'boundary map features' are capable of achieving 90% AUC for the task (Table 1) - Why is that? - Is this something specific to the task? It would be useful to understand how other features (object/image based features) perform by themselves (Table 1) - the results presented for these features are after combination only. Do they perform only in combination or do they also have a base performance of 90% by themselves?\\n\\nThe paper shows very clear benefits of using features from unsupervised learning techniques. What kind of attributes are these features learning? Are they similar to the boundary map features and hence the high performance as well? More insights would be useful.\"}" ] }
OP4ePyQXNu-da
On Fast Dropout and its Applicability to Recurrent Networks
[ "Justin Bayer", "Christian Osendorfer", "Sebastian Urban", "Nutan Chen", "Daniela Korhammer", "Patrick van der Smagt" ]
Recurrent Neural Networks (RNNs) are rich models for the processing of sequential data. Recent work on advancing the state of the art has been focused on the optimization or modelling of RNNs, mostly motivated by addressing the problems of the vanishing and exploding gradients. The control of overfitting has seen considerably less attention. This paper contributes to that by analyzing fast dropout, a recent regularization method for generalized linear models and neural networks, from a back-propagation inspired perspective. We show that fast dropout implements a quadratic form of an adaptive, per-parameter regularizer, which rewards large weights in the light of underfitting, penalizes them for overconfident predictions and vanishes at minima of an unregularized training loss. One consequence of this is the absence of a global weight attractor, which is particularly appealing for RNNs, since the dynamics are not biased towards a certain regime. We positively test the hypothesis that this improves the performance of RNNs on four musical data sets and a natural language processing (NLP) task, on which we achieve state of the art results.
[ "rnns", "fast dropout", "applicability", "state", "networks", "rich models", "processing", "sequential data", "recent work" ]
submitted, no decision
https://openreview.net/pdf?id=OP4ePyQXNu-da
https://openreview.net/forum?id=OP4ePyQXNu-da
ICLR.cc/2014/conference
2014
{ "note_id": [ "yE_Ay6FimTX09", "22-aeRonaO2n8", "_Ifoh30XojiqN", "N1E31iPUSc7Um", "DDB3rJ8n3-9zV", "RlNmIbKrEuIrj", "mATGdlDSLbdSv", "gX7L54shLjXYb", "PPMEr2u9DTn9i", "LL10tMhg0dY_p", "sDhns359fRsUF", "OOmozKPb_SzGw" ], "note_type": [ "review", "review", "review", "review", "review", "comment", "comment", "review", "comment", "review", "review", "review" ], "note_created": [ 1392130500000, 1391844060000, 1392130560000, 1393875240000, 1395184620000, 1392130560000, 1392130620000, 1391902260000, 1392297000000, 1392563640000, 1388779560000, 1392219900000 ], "note_signatures": [ [ "Justin Bayer" ], [ "anonymous reviewer b50c" ], [ "Justin Bayer" ], [ "Justin Bayer" ], [ "Nicolas Boulanger-Lewandowski" ], [ "Justin Bayer" ], [ "Justin Bayer" ], [ "anonymous reviewer fe58" ], [ "Justin Bayer" ], [ "Justin Bayer" ], [ "Justin Bayer" ], [ "anonymous reviewer fc8d" ] ], "structured_content_str": [ "{\"review\": \"Dear reviewer b50c,\\n\\nWe will address your points one by one.\\n\\n> This paper rederives the fast dropout method which is a deterministic alternative to the originally proposed stochastic dropout regularization for neural networks. Then the authors apply fast dropout to recurrent neural networks (RNN) on two different datasets.\\n\\nThe derivation is actually a minor part of the paper (about a single page out of nine) which is necessary to prepare common ground for the understanding of section 2.2.2, which is novel.\\n\\n> The main focus of the paper is the derivation of the fast dropout training with almost no link to RNNs until the experiment sections, which makes the paper consists of two disconnected components.\\n\\nWhile it is true that the analysis (Section 2.2.2.) is more general than RNNs, the consequence of it is particularly important for RNNs: the regulariser does not bias the dynamics. This is mentioned in the abstract, main text and partially in the conclusion, but probably not prominently enough.\\n\\n> The authors could have presented experiments on feedforward NN without affecting the smoothness of the paper at all.\\n\\nWe did not include any experiments with standard feedforward MLPs since this has been done extensively in [1]; we do not feel that there is something that we could do better and do not see any novel hypothesis for MLPs we could test.\\n\\n> The results are expected. Dropout, on small datasets, does better than baseline systems which doesn\\u2019t use any stochastic regularization and is as good as ones that does (Graves 2013). \\n\\nWithout the analysis one cannot know about a bias of dynamics. Indeed, recent work [4] has shown that non-fast dropout might bias the dynamics due to its relation to L2 regularization. Thus we believe that the results are far form expected.\\nFD-RNNs improve upon the state of the art on Penn Treebank character prediction set in [5], but the result table probably needs to be clearer.\\n\\n> In table 2, it is mentioned that assumptions made in RNN-NADE system is also applicable to FD. The authors are encouraged to add these assumptions to get better results.\\n\\nApplying FD to RNN-NADE is certainly interesting by itself, but well beyond the scope of this paper.\\n\\n> The results sections (3.1.2 and 3.2.2) still need more work. There are no analysis or discussion about the achieved results. Analysis of the effects of network structure, dropout rate, and other hyper-parameters are missing. 
The tables' order is reversed (table 3 is referenced first then table 2 and table 1).\\n\\nWe will add the actual hyper parameters obtaining the results to the appendix of an upcoming version. We will also add a few sentences on characteristics of the estimated parameters, such as the Eigenvectors of the transition matrix.\\n\\nMore specifically, we will show empirically that the transition matrices found during optimisation inhibit rather big Eigenvalues (>10) which is unlikely when classic regularisers such as weight decay are used.\\n\\n> For the results in 3.1.2, please clarify if there is a test data on which you measure NLL or you measure performance on the training data only (there is no training/test division menitoned for the music datasets).\\n\\nThe results are obtained on the same splits as in the previous works by [2, 3]. We mention the use of test data for evaluation twice (in the table caption and the beginning of the section) but the information is probably not well placed. We will update the experimental section to be more clear.\\n\\n\\nThanks for your efforts; we will address the issues in an upcoming version later this week.\\n\\n\\n\\n[1] Wang, Sida, and Christopher Manning. 'Fast dropout training.' Proceedings of the 30th International Conference on Machine Learning (ICML-13). 2013.\\n[2] Bengio, Yoshua, Nicolas Boulanger-Lewandowski, and Razvan Pascanu. 'Advances in optimizing recurrent networks.' arXiv preprint arXiv:1212.0901(2012).\\n[3] Boulanger-Lewandowski, Nicolas, Yoshua Bengio, and Pascal Vincent. 'Modeling temporal dependencies in high-dimensional sequences: Application to polyphonic music generation and transcription.' arXiv preprint arXiv:1206.6392 (2012).\\n[4] Wager, Stefan, Sida Wang, and Percy Liang. 'Dropout training as adaptive regularization.' Advances in Neural Information Processing Systems. 2013.\\n[5] Graves, Alex. 'Generating sequences with recurrent neural networks.' arXiv preprint arXiv:1308.0850 (2013).\"}", "{\"title\": \"review of On Fast Dropout and its Applicability to Recurrent Networks\", \"review\": \"This paper rederives the fast dropout method which is a deterministic alternative to the originally proposed stochastic dropout regularization for neural networks. Then the authors apply fast dropout to recurrent neural networks (RNN) on two different datasets.\\n\\nThe main focus of the paper is the derivation of the fast dropout training with almost no link to RNNs until the experiment sections, which makes the paper consists of two disconnected components. The authors could have presented experiments on feedforward NN without affecting the smoothness of the paper at all.\\n\\nThe results are expected. Dropout, on small datasets, does better than baseline systems which doesn\\u2019t use any stochastic regularization and is as good as ones that does (Graves 2013). In table 2, it is mentioned that assumptions made in RNN-NADE system is also applicable to FD. The authors are encouraged to add these assumptions to get better results.\\n\\nThe results sections (3.1.2 and 3.2.2) still need more work. There are no analysis or discussion about the achieved results. Analysis of the effects of network structure, dropout rate, and other hyper-parameters are missing. 
The tables' order is reversed (table 3 is referenced first then table 2 and table 1).\\n\\nFor the results in 3.1.2, please clarify if there is a test data on which you measure NLL or you measure performance on the training data only (there is no training/test division menitoned for the music datasets).\"}", "{\"review\": \"Sorry, the reply above was intended for the other review.\"}", "{\"review\": \"We regret to inform you that we made mistakes in the experimental procedure regarding the Penn Treebank corpus. Due to these, our submitted results on that benchmark are *invalid*.\\n\\nThe experimental results on the midi tasks are not affected by this. We will shortly submit a version to arxiv from which the relevant sections are removed. Sadly, due to the size of the benchmark we are not be able to rerun the experiments within a reasonable time to be considered for the final decision about the paper.\\n\\nTo allow you to consider the case, we decided to inform you as soon as possible about this issue. We hope that the chairs and reviewers will still consider our work for the conference. At the same time, however, we realize that this issue alone may lead to a rejection of the paper.\"}", "{\"review\": \"Overall interesting paper. I just have some reservation about your claim that:\\n\\n'[...] RNN-NADE (Boulanger-Lewandowski et al., 2013), makes speci\\ufb01c assumptions about the data (i.e. binary observables). RNNs do not attach any assumptions to the inputs.'\\n\\nThe RNN-RBM/RNN-NADE actually function equally well with real-valued inputs and outputs by using Gaussian RBMs or RNADE, as was done with motion capture data in the original RNN-RBM paper (ICML 2012). In fact, your paper assumes binary variables following the Bernoulli distribution (first equation of section 3.1.1) which, if anything, makes stronger assumptions about the data.\"}", "{\"reply\": \"Dear reviewer b50c,\\n\\nWe will address your points one by one.\\n\\n> This paper rederives the fast dropout method which is a deterministic alternative to the originally proposed stochastic dropout regularization for neural networks. Then the authors apply fast dropout to recurrent neural networks (RNN) on two different datasets.\\n\\nThe derivation is actually a minor part of the paper (about a single page out of nine) which is necessary to prepare common ground for the understanding of section 2.2.2, which is novel.\\n\\n> The main focus of the paper is the derivation of the fast dropout training with almost no link to RNNs until the experiment sections, which makes the paper consists of two disconnected components.\\n\\nWhile it is true that the analysis (Section 2.2.2.) is more general than RNNs, the consequence of it is particularly important for RNNs: the regulariser does not bias the dynamics. This is mentioned in the abstract, main text and partially in the conclusion, but probably not prominently enough.\\n\\n> The authors could have presented experiments on feedforward NN without affecting the smoothness of the paper at all.\\n\\nWe did not include any experiments with standard feedforward MLPs since this has been done extensively in [1]; we do not feel that there is something that we could do better and do not see any novel hypothesis for MLPs we could test.\\n\\n> The results are expected. Dropout, on small datasets, does better than baseline systems which doesn\\u2019t use any stochastic regularization and is as good as ones that does (Graves 2013). \\n\\nWithout the analysis one cannot know about a bias of dynamics. 
Indeed, recent work [4] has shown that non-fast dropout might bias the dynamics due to its relation to L2 regularization. Thus we believe that the results are far form expected.\\nFD-RNNs improve upon the state of the art on Penn Treebank character prediction set in [5], but the result table probably needs to be clearer.\\n\\n> In table 2, it is mentioned that assumptions made in RNN-NADE system is also applicable to FD. The authors are encouraged to add these assumptions to get better results.\\n\\nApplying FD to RNN-NADE is certainly interesting by itself, but well beyond the scope of this paper.\\n\\n> The results sections (3.1.2 and 3.2.2) still need more work. There are no analysis or discussion about the achieved results. Analysis of the effects of network structure, dropout rate, and other hyper-parameters are missing. The tables' order is reversed (table 3 is referenced first then table 2 and table 1).\\n\\nWe will add the actual hyper parameters obtaining the results to the appendix of an upcoming version. We will also add a few sentences on characteristics of the estimated parameters, such as the Eigenvectors of the transition matrix.\\n\\nMore specifically, we will show empirically that the transition matrices found during optimisation inhibit rather big Eigenvalues (>10) which is unlikely when classic regularisers such as weight decay are used.\\n\\n> For the results in 3.1.2, please clarify if there is a test data on which you measure NLL or you measure performance on the training data only (there is no training/test division menitoned for the music datasets).\\n\\nThe results are obtained on the same splits as in the previous works by [2, 3]. We mention the use of test data for evaluation twice (in the table caption and the beginning of the section) but the information is probably not well placed. We will update the experimental section to be more clear.\\n\\n\\nThanks for your efforts; we will address the issues in an upcoming version later this week.\\n\\n\\n\\n[1] Wang, Sida, and Christopher Manning. 'Fast dropout training.' Proceedings of the 30th International Conference on Machine Learning (ICML-13). 2013.\\n[2] Bengio, Yoshua, Nicolas Boulanger-Lewandowski, and Razvan Pascanu. 'Advances in optimizing recurrent networks.' arXiv preprint arXiv:1212.0901(2012).\\n[3] Boulanger-Lewandowski, Nicolas, Yoshua Bengio, and Pascal Vincent. 'Modeling temporal dependencies in high-dimensional sequences: Application to polyphonic music generation and transcription.' arXiv preprint arXiv:1206.6392 (2012).\\n[4] Wager, Stefan, Sida Wang, and Percy Liang. 'Dropout training as adaptive regularization.' Advances in Neural Information Processing Systems. 2013.\\n[5] Graves, Alex. 'Generating sequences with recurrent neural networks.' arXiv preprint arXiv:1308.0850 (2013).\"}", "{\"reply\": \"Hello Reviewer fe58,\\n\\nThank you for your comments. We will reply to certain points raised below.\\n\\n> Right now, the analysis says, it all boils down to delta_a. But how can we relate delta_a to situations that occur in practice? I.e., can you say, delta_a is positive in situtaion a, and negative in situations b? 
Perhaps the analysis already allows for such conclusions, but I don't see them, and so it would be nice to inculde something like that.\\n\\nWe only gave an informal notion in the bulleted list in section 2.2.2, which we will illustrate with qualitative plots in an upcoming version later this week.\\nFurther analysis of delta_a requires assumptions about the network architecture (outgoing units, transfer functions, loss function), which we believe to be beyond the scope of this work--we already are beyond the soft page limit and do not know what to cut instead.\\n\\n> Also, it is worth stating that a global regularizer may not exist, and the contribution of dropout to the update may not be the derivative of any gradient.\\n\\nCan you expand on what you have in mind?\\n\\n\\nAlso, thanks for the typos. The second typo actually was not only a typo but an error; luckily, the consequences of that are minor and the results of that section still stand. We will correct this soon.\\n\\n\\nThanks for your comments.\"}", "{\"title\": \"review of On Fast Dropout and its Applicability to Recurrent Networks\", \"review\": \"The paper applies the fast dropout of Wang and Manning to RNNs, obtaining excellent results\\non two standard RNN datasets. This work also provieds an interesting analysis of fast droupt,\\npresenting it as an explicit regularization.\", \"pros\": \"The results are good. The paper convincingly shows that fast dropout is effective\\nfor training RNNs. I like the equation E[dJ/da * da/dw_i] = dJ/dE[a] * dE[a]/dw_i + dJ/dV[a] * dV[a]/dw_i.\\nIt helps reason about stochastic systems such as this.\", \"cons\": \"The novelty lies almost entirely in the new analysis of dropout. I also wish the analysis could tell\\nus more about dropout. The analysis establishes that the variance may increase or decrease in response to delta_a.\\nBut what is it? That's the most interesting part. We need to know what happens to delta_a in order to be able\\nto say nontrivial things about dropout. Right now, the analysis says, it all boils down to delta_a. But how \\ncan we relate delta_a to situations that occur in practice? I.e., can you say, delta_a is positive in situtaion a,\\nand negative in situations b? Perhaps the analysis already allows for such conclusions, but I don't\\nsee them, and so it would be nice to inculde something like that. \\n\\nAlso, it is worth stating that a global regularizer may not exist, and the contribution of dropout to the update\\nmay not be the derivative of any gradient.\", \"there_are_a_few_typos\": \"Page 6, sampling: hat a = E[a] + s * sqrt{V[a]}, and not E[a] + s * V[a] as written in the text. I worry\\nthat this may have caused an error in the remanider of the sampling section, but I am not sure because I didn't\\nunderstand it fully.\\n\\nLikewies, Eq. 1 should be E[a], not V[a].\"}", "{\"reply\": \"Good day reviewer fc8d,\\n\\nWe agree that our work is not groundbreakingly novel. It is just the transfer of a regulariser to a different model; the justification, however, is new to the best of our knowledge.\\n\\nWe did not experiment with AWN-RNNs since to the best of our knowledge these models are rather hard to train. E.g. you need to pretrain with non-AWN, then apply AWN, and use the MAP solution subsequently for optimal results (compare [1]). We fear that not being experts with this method due to the lack of practical experience might lead to results that are not representative and would not be fair for AWN-RNN.\\n\\nWe stand corrected--AWN is not an MCMC method. 
We will fix this in the next iteration of the work.\\n\\nThank you for the typos, we will correct them.\\n\\n\\nBest,\\n-Justin\\n\\n\\n[1] Graves, Alex. 'Generating sequences with recurrent neural networks.' arXiv preprint arXiv:1308.0850 (2013).\"}", "{\"review\": [\"We have just uploaded a new version of the paper to arxiv, which will be available at Tue, 18 Feb 2014 01:00:00 GMT.\", \"We address various concerns of the reviewers. Here is a break down of the improvements:\", \"added figures and text showing the behaviour of fast dropout units (especially delta_k) in various situations,\", \"added an analysis of the evolution of the recurrent weight matrix during training, which supports our theoretical findings,\", \"calculation in section 2.2.2 'sampling' corrected,\", \"added hyper parameters used for the experiments in the appendix,\", \"included results from [1] for completeness,\", \"numerous typos and errors fixed.\", \"We want to thank the anonymous reviewers, since we believe their comments to have improved the work.\", \"[1] Pascanu, Razvan, et al. 'How to Construct Deep Recurrent Neural Networks.'arXiv preprint arXiv:1312.6026 (2013).\", \"APA\"]}", "{\"review\": \"We have added a new version recently.\\n\\nThis version reports new experimental results from FD-RNNs on the music data sets where the dropout rates were chosen by crossvalidation. This leads to improved results.\"}", "{\"title\": \"review of On Fast Dropout and its Applicability to Recurrent Networks\", \"review\": \"This paper is a rather straightforward extension of fast dropout to standard RNNs. Maybe too straightforward, it is not very groundbreakingly novel. However, I like the results, and the application of the method to RNNs is useful to know about for other researchers. The authors remark that a comparison for standard RNNs (rather than LSTMs) with adaptive weight noise is lacking -- why have the authors not tried this comparison I wonder, as it seems highly appropriate in this context and doable in their setup.\\nThere are a few points in the text that should be improved. For example, calling Graves' adaptive weight noise method MCMC is questionable in my view. Also, there are quite a few typos:\", \"abstract\": \"absence -> absence\", \"first_paragraph\": \"remove 'so-called'\", \"second_paragraph\": \"Contrary -> In contrast\", \"last_par_of_sec_1\": \"reason -> discuss\\nfirst par sec 2.2.2: functionc ->function.\\nsec 2.2.2: 'an subsequently' -> 'and subsequently'\\npage six, line 11: 'were *** is defined as' -> 'where *** is defined as'\"}" ] }
DnsBnbl6TQD6t
Learning High-level Image Representation for Image Retrieval via Multi-Task DNN using Clickthrough Data
[ "Wei-Ying Ma", "Tiejun Zhao", "Kuiyuan Yang", "Wei Yu", "Yalong Bai" ]
Image retrieval refers to finding relevant images from an image database for a query, which is considered difficult due to the gap between the low-level representation of images and the high-level representation of queries. Recently developed Deep Neural Networks shed light on automatically learning high-level image representations from raw pixels. In this paper, we propose a multi-task DNN for image retrieval, which contains two parts, i.e., query-sharing layers for image representation computation and query-specific layers for relevance estimation. The weights of the multi-task DNN are learned on clickthrough data by Ring Training. Experimental results on both simulated and real datasets show the effectiveness of the proposed method.
[ "dnn", "image representation", "image retrieval", "clickthrough data", "representation", "layers", "relevant images", "image database", "query" ]
submitted, no decision
https://openreview.net/pdf?id=DnsBnbl6TQD6t
https://openreview.net/forum?id=DnsBnbl6TQD6t
ICLR.cc/2014/conference
2014
{ "note_id": [ "9M6x9jeOhSCO_", "ybOCyBR3ijyWz", "O9mROjhpZ_Lpw", "pp6Q9YYzIx9Tn", "x1YC1qLv58Zvi", "3FowFUkEs_3Xh" ], "note_type": [ "review", "review", "comment", "comment", "comment", "review" ], "note_created": [ 1391806080000, 1391837220000, 1390993140000, 1390993380000, 1390993140000, 1390861260000 ], "note_signatures": [ [ "anonymous reviewer 2839" ], [ "anonymous reviewer 0514" ], [ "Wyvern Bai" ], [ "Wyvern Bai" ], [ "Wyvern Bai" ], [ "anonymous reviewer e3ba" ] ], "structured_content_str": [ "{\"title\": \"review of Learning High-level Image Representation for Image Retrieval via Multi-Task DNN using Clickthrough Data\", \"review\": \"Pros:\\n\\u2022\\tThe use of multi-task DNNs for clickthrough data makes sense and provides good results\", \"cons\": \"\\u2022\\tThe idea of multi-task DNNs has been explored before in the literature and is not new. For example, in speech in multi-lingual speech processing this has been explored before.\", \"othis_paper_does_joint_training_of_the_shared_and_task_specific_components\": \"K. Vesley et al, \\u201cThe Language-independent bottleneck features,\\u201d in Proc. ICASSP 2012.\", \"othis_paper_also_does_joint_training\": \"Z. Tuske et al, \\u201cInvestigation on Cross-and Multilingual MLP Features under Matched and Mismatched Acoustical Conditions,\\u201d in Proc. ICASSP 2013.\", \"othis_paper_does_separate_training_of_shared_and_task_specific_components\": \"Samuel Thomas, Sriram Ganapathy and Hynek Hermansky, Multilingual MLP Features For Low-resource LVCSR Systems, ICASSP, Kyoto, Japan, March 2012\\n\\u2022\\tDetails of and Ring training are not clearly described. Also, relating this to prior art and justifying why this is the best approach is missing.\\n\\u2022\\tThe paper could be written much better. There are many spelling/grammar mistakes in the paper\", \"here_are_my_comments_per_section\": \"\\u2022\\tSection 1, page 1: expand on SIFT, HOG and LBP\\n\\u2022\\tSection 1, page 1: Your statement on CNNs being good doesn\\u2019t fit with the rest of the sentence\\n\\u2022\\tSection 1, page 2: exiting \\u2192 existing\\n\\u2022\\tSection 2: There needs to be a discussion on Multi-task DNNs relation to prior work and why your idea is novel. For example, see the papers listed above for speech processing.\\n\\u2022\\tSection 2, page 3: What is the loss function that you use. \\n\\u2022\\tSection 3, page 3: I found the discussion of Ring Training very confusing. You should describe the algorithm 1 in more detail in the text. Also, ring training is not the only way to train Multi-task DNNs. Why did you use this approach as opposed to other ideas in the literature, and why is it potentially better?\\n\\u2022\\tSection 4, page 3 CIFRA\\u2192 CIFAR\\n\\u2022\\tSection 5.2, page 6: Why did you only add dropout to the first fully connect layer? Did you tune this optimally? Sometimes it helps to add dropout to multiple layers. You should state if it\\u2019s tuned properly or not.\\n\\u2022\\tSection 5.3 page 6: Provide a reference for DCG\"}", "{\"title\": \"review of Learning High-level Image Representation for Image Retrieval via Multi-Task DNN using Clickthrough Data\", \"review\": \"This paper proposes a multi-task deep neural network approach to solve the task of image retrieval. A dataset of clickthrough data is used to train the network.\\n\\nThe main contribution of this paper is the proposed multi-task DNN method for image retrieval and the ring training. 
This multi-task approach consists of adding weights specific to each query (basically a supplemental fully-connected layer) before binary classification. The authors compare their approach to binary DNNs and multi-class DNN. However, I would have like to see a multi-label DNN as well for comparison.\", \"quality\": \"I found the quality of the language passable. There were several syntactic, grammatical errors and typos which made the paper hard to understand. The paper could have used more polishing. In general, I have found the paper hard to read, and the authors did not get their points through clearly.\", \"general_comments\": \"What loss function was used? I don't understand why the authors did not give it explicitly.\\n\\nIn the ring training pseudo code, it is not mentioned how j is defined, is there an iteration on all values, or is it chosen randomly? It makes a lot of difference how j is treated if the classes are unbalanced. This should be explained clearly.\\n\\nIn section 4, the architecture of the network is presented, but nothing is mentioned about which weights were query-specific in the multi-task case for the CIFAR-10 experiment.\\n\\nsection 4.2: 'In general, binary DNN performs consistently worse for the severe overfitting problem.' This sentence does not make a lot of sense, and does not reflect the results very well.\\n\\nI don't really see the point of using dataset 2. The difference in results between d1 and d2 are small, and do not, in my opinion, demonstrate what the authors claim it shows. I think it only clutters the paper.\\n\\nThe authors use the term significantly without showing confidence intervals. Perhaps another term should be used.\\n\\nSection 5.1: 'multi-class DNN is infeasible for such large number of queries'. This would require more explanation. Why does the multi-task approach scale better than the multi-class approach? Since the parameters are not shared, doesn't the multi-task DNN require even more parameters than the multi-class? If this is not the case, please explain more clearly your point.\\n\\nIn section 5.3, why not use the activation of the classification layer as an affinity measure instead of training SVMs\"}", "{\"reply\": \"Thank you for your comments.\\n\\nYou are right that the 2-way softmax is similar to sigmoid with 1 unit. Actually we used the activity function like sigmoid and tanh but did not get better result than using 2-way softmax, thus we used 2-way softmax.\\n\\nOur experiments results on CIFAR-10 show that the multi-class DNN can not work well when the data is heavy tail. The experiments results in table1 showed Multi-task DNN + ring training was better than Multi-class DNN signally in dataset_1, which is a heavy tail distributed dataset. Error distributions of Multi-class DNN is shown in figure, it showed that the data distribution can effect predictions of multi-class DNN greatly. Meanwhile the experiments in dataset_2 show that trying to discriminate categories describing the same concept will hurt multi-class DNN, and there are many queries in practice are synonymous.\\n\\nFor mulit-class DNN, the number parameters of last full connect layer with millions outputs for million queries is too large, thus we described in the paper that the multi-class DNN is infeasible for this situation. 
\\n\\nAbout why the SVM needed, we considering that the bag of word are trained by SVM, to explain the feature learned by our method is better than bag of words we also used SVM to train the ranker.\\n\\nThank you again for your comments.\"}", "{\"reply\": \"Sorry about my mistake. Error distributions of Multi-class DNN is shown in figure 5.\"}", "{\"reply\": \"Thank you for your comments.\\n\\nYou are right that the 2-way softmax is similar to sigmoid with 1 unit. Actually we used the activity function like sigmoid and tanh but did not get better result than using 2-way softmax, thus we used 2-way softmax.\\n\\nOur experiments results on CIFAR-10 show that the multi-class DNN can not work well when the data is heavy tail. The experiments results in table1 showed Multi-task DNN + ring training was better than Multi-class DNN signally in dataset_1, which is a heavy tail distributed dataset. Error distributions of Multi-class DNN is shown in figure, it showed that the data distribution can effect predictions of multi-class DNN greatly. Meanwhile the experiments in dataset_2 show that trying to discriminate categories describing the same concept will hurt multi-class DNN, and there are many queries in practice are synonymous.\\n\\nFor mulit-class DNN, the number parameters of last full connect layer with millions outputs for million queries is too large, thus we described in the paper that the multi-class DNN is infeasible for this situation. \\n\\nAbout why the SVM needed, we considering that the bag of word are trained by SVM, to explain the feature learned by our method is better than bag of words we also used SVM to train the ranker.\\n\\nThank you again for your comments.\"}", "{\"title\": \"review of Learning High-level Image Representation for Image Retrieval via Multi-Task DNN using Clickthrough Data\", \"review\": \"The paper presents an architecture to learn simultaneously multiple image rankers, all of which sharing low-level layers of a deep neural network, while keeping their last layer separate for each ranking task. Since the distribution of images per query is far from uniform, this helps 'poor' queries learning good image representations from 'rich' queries. The query specific model is a 2-way softmax (good or bad image for that query). This is not exactly what I would call a ranker, but rather a classifier. In fact, I didn't see how this architecture differs from a DNN trained with logistic regression (so that each query has one output unit, which should be 1 if the image corresponds to the query, and 0 otherwise); it seems that a softmax with 2 units is similar to a single sigmoid unit, no?\\nThe paper shows two series of experiments, one on CIFAR-10, and one on MSR-Bing Image Retrieval challenge dataset, which is a real image ranking problem.\\nResults show that the proposed approach works better than the two compared approaches (the Binary DNN consists of one DNN per query, which cannot work when the data is heavy tail; the multi-class DNN, on the other hand, seems very similar to the proposed approach, and I did not understand why this one would not scale to millions of queries, while the proposed approach would). Finally, apparently, an SVM is trained after the features are fixed. I did not understand this part neither. Why is the SVM needed anyway?\\nThe paper is not very well written, with a poor level of english, and several details missing, both in describing the algorithm and in the experimental section.\"}" ] }
uuFh8Ny0WPw0B
Some Improvements on Deep Convolutional Neural Network Based Image Classification
[ "Andrew Howard" ]
We investigate multiple techniques to improve upon the current state of the art deep convolutional neural network based image classification pipeline. The techniques include adding more image transformations to training data, adding more transformations to generate additional predictions at test time, and using complementary models applied to higher resolution images. This paper summarizes our entry in the ImageNet Large Scale Visual Recognition Challenge 2013. Our system achieved a top 5 classification error rate of 13.55% using no external data, which is over a 20% relative improvement on the previous year's winner.
[ "improvements", "image classification", "multiple techniques", "current state", "image classification pipeline", "techiques", "image transformations", "data" ]
submitted, no decision
https://openreview.net/pdf?id=uuFh8Ny0WPw0B
https://openreview.net/forum?id=uuFh8Ny0WPw0B
ICLR.cc/2014/conference
2014
{ "note_id": [ "22dWxQ0r7e3vM", "czlkm7LZMQKwJ", "GGUzJ4SdzZGgQ" ], "note_type": [ "review", "review", "review" ], "note_created": [ 1391468760000, 1391829960000, 1390946400000 ], "note_signatures": [ [ "anonymous reviewer a60d" ], [ "anonymous reviewer 98fd" ], [ "anonymous reviewer e5ff" ] ], "structured_content_str": [ "{\"title\": \"review of Some Improvements on Deep Convolutional Neural Network Based Image Classification\", \"review\": \"This paper goes over data augmentation techniques that give 20% relative improvement over last year's winner of ILSVRC.\\n\\nFirst set of improvements is to augment training data with more kinds of distortions -- 2x zoom, different cropping approach. The second set of improvements is to combine predictions on multiple transformed versions of the image.\\n\\nBecause there are 90 transformed versions of the image, computing all 90 is too expensive, so they do a greedy approach where they combine predictions until there is 'no additional improvement.'\\n\\nThe combination approach is missing important details. How are predictions combined? What does 'no additional improvement' mean in regards to greedy method? This greedy method works on individual examples, but without knowing true labels, what's an 'improvement'? Graph in Figure 3 shows that greedy method slightly outperforms the method which uses 'full 90 predictions', which seems counter-intuitive, but then I can't tell if it's an error because precise description of combining method is missing\\n\\nOverall this paper is slightly interesting because it measures the impact of additional transformations, but hardly novel since these approaches were tried before. Also, it is interesting because it shows that ImageNet is going the way of 'MNIST research', where authors focus on incremental improvements tailored for a specific dataset (in this case -- better cropping) and might not transfer to other datasets\"}", "{\"title\": \"review of Some Improvements on Deep Convolutional Neural Network Based Image Classification\", \"review\": [\"This paper presents a few simple tricks to improve over Krizhevsky's model:\", \"use full image instead of cropped square by Krizhevsky.\", \"jitter in image contrast, brightness and color.\", \"add scales at testing time (+ greedy algorithm to keep it fast).\", \"extra high-resolution model trained on scaled up patches (essentially adding scales at training time).\", \"turning off dropout for the last learning rate decrease.\"], \"novelty\": \"nothing new here, just the application of existing techniques.\", \"pros\": [\"very good result on ILSVRC13 classification.\"], \"cons\": [\"the improvements proposed are not novel but the paper can be made interesting if details are given. For example, instead of just mentioning contrast, brightness and color jitter, explain precisely what operations are taking place here. And what do the .5 and 1.5 values correspond to?\", \"the greedy algorithm is missing some details: 'starts with the best prediction', how is best defined here? highest confidence? out of which views? how many views are initially computed?\", \"the training images for the high-res model differs from run-time, scaling up patches of 128x128 is in my opinion not as good as using the full-resolution image directly like at run-time. Although this approach possibly introduces noise that is beneficial for the case of images smaller than 224x224 which need to be scaled up at test time. 
And as mentioned, this augments the data considerably so it is a good thing.\"]}", "{\"title\": \"review of Some Improvements on Deep Convolutional Neural Network Based Image Classification\", \"review\": \"This paper details some training tricks such as transformations, combining multiple scales and different views for the ILSVRC challenge, which performed pretty well compared to last year's winner.\\n\\nWhile there is nothing dramatically novel in terms of deep learning model or theory, empirical performance is the invisible hand that guides deep learning and generates buzz. So it is important to publish practical tricks of the trade and implementation details that can benefit all researchers.\"}" ] }
Wi9tWlxh4Jwu6
Understanding Deep Architectures using a Recursive Convolutional Network
[ "David Eigen", "Jason Rolfe", "Rob Fergus", "Yann LeCun" ]
Convolutional neural network models have recently been shown to achieve excellent performance on challenging recognition benchmarks. However, like many deep models, there is little guidance on how the architecture of the model should be selected. Important hyper-parameters such as the degree of parameter sharing, number of layers, units per layer, and overall number of parameters must be selected manually through trial-and-error. To address this, we introduce a novel type of recursive neural network that is convolutional in nature. Its similarity to standard convolutional models allows us to tease apart the important architectural factors that influence performance. We find that for a given parameter budget, deeper models are preferred over shallow ones, and models with more parameters are preferred to those with fewer. Surprisingly and perhaps counterintuitively, we find that performance is independent of the number of units, so long as the network depth and number of parameters are held constant. This suggests that, computational efficiency considerations aside, parameter sharing within deep networks may not be so beneficial as previously supposed.
[ "number", "parameters", "deep architectures", "units", "recursive convolutional network", "excellent performance", "recognition benchmarks", "many deep models", "little guidance" ]
submitted, no decision
https://openreview.net/pdf?id=Wi9tWlxh4Jwu6
https://openreview.net/forum?id=Wi9tWlxh4Jwu6
ICLR.cc/2014/conference
2014
{ "note_id": [ "AAtgFjSp6nApN", "ssMZ30LIpVsae", "uH0bHzxAmkHM_", "GF39Fv2SY4F1C", "HV4bj2CweoV81", "FITlI4zylVaPv", "ogQ7gv0KXZgK2", "44Rk6dzZjC6vT", "qCkD2em84Nq8g", "G4hZGVsiaodxI", "-3kx3lZGx190L", "aNPJNi4dvFz61", "Shb3wykEAsS8L" ], "note_type": [ "review", "comment", "comment", "comment", "review", "comment", "review", "review", "review", "review", "review", "comment", "comment" ], "note_created": [ 1391476380000, 1389138840000, 1392877440000, 1392877740000, 1388847600000, 1392877320000, 1391819580000, 1392879600000, 1391843580000, 1393891920000, 1389398100000, 1392877560000, 1392877200000 ], "note_signatures": [ [ "anonymous reviewer 4423" ], [ "David Eigen" ], [ "David Eigen" ], [ "David Eigen" ], [ "David Krueger" ], [ "David Eigen" ], [ "anonymous reviewer 975a" ], [ "David Eigen" ], [ "anonymous reviewer bd74" ], [ "anonymous reviewer 4423" ], [ "David Krueger" ], [ "David Eigen" ], [ "David Eigen" ] ], "structured_content_str": [ "{\"title\": \"review of Understanding Deep Architectures using a Recursive Convolutional Network\", \"review\": \"Summary of contributions: Studies the effect of # of parameters, # of layers, and # of units on convolutional net performance. Uses recurrence to run nets that have e.g. more layers but not more parameters in order to distinguish the effects of these three properties.\", \"novelty\": \"moderate\", \"quality\": \"low\", \"pros\": \"-Nice empirical demonstration that more parameters helps.\", \"cons\": \"-Does not study dropout. I think dropout is really important for this kind of paper, because dropout has a strong effect on the optimal model size. Also, dropout is crucial part of the state of the art system on both CIFAR-10 and SVHN, so it seems out of place for a paper on how to set hyperparameters to good performance out of a neural net to disregard one of the most important techniques for getting good performance.\\n\\t-Insufficient tuning of hyperparameters.\\n\\t-Support for the claims in the abstract seems weak, with many experiments going against the claims\\n\\t-The stated goal is to provide guidance for how to set hyperparameters so that practitioners don\\u2019t have to resort to trial and error. But I don\\u2019t really see anything here that prevents that. For example, Fig 4a shows standard U-shaped curves for the # of layers hyperparameter. The paper says \\u201cadding more layers tends to increase performance\\u201d but this is only true on the left side of the U! The whole point of trial and error is to figure out where the bottom of the U is, and this paper completely ignores that.\\n\\t-The kind of parameter tying considered in this paper is not one that is typically used in practice, at least not for this kind of problem. The conclusions are therefore not all that helpful. i.e., the authors introduce a new form of parameter tying, and then show it isn\\u2019t useful. We don\\u2019t need to publish that conclusion, because no one is using this useless form of parameter tying anyway.\\n\\t-The authors don\\u2019t investigate the effect of the tiling range of tiled convolution, which is a form of control on the degree of parameter sharing that people actually use. It\\u2019d be much more interesting to study this form of parameter sharing. 
(This paper feels a bit like it started off as a \\u201cnew methods\\u201d paper advocating convolutional recurrence, and then when the new method didn\\u2019t perform well, the authors tried to salvage it as an \\u201cempirical investigation\\u201d paper, but the empirical investigation part isn\\u2019t really targeted at the methods that would be most useful to study)\", \"detailed_comments\": \"1.1 Related work:\\n\\n\\tYou should also mention \\u201cMulti-Prediction Deep Boltzmann Machines\\u201d, Goodfellow et al 2013. This paper uses recurrent networks on the image datasets MNIST and NORB. Like DrSAE, it is discriminative. It may be interpreted as a form of autoencoder, like the methods you mention in the second paragraph.\", \"2_approach\": \"Your approach is definitely not the first to use recurrence and convolution in the same model. It\\u2019s probably worth discussing similarities and differences to Honglak Lee\\u2019s convolutional DBN. He describes performing mean field inference in this model. The mean field computations are essentially forward prop in a convolutional recurrent architecture, but the connectivity is different than in yours, since each update reads from two layers, and some of the weight matrices are constrained to be the transpose of each other rather than being constrained to be equal to each other.\\n\\n\\tIt\\u2019s also probably worth discussing how you handle the boundaries of the image, since this has a strong effect on the performance of a convolutional net. Since you say all the layers have the same size, I\\u2019m assuming you implicitly pad the hidden layer with zeros when running convolution so that the output of the discrete convolution operation has the same size as the input.\\n\\n\\t\\n2.1 Instantiation on CIFAR-10 and SVHN\\n\\n\\tI don\\u2019t know what it means to put the word \\u201csame\\u201d in quotes. I\\u2019m assuming this refers to the zero padding that I described above, but it\\u2019s worth clarifying.\\n\\n2.2\\n\\tI think it\\u2019s fairly disappointing that you don\\u2019t train with dropout.\\n\\tHow did you choose this one fixed learning rate and momentum value? How do you know it doesn\\u2019t bias the results? For example, if you find that deeper models are better, are you really finding that deeper models are better in general, or are you just finding that deeper models are more compatible with this specific learning rate and momentum setting?\\n\\tIt seems especially important to tune the learning rate in this work because varying the amount of parameter sharing implies varying the number of gradient terms that affect each parameter. The speed at which the parameters move is probably much higher for the nets with many recurrent steps than it is for the nets with no recurrence.\\n\\n3.1\\n\\t\\u201cThat we were able to train networks at these large depths is due to the fact that we initialize all W to the identity\\u201d -> it\\u2019s not obvious to me that it should be hard to train convolutional rectifier networks at most of these depths. For example, Google\\u2019s house number transcription paper submitted to this conference at the same time trains a 12 layer mostly convolutional network with no mention of network depth posing a challenge or requiring special initialization. The maxout paper reports difficulty training a 7 layer rectifier net on MNIST, but that was densely connected, not convolutional. Was it only difficult to train the recurrent nets, or also the untied ones? 
This is important to explain, since if the recurrent nets are significantly harder to optimize, that affects the interpretation of your results.\\n\\tAre the higher layer weights for all of the networks initialized to the identity, or only the ones with tied parameters? Is it literally identity or identity times some scalar? If it\\u2019s literally identity rather than identity times some scalar, it might be too hard for SGD to shrink the initial weights and learn a different more interesting function. Have you tried other initializations that don\\u2019t impose such a strong hand-designed constraint, such as James Martens\\u2019 sparse initialization, where each hidden units gets exactly k randomly chosen non-zero incoming weights? This initialization scheme is also described as making it easier to train deep or recurrent nets, and it seems to me like it doesn\\u2019t trap the recurrent layer as being a fairly useless memory layer that mostly functions to echo its input.\\n\\n\\t\\u201cLikewise, for any given combination of feature maps and layers, the untied model outperforms the tied one, since it has more parameters.\\u201d I don\\u2019t agree with the claim that the untied model performs better because it has more parameters. This would make sense if the tied model was in the underfitting regime. But you have already said in the same paragraph that many of the tied models are in the overfitting regime. If you look at fig 2. there are several points where both the tied and untied model have 0 training error and the tied model has higher validation set error. If the correct story here is overfitting due to too many parameters, then the untied model should do worse. I suspect what\\u2019s going on here is something like the identity initialization being a bad prior, so that you fit the training set in a way that doesn\\u2019t generalize well, or maybe just your choice of a single momentum and learning rate setting for all experiments ended up benefiting the untied model somehow. For example, as I said above, the recurrent nets will generally have larger gradients on each parameter, so maybe the high learning rate makes the recurrent net adapt too much to the first few minibatches it sees. \\n\\n\\tFig 2\\n\\t\\tIn the abstract you say \\u201cfor a given parameter budget, deeper models are preferred over shallow ones.\\u201d It would be nice if on the plot on the left you evaluted points along the parameter budget contour lines instead of points on a grid, since the grid points don\\u2019t always hit the contour lines. As is, it\\u2019s hard to evaluate the claim from the abstract. However, I don\\u2019t see a lot of support for it. The best test error you get is toward the bottom right: 0.160 at the rightmost point in the second row from the bottom. Of course, this is the only point on that parameter budget contour, so it may just be winning because of its cost. However, if I look for the point with the most depth, I see one with 0.240 near the 2^18 contour line. 
At the bottom right of this contour line, the shallow but wide model gets 0.205.\\n\\t\\tOverall, here is my summary of all your contour lines:\\n\\t\\t\\t2^16: only one point on it\\n\\t\\t\\t2^17: Contradicts claim\\n\\t\\t\\t2^18: Contradicts claim\\n\\t\\t\\t2^19: Contradicts claim\\n\\t\\t\\t2^20: Supports claim (sort of, points aren\\u2019t that close to contour line)\\n\\t\\t\\t2^21: Supports claim (sort of, points aren\\u2019t that close to contour line)\\n\\t\\t\\t2^22: Supports claim (sort of, points aren\\u2019t that close to contour line)\\n\\t\\tSo it seems to me that this plot contradicts the claim from the abstract at least as much as it supports it.\", \"right_figure\": \"This supports the claim in your abstract.\", \"table_1\": \"While it does make sense to compare *these* experiments against methods that don\\u2019t use dropout or data augmentation, I don\\u2019t think it makes sense for these to be your only experiments. I think the case for excluding data augmentation from consideration is getting very weak. There is now a lot of commercial interest in using neural nets on very large datasets. Augmentation of small datasets provides a nice low-cost proxy for exploring this regime.\\nAs far as I know, the main reasons for not considering data augmentation are 1) data augmentation requires knowledge of the data domain, in this case that the input is an image and the output is invariant to shifts in the input. But you are already exploiting exactly that same knowledge by using a convolutional net and spatial pooling. 2) gaining improvements in performance by improving data augmentation techniques distracts attention from improving machine learning methods and focuses it on these more basic engineering tricks. But I\\u2019m not asking you to engineer new data augmentation methods here; you can just use exactly the same augmentation as many previous authors have already used on CIFAR-10 and SVHN.\\nI don\\u2019t think there is any valid case at all for excluding stochastic regularization from consideration. It doesn\\u2019t require any knowledge of the data domain and it is absolutely a machine learning technique rather than just an engineering trick. Moreover, it is computationally very cheap, and state of the art across the board. By refusing to study stochastic regularization you are essentially insisting on studying obsolete methods. The only regime in which stochastic regularization is not universally superior to deterministic backprop is in the extremely large data domain, which as academics you probably don\\u2019t have access to and you are also actively avoiding by not using data augmentation.\", \"fig_2_and_3_in_general\": \"I understand it\\u2019s too expensive to extensively cross-validate every point on these plots, but I think it\\u2019d be good to pick maybe 4 points for each plot (maybe the upper-left and lower-right of two different contour lines) and run around 10 experiments each for those 4 points. Overall that is 80 training runs, which I think is totally reasonable. The current plots are somewhat interesting but it\\u2019s hard to have much confidence that the trends they indicate are real. 
Obtaining higher confidence estimates of the real value of a small number of points would help a lot to confirm that the trends are actually caused by the # of feature maps and depth rather than compatibility with a fixed optimization scheme.\", \"section_4\": \"I don\\u2019t think the \\u201creceived wisdom\\u201d is that more depth is always better, just that the optimal depth is usually greater than 1.\\n\\tYou say your experiments show that more depth is better for a fixed parameter budget, but doesn\\u2019t Fig 2. (right) contradict this?\"}", "{\"reply\": \"Thanks for your comments.\\n\\nRe. M=1:\\n\\nWe believe the results we report using more common configuration ranges are practical and useful, although it could be interesting to explore whether the trends we observe hold at such extreme cases -- and if not, why, and at what point they break down.\\n\\nKeep in mind that M=1 does not mean that there is only one unit in each hidden layer, but rather 64 (one for each 8x8 spatial location in the feature map). Also, in the example you give (M=32,L=16), an M=1 model would need L=17,379 layers to match the parameter count, not 512. Still, for M=1, there is only a single kernel available for initial feature extraction, so the bottom-most features won't contain multiple edge orientations. It seems likely M=1 would perform poorly in this case, but for other small M this is not clear.\\n\\n\\n1.1:\\n\\nRecurrent/recursive are very much related, the difference being that recurrent nets are fed a new part of the input at each timestep; we will add this disambiguation. For references [24] and [23], the two groups are similar but not identical.\\n\\n\\n3.1:\\n\\n- 'match syntax with the previous sentence'\\ngood suggestion\\n\\n- 'this sentence could use some explaining'\\nWith a zero-centered initialization, it's possible to have vanishing gradients; the identity init avoids this by copying the activations up from the first layer initially (and gradients down from the top layer). Of course, the activations stop being copies of one another as training proceeds. We can add this explanation.\\n\\n- 'training error goes up as well'\\nThanks for the correction, the training error goes down only for the untied case; it indeed goes up a bit for the tied one.\\n\\n\\n3.2.1:\\n\\n- 'not true for 32 features on CIFAR-10'\\nThe M=32,L=4 case here does not fit the overall pattern. The trend is still downwards until L=8, but we can point this out.\\n\\n- Linear regressions for experiments 2 and 3 are a nice suggestion, which we will include. They are consistent with our interpretations, though 7(b) is somewhat weaker than the rest.\\n\\nFigure 6 (a) CIFAR-10 vary P:\", \"slope\": \"0.806380\", \"intercept\": \"0.008227\", \"r_value\": \"0.854079\", \"p_value\": \"0.001657\"}", "{\"reply\": \"Thanks for the clarifications. We believe overfitting is the predominant cause of test performance decreases in the range of network sizes we study; the fact that the train/test spread monotonically increases as layers increase is a good indication of this. However, there very well may be other effects, particularly for much larger numbers of layers like the L=16 case you point out, as we have not yet studied these cases in depth. We have added this to the discussion as well as experiments section.\"}", "{\"reply\": \"Thanks for your comments and critiques. 
We have responded to your questions here:\\n\\n'if the same conclusion holds when pooling is used'\\n\\nUnpooled convolutional layers are powerful tools in many current models, and our experiments apply directly in characterizing their performance behaviors. While we feel the principles we find are likely to extend to more complex cases as well, we agree that it's unclear how the results might change if pooling were used between layers. We have added this point to the discussion.\\n\\nNote that we assume here that by pooling, one means an operation that explicitly throws away spatial resolution (e.g. an 8x8xM hidden layer may be pooled into a 4x4xM). Aggregating over spatial regions is already done by the convolutions themselves, since they compute weighted averages over 3x3 regions.\\n\\n\\n'Does the network work around tying by dedicating some features to be used most of the time per specific hidden layer but not by the others?'\\n\\nWe looked into this by comparing several activation statistics for corresponding units between layers (e.g. number of nonzeros, mean activation, mean nonzero activation), but did not find much clear evidence of it. If it happens, it plays only a partial role in the network behavior.\\n\\n\\n'Don\\u2019t think table 1 is needed'\\n\\nWe put this in to demonstrate that despite its simplifications, our model still maintains good performance, so studying it is not a large a departure in terms of performance.\\n\\n\\n'Is there something specific you mean by 'after training'?'\\n\\nNo, thanks for this edit; we removed this sentence.\\n\\n\\n'the paper gets repetitive at points'\\n\\nThanks for the feedback; we have tightened up much of the writing in the new version.\"}", "{\"review\": \"I like that this paper attempts to disentangle the effects of different parameters. However, I find the argument that\\nmore parameters should be spent on more layers somewhat unconvincing. Taking this to the extreme suggests that every layer should have only 1 feature map, whereas you only go as low as 32. I don't think overfitting is the only reason a network with M=32,L=16 would (probably) outperform a model with M=1,L=512.\", \"some_specific_comments\": \"1.1:\", \"1st_sentence\": \"you say 'recursive networks' but the refernces you give are for reCURRENT networks. I would explicitly disambiguate the two, as they are easy to confuse.\\n\\nSocher et al [24] and [23] are not the same group or researchers. Saying 'more recently [23], they' implies they are. \\n\\n\\n3.1 'For SVHN, we used between 32 and 256 feature maps and between 1 and 8 layers beyond the \\ufb01rst, also incrementing by powers of 2' -- I would match syntax with the previous sentence for readability. ('For SVHN, we used M = 32,64,128, 256 and L = 1,2,4,8')\\n\\n'That we were able to train networks at\\nthese large depths is due to the fact that we initialize all W^l_m to the identity' - I feel this sentence could use some explaining\\n\\n'(except once they go beyond 4 layers for CIFAR-10, at which\\npoint they over\\ufb01t, as indicated by the still-decreasing training error).' - actually, the training error goes up as well from M=32, L=8 to M=32, L=16. \\n\\n\\n3.2.1\\n'For both CIFAR-10 and SVHN,\\nperformance increases as the number of layers increases, although there is an upward tick at 8 layers\\nin the CIFAR-10 curves due to over\\ufb01tting of the model.' 
- It looks like this is not true for 32 features on CIFAR-10.\\n\\n\\nWhy not run linear regressions on the results in figures 6 and 7 to quantify the effects and show how significant/unsignificant these factors are?\"}", "{\"reply\": \"Thank you for your comments. We have updated the paper with some major revisions, and it is now online. Responses to your comments are below.\"}", "{\"title\": \"review of Understanding Deep Architectures using a Recursive Convolutional Network\", \"review\": \"An analysis of deep networks is presented that tries to determine the effect of various convnet parameters on final classification performance. A novel convnet architecture is proposed that ties weights across layers. This enables clever types of analysis not possible with other architectures. In particular, the number of layers can be increased without increasing the number of parameters, thus allowing the authors to determine whether the number of layers is important or whether the number of parameters is important independently. (Normally, these are confounding factors that are hard to judge separately.) Several experiments are proposed that independently vary the number of maps, number of parameters, and number of layers. It is reported that while the number of maps appears to be irrelevant in this setup, the number of layers and number of parameters are very important [more is better!]\\n\\nThis is a pretty unorthodox but interesting form of analysis. I think it is worth highlighting that \\u201cnumber of layers\\u201d experiment, since I didn\\u2019t see immediately how you could do this with another tying strategy. The results confirm our intuitions about important parameters, but also suggest that perhaps weight-tying spatially is one place for improvement.\\n\\nI am a bit worried that there are caveats to this analysis that could be better analyzed. For example, section 3.2.2 shows the \\u201cuntied\\u201d system working better than \\u201ctied\\u201d in 3.2.2, but could this have more to do with the finicky nature of recurrent models (e.g., failure to find a good minimum) than the number of parameters? Section 3.2.3 might implicitly address this: for fixed number of layers and parameters, the tied model performs about the same. If it had performed worse again, we might have been tricked into thinking that the number of maps mattered, when it could have implied that the tied model itself was a worse performer. A bit more clarity about the potential caveats of this analysis and the implications of each experiment would help.\", \"one_experiment_i_was_surprised_not_to_see\": \"holding M and P fixed, compare a tied model with L layers to an untied model with fewer layers [to keep P constant]. This is apparently not what is done in 3.2.1, but might help address my concern above.\", \"pros\": \"Clever, novel analysis of interplay of deep network characteristics and their effect on classification performance.\\n\\nUseful rules of thumb that may benefit future work. Appears to confirm widely-held intuitions about depth and scale of models.\", \"cons\": \"The analysis method might be introducing effects that are not clear (e.g., the effect of using recurrence on the optimization problem).\\n\\nHard to know how these results will transfer to more typical convnets that use max pooling, LCN, etc.\", \"other\": \"In much of the analysis I thought it may be more useful to consider training error as the main metric for comparing networks. 
At this point, being able to achieve low training error is the main goal of fiddling with the model size, etc. and testing error/generalization is governed by a different bag of tricks [synthetic data, Mechanical Turk, unsupervised learning, extensive cross-validation, bagging, etc.]\"}", "{\"review\": \"We would like to thank all the reviewers and readers for their comments and concerns. In response, we have made many significant revisions, including a new abstract and discussion, and new plots including training error for each experiment. We have responded to each of your comments above in separate replies.\\n\\nThe new version is now available on arxiv.\"}", "{\"title\": \"review of Understanding Deep Architectures using a Recursive Convolutional Network\", \"review\": \"This paper analyzes the effect of different hyper-parameters for a convolutional neural network: the number of convolutional layers (L), the number of feature maps (M), the total number of free parameters in the model (P). The main challenge is the tight relation between these three hyper-parameters. To study the effect of each factor independently, a recurrent architecture is proposed where weights are tied between different convolutional layers so that the number of layers can be varied without changing the total number of parameters. Pooling is only applied to the very first layer but is not applied in any of the tied layers on top.\\n\\nWhile It is important to see experimental papers that offers analysis of the effect of different design parameters for neural networks like this one, weight tying across different convolutional layers is a bit artificial for this task (you scan the whole image at once in each layer). \\n\\nThe main take home message of this paper is that varying the number of feature maps is not important given that the number of free parameters and model depth are held constant. It is important to add experiments/arguments that show if the same conclusion holds when pooling is used for example.\", \"general_comments\": [\"What are the kind of features learned by the tied network vs the untied (normal) one? Does the network work around tying by dedicating some features to be used most of the time per specific hidden layer but not by the others? (as if it is working in an untied regime but with smaller number of feature maps per layer).\", \"Don\\u2019t think table 1 is needed because the paper is not aiming at achieving the best ever accuracy but rather exploring different factors affecting performance.\", \"Regarding writing, the paper gets repetitive at points, for example, the information in table 2 is stated in a paragraph in page 7. The same conclusions are stated in the same way multiple times.\", \"In page 8 \\u201cWe then compared their classification performance after training as described in Section 2.2.\\u201d Is there something specific you mean by \\u201cafter training\\u201d?\"]}", "{\"review\": \"After seeing the updated version, I'm still not sure what the main useful takeaway message is supposed to be. We already know that most hyperparameters have U-shaped performance curves, where they reduce both train and test error for a while and then start to increase test error due to overfitting. I also feel like the analysis method of introducing the new form of recurrent parameter sharing makes the picture cloudy. 
I think for further exploration in this direction it would make more sense to alter the number of parameters per layer by varying things like the spatial size of the kernels, the rank of the kernels, or to used tiled convolution and alter the tiling range.\"}", "{\"review\": \"I understand that M=1 still has 64 units, but I don't see why you would need 17,379 layers, although I haven't thought about it much. I think it would actually be nice to include in the paper an equation for the total number of parameters (given # of layers, kernels, tied/untied) of a model with the structure you are using.\\n\\nMy point here, though, is not that you should try to test these extreme cases. Rather, I am using it as a thought experiment to make an argument against (my understanding of) your interpretation of the results you present. \\n\\nIt appears to me that you are claiming that the ONLY reason performance decreases sometimes in deeper models (compared with shallower models with the same # of parameters) is because of overfitting. But I think this is incorrect, for two reasons. The M=1 thought experiment is one reason that demonstrates the intuition of my belief, the other (empirical) reason is that the training error increases from M=32, L=8 to M=32, L=16 (not just test error). If the increase in test error in this case were due to overfitting, I would expect the training error to increase as well. Indeed, in the write-up, you state that the decreasing training error demonstrates that overfitting is taking place '(except once they go beyond 4 layers for CIFAR-10, at which point they overfit, as indicated by the still-decreasing training error)', but since the training error is NOT still decreasing (for all cases), this interpretation does not appear correct to me.\"}", "{\"reply\": \"Thanks for your review and suggestions. In response to your comments:\", \"re\": \"training error:\\n\\nWe now include training error for all experiments in the updated version of the paper. As you point out, training error is not encumbered by generalization issues, and offers a clear view into the effects of model size.\\n\\n\\n'holding M and P fixed, compare a tied model with L layers to an untied model with fewer layers [to keep P constant].'\\n\\nAn untied model will have more parameters than a tied model of same M, even reducing L, since each layer adds parameters in the untied case but not the tied one. We compare different L for fixed P using each model in section 3.2.1.\\n\\n\\n'A bit more clarity about the potential caveats of this analysis and the implications of each experiment would help.'\\n\\nThanks for the suggestion, we have significantly revised our discussion to include more on the caveats and implications.\"}", "{\"reply\": \"A major concern was that the claims in our paper are too strong, given the somewhat narrow focus of our experiments. On reflection, we acknowledge this and have rewritten the abstract, introduction and discussion to better reflect this focus. We have also included some new plots which make a clearer case for the central arguments of our paper.\", \"re\": \"contour lines in Fig 2:\\n \\nWe feel it's possible the confusion about these conclusions may have stemmed from trying to distill them from Fig 2, rather than looking at Fig 5, where they are clearly visible. Fig 5 ('Experiment 1b' in the latest version) shows performance for these same models according to number of parameters. We have updated it now to include training error as well as test. Contours in Fig. 
2 correspond to vertical cross-sections in Fig 5. The trend that more layers tends to help performance is easily visible in Fig 5.\", \"data_augmentation_and_dropout\": \"This was a tough choice we made when designing the experiments. We wanted to include these initially, but also questioned, if we were to choose some of these regularizations, which ones should we use (dropout, maxout, translations, flips, scale jitters, etc)? These introduced many combinations we opted to exclude. \\n\\nHowever, we feel our experiments still present useful evidence without these regularizations. As Reviewer 975a points out, the key trends are more clearly shown in the training errors, since they are not concerned with generalization, which is affected by the numerous possible regularization approaches. We have thus revised the paper to include plots for both training and test errors.\", \"additional_references\": \"Thank you for these references; we have included them in the related work.\\n\\n\\n'The stated goal is to provide guidance for how to set hyperparameters so that practitioners don\\u2019t have to resort to trial and error. But I don\\u2019t really see anything here that prevents that.'\\n\\nWe don't aim to eliminate the need for trial-and-error, but do feel our experiments provide useful guidelines in helping to inform sizing choices in convolutional layers. The new abstract and introduction explain this better.\\n\\n\\n'The kind of parameter tying considered in this paper is not one that is typically used in practice, at least not for this kind of problem.'\\n\\nAlthough this is not a commonly used tying scheme, we feel it enables a unique study of convolutional layers' performance characteristics. In particular, we can see the effect of varying the number of layers alone, as Reviewer 975a points out. We furthermore find that varying the number of feature maps appears to affect performance predominantly through changing the number of parameters (as opposed to the representation space), which we think is a useful point in helping inform sizing choices.\\n\\n\\n'especially important to tune the learning rate in this work because varying the amount of parameter sharing implies varying the number of gradient terms that affect each parameter'\\n\\nWe chose the hyperparameters using multiple model sizes, picking values that worked well in all cases. For the tied case, we tried dividing the learning rate for the higher layers by the number of layers (i.e. number of gradient terms), as well as not dividing, finding that not dividing the learning rate worked better. As pointed out, there are many different models to run here, and it is impractical to run everything.\\n\\nIn addition, the trends we find are consistent across many combinations of model size and type. While perhaps each individual model might be able to perform a bit better with different settings, we find it very unlikely the overall trends would be much affected by different values.\\n\\n\\n'it\\u2019s not obvious to me that it should be hard to train convolutional rectifier networks at most of these depths. ... 
Was it only difficult to train the recurrent nets, or also the untied ones?'\\n\\nBoth untied and tied models ran into trouble with zero-centered gaussian initializations at some of the higher depths.\\n\\n\\n'Are the higher layer weights for all of the networks initialized to the identity, or only the ones with tied parameters?'\\n\\nBoth untied and tied models use this initialization.\\n\\n\\n'Have you tried other initializations that don\\u2019t impose such a strong hand-designed constraint, such as James Martens\\u2019 sparse initialization, where each hidden units gets exactly k randomly chosen non-zero incoming weights? '\\n\\nNo, we did not try this, but this is an interesting idea that we will try. Thank you. \\n\\n\\n'I don\\u2019t agree with the claim that the untied model performs better because it has more parameters. ... If the correct story here is overfitting due to too many parameters, then the untied model should do worse.'\\n\\nAdding more parameters helps until it causes more overfitting. The points you mention still appear to have benefitted from the extra parameters. Zero training error is not a hard cutoff that implies there will be more overfitting when parameters are added: Eventually adding parameters will make generalization worse, but it can also still improve performance for some time (as demonstrated by the majority of results in fig. 2).\\n\\n\\n'pad the hidden layer with zeros'; 'the word \\u201csame\\u201d in quotes'\\n\\nYes, we mean that the edges are padded with zeros; this has been clarified in the newer version.\"}" ] }
PiMICQ7tbB-Aa
Distinction between features extracted using deep belief networks
[ "mohammad pezeshki", "Sajjad Gholami", "Ahmad Nickabadi" ]
Data representation is an important pre-processing step in many machine learning algorithms. There are a number of methods used for this task such as Deep Belief Networks (DBNs) and Discrete Fourier Transforms (DFTs). Since some of the features extracted using automated feature extraction methods may not always be related to a specific machine learning task, in this paper we propose two methods in order to make a distinction between extracted features based on their relevancy to the task. We applied these two methods to a Deep Belief Network trained for a face recognition task.
[ "features", "methods", "distinction", "task", "important", "step", "many machine", "algorithms" ]
submitted, no decision
https://openreview.net/pdf?id=PiMICQ7tbB-Aa
https://openreview.net/forum?id=PiMICQ7tbB-Aa
ICLR.cc/2014/conference
2014
{ "note_id": [ "bblc1Mh3mjbQy", "d1q9dFL09p1R-", "77T27E5P2DtLx", "GscFI-qeFQsk_" ], "note_type": [ "review", "review", "review", "review" ], "note_created": [ 1391933580000, 1390945860000, 1391933580000, 1391475240000 ], "note_signatures": [ [ "anonymous reviewer 0067" ], [ "anonymous reviewer 0df6" ], [ "anonymous reviewer 0067" ], [ "anonymous reviewer 29d8" ] ], "structured_content_str": [ "{\"title\": \"review of Distinction between features extracted using deep belief networks\", \"review\": \"The submission investigates how to distinguish between relevant and irrelevant features for a given task, of a given trained model. The context seems to be that of face recognition. The proposed methods of analysis are not explained particularly well -- the \\u201cmethod of variances\\u201d and \\u201cmethod of relative activities\\u201d sections would benefit from some equations that explain the actual details on a more-than-intuitive level.\\n\\nThe experimental setup is rather unique (and strange) -- it looks like the authors are training on a dataset of faces corrupted by digit images, and then presenting either images or digits to measure the relative importance of the features using the two methods exposed in Section 3. The thresholds in 4.1 and 4.2 seem rather arbitrary -- any particular insights into how the results change as they change? Figures 2-a and 2-b, referenced in the same sections, seem non-existent, too, unless the authors refer to Figure 3 instead?\\n\\nFigure 3 is confusing and not sure it adds much value to the paper. While Figure 4 has some interesting results, in and of itself it does not make a paper. With some more analysis as to why and how their methods work, this body of work could be interesting to the community, as a way to analyze a trained model. But as it stands, it\\u2019s unclear whether their results are simply an artefact of the actual training data being bimodal and thus the hidden units modeling that efficiently...\"}", "{\"title\": \"review of Distinction between features extracted using deep belief networks\", \"review\": \"Two methods are proposed for making a distinction b/t extracted features based on relevancy. The first method is looking at the variance of hidden nodes when inputs vary on some aspects but stay the same in others. The second method is looking at the relative activity of the hidden nodes when additional input features are subsequently added. The hope is to identify some \\u201cGrandmother\\u201d like cells in a DBN which would be selective of faces and digits.\\n\\nOverall I think the paper is somewhat interesting in it\\u2019s motivation of probing the DBN to see the meaning of each hidden layer units. However, DBNs are notoriously hard to interpret due to the fact that it is a distributed representation. While i think the paper is in the right direction, the method presented are very simplistic and it is not clear what can be concluded. In particular, it is somewhat expected that if we remove the nodes which are modified by when a face image changes a face+digit, then what remains can better reconstruct the face. Is training performed sequentially on the three datasets 1,2, and 3?\\n\\nAnother issues is that the digit is just so much brighter than the face, making this problem easier, what happens if you use the digit with the similar intensity as the face? my suspicion is that the DBN would have a lot of trouble with that.\\n\\nPossible way to make this a good paper would be to use the hidden node discovered as a form of detector. 
It would perform worse than a discriminatively trained detector but it would be interesting to see a comparison.\"}", "{\"title\": \"review of Distinction between features extracted using deep belief networks\", \"review\": \"The submission investigates how to distinguish between relevant and irrelevant features for a given task, of a given trained model. The context seems to be that of face recognition. The proposed methods of analysis are not explained particularly well -- the \\u201cmethod of variances\\u201d and \\u201cmethod of relative activities\\u201d sections would benefit from some equations that explain the actual details on a more-than-intuitive level.\\n\\nThe experimental setup is rather unique (and strange) -- it looks like the authors are training on a dataset of faces corrupted by digit images, and then presenting either images or digits to measure the relative importance of the features using the two methods exposed in Section 3. The thresholds in 4.1 and 4.2 seem rather arbitrary -- any particular insights into how the results change as they change? Figures 2-a and 2-b, referenced in the same sections, seem non-existent, too, unless the authors refer to Figure 3 instead?\\n\\nFigure 3 is confusing and not sure it adds much value to the paper. While Figure 4 has some interesting results, in and of itself it does not make a paper. With some more analysis as to why and how their methods work, this body of work could be interesting to the community, as a way to analyze a trained model. But as it stands, it\\u2019s unclear whether their results are simply an artefact of the actual training data being bimodal and thus the hidden units modeling that efficiently...\"}", "{\"title\": \"review of Distinction between features extracted using deep belief networks\", \"review\": \"This work proposed two methods to make a distinction between features learned by DBN. The authors seemed to imply that DBN could be capable of learning compositional features from the training data. However, I do not think DBN can learn those features easily.\\nThe method of variances is kind of counter-intuitive. While a average face feature could be activated by most of face images, the variance of its activation could be very small. According this method, the average face feature would be seen as a non-face node.\"}" ] }
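Editorial note, not part of the record above: the reviews of this paper refer to a "method of variances" for deciding which DBN hidden units respond to faces versus digits, but no equations are given in this record. The sketch below is only a rough illustration of that idea as the reviewers describe it; the probe data, the variance threshold, and every name in it are hypothetical, and the paper's actual procedure may differ.

```python
import numpy as np

def variance_based_selection(hidden_activations, threshold=0.05):
    """Rank hidden units by activation variance over a probe set.

    hidden_activations: (n_probe_images, n_hidden_units) array of, e.g.,
    top-layer DBN unit activations when only one aspect of the input
    (say, the face part) varies across the probe images while the other
    aspect (the digit part) is held fixed. Units whose variance exceeds
    `threshold` are treated as responsive to the varied aspect.
    """
    variances = hidden_activations.var(axis=0)
    responsive_units = np.where(variances > threshold)[0]
    return variances, responsive_units

# Toy usage with purely synthetic activations: 200 probe images, 500 units.
rng = np.random.default_rng(0)
probe_acts = rng.uniform(size=(200, 500))
variances, face_units = variance_based_selection(probe_acts)
print(len(face_units), "units flagged as responsive to the varied aspect")
```

The reviewers' question about how results change with the threshold could be probed by sweeping `threshold` and tracking the size of the selected set.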
BId1QQRE1gQLZ
Multi-View Priors for Learning Detectors from Sparse Viewpoint Data
[ "Bojan Pepik", "Michael Stark", "Peter Gehler", "Bernt Schiele" ]
While the majority of today's object class models provide only 2D bounding boxes, far richer output hypotheses are desirable including viewpoint, fine-grained category, and 3D geometry estimate. However, models trained to provide richer output require larger amounts of training data, preferably well covering the relevant aspects such as viewpoint and fine-grained categories. In this paper, we address this issue from the perspective of transfer learning, and design an object class model that explicitly leverages correlations between visual features. Specifically, our model represents prior distributions over permissible multi-view detectors in a parametric way -- the priors are learned once from training data of a source object class, and can later be used to facilitate the learning of a detector for a target class. As we show in our experiments, this transfer is not only beneficial for detectors based on basic-level category representations, but also enables the robust learning of detectors that represent classes at finer levels of granularity, where training data is typically even scarcer and more unbalanced. As a result, we report largely improved performance in simultaneous 2D object localization and viewpoint estimation on a recent dataset of challenging street scenes.
[ "detectors", "priors", "sparse viewpoint data", "viewpoint", "data", "majority", "today", "object class models", "bounding boxes", "richer output hypotheses" ]
submitted, no decision
https://openreview.net/pdf?id=BId1QQRE1gQLZ
https://openreview.net/forum?id=BId1QQRE1gQLZ
ICLR.cc/2014/conference
2014
{ "note_id": [ "MxgLMMu8Ldj3S", "kJUPkk5LtOkjZ", "gJViuDG4cAJN1", "OaWGaOhCfpaFT", "qZMRBqtNdeuCz", "igPNrjZACmi9Q" ], "note_type": [ "review", "comment", "review", "review", "comment", "comment" ], "note_created": [ 1391575620000, 1392730680000, 1391819880000, 1391504340000, 1392730440000, 1392730560000 ], "note_signatures": [ [ "anonymous reviewer 8fcf" ], [ "Bojan Pepikj" ], [ "anonymous reviewer 138a" ], [ "anonymous reviewer 42a4" ], [ "Bojan Pepikj" ], [ "Bojan Pepikj" ] ], "structured_content_str": [ "{\"title\": \"review of Multi-View Priors for Learning Detectors from Sparse Viewpoint Data\", \"review\": [\"Summary\", \"This paper proposes to improve multi-view object detection and viewpoint estimation, particularly in the case where some viewpoints are undersampled, by introducing a multi-view prior into the standard SVM training framework used to learn many HOG-based object detectors. The work extends Gao et al. [16], which considers a specific case of the more general form of priors considered in this paper. (A citation is missing for a very relevant paper by Hariharan et al. [a] which also estimates covariances between HOG cells.) The paper presents an extensive empirical study of two newly proposed priors (SVM-Sigma and SVM-MV) compared with the SVM-SV prior of Gao et al. [16] and a standard SVM with no multi-view prior.\", \"The experimental results are dense and at times hard to parse, but SVM-Sigma shows a clear advantage on several benchmarks.\", \"Pros\", \"The topic is quite interesting and relevant to researchers working on object detection and coarse viewpoint estimation.\", \"The outline of the approach is clear.\", \"The experimental results look quite good.\", \"Cons / Questions for author feedback\", \"While the outline of the approach is clear, some of the details are hard to follow. A main confusion throughout the paper is what data is used to estimate the prior? For example, when the target is KITTI and the prior comes from 3D-Objects, are only the car objects used from the 3D-Objects dataset (I would assume so, but this was unclear to me).\", \"For the MV prior \\u201c...we first establish pairs of cells in the target model which satisfy a certain relation type ~n.\\u201d It\\u2019s clear how these pairs would be established when CAD data is used. How are these correspondences established in the case of KITTI data?\", \"Sec. 3.2: I think it would be good to clarify what is meant by \\u201cbootstrapping\\u201d (i.e. training multiple models on bootstrapped samples) to avoid confusion with the ill-named hard negative \\u201cbootstrapping\\u201d for training SVMs.\", \"Is setting a single value for C reasonable? The regularization (via the prior) is changing quite a bit and it\\u2019s not clear that keeping C constant is reasonable. That said, searching over C can only improve the already good results.\", \"In Table 2 the \\u2018base\\u2019 SVM-Sigma results are the same for 3D-Objects and KITTI priors---perhaps a bug in the table?\", \"[a] Bharath Hariharan, Jitendra Malik and Deva Ramanan. Discriminative decorrelation for clustering and classification. In ECCV 2012.\"]}", "{\"reply\": \"We would like to thank reviewer 138a for his valuable comments.\\n\\nThe reviewer expresses his concern about one of the main assumptions\\nin the paper - the use of a separate detector for each subclass and\\nview. 
Regarding this, we would like to point out two things:\\n\\nFirst, a fair number of state-of-the-art methods for viewpoint\\nestimation and object localization belongs to this category. For\\nexample, see references [25,28,34,40], as well as Tab. 1. They are all\\ndetectors which assume a dedicated detector per viewpoint.\\n\\nSecond, our results confirm that having a detector per subclass and\\nper view (more specific detectors) indeed leads to better performance\\n(Tab. 3, results for the baseline SVM (KITTI)): the car-type SVM\\ndetector is consistently better than the base SVM detector. Thus the\\nusage of more specific detectors is justified. Naturally, as we go\\nmore specific, the data distribution across viewpoints becomes scarce\\n- to compensate for that, we successfully employ stronger and\\nstructured regularization (SVM-Sigma). This is confirmed in Tab. 3:\\nSVM-Sigma is better than SVM in all comparable settings.\\n\\nAnother concern is the usage of prior geometric knowledge. In this\\nwork we model the geometric structure for a reason. We assume and\\nbelieve that there is an underlying cause (object geometry) which\\ndrives the appearance variation and changes of the object. Rather than\\nletting the method discover this structure, we explicitly model it (to\\na certain degree) in our work. How one would apply this to deep NNs\\nis indeed an open and interesting question.\\n\\nRegarding the C parameter, for the baseline (SVM) we followed the\\nsuggestions of [12] and [13] and used the value C = 0.002. For the\\nproposed method, we ran experiments with varying amount of data per\", \"viewpoint_and_different_values_of_c_and_we_observed_two_things\": \"firstly, C = 0.002 is optimal value in most of the cases or it is very\\nclose to the optimal and secondly, the performance is stable in the\\nrange C*[0.1, 10], therefore we chose the same value for all the\\nmodels. We are happy to include those results in the supplemental\\nmaterial. Obviously one should do k-fold cross validation on all the\\ntunable parameters jointly, but due to the scarce training data, large\\nsearch space, costly training time and extensive experiments, doing\\nthe proper k-fold cross validation is time consuming and prohibitive.\"}", "{\"title\": \"review of Multi-View Priors for Learning Detectors from Sparse Viewpoint Data\", \"review\": \"The paper presents a method to learn a quadratic regularizer that improves the performance of multi-view object detectors when very little training data is available for each object or view. The regularizer is computed using a sparse correlation matrix to identify similarity amongst feature weights in the detectors for different views of the same object (in some cases relying on a 3D model to help establish correspondence between features). The similarity matrix is then used to define a Laplace regularization term for a standard SVM, thus requiring that a new multi-view detector trained with the regularizer share similar structure across views. Results are presented on several detection tasks using the KITTI (driving/urban) dataset and 3D Object Classes dataset. It is shown that the main advantage is that the method can learn to detect views of novel target objects even when some views of these objects have few or no training examples.\\n\\nOverall the paper is clearly written and the experiments are extensive. The authors broke out all of the different cases [number of examples per view, and which views are missing] to make their point. 
In the regime where the method is intended to help (where very few examples are available for a particular view of a target object) the regularizer does help for the multi-view DPM model considered in the paper.\\n\\nThe main caveat here is that the method seems to assume the use of separate detectors for every subclass and view (and thus each detector requires its own dataset). That really exacerbates the problem of too little data where other methods might not have an issue. This is a nice trick, but since the one-detector-per-object-per-view approach has multiple scalability issues, it looks like the proposed solution also suffers from those barriers. If this approach to multi-view detection is not workable in the near future, what is the high-level idea that we should take from this work? I do not see it.\\n\\nAlso, the \\u201cdense\\u201d sparsity pattern appears to be by far the best performer. This is somewhat interesting by itself, but also detracts from the \\u201csparse\\u201d proposals in the paper (and, of course, the dense approach is not very scalable). Some candid discussion on the consequences of this result might clarify what parts of the system are important.\", \"pros\": \"A relatively simple idea to exploit knowledge of regularity across views in multi-view detectors.\\nPrimary value is in cases where there are few or no examples for a particular view, in which case it does appear to help over baseline approaches.\", \"cons\": \"Relies on a fair amount of prior geometric knowledge.\\nIt is unclear how to apply this to cases where the number of features is much larger than DPM-style models [e.g., deep neural nets] or has deeper difficult-to-interpret layers where geometric information cannot be leveraged.\", \"other\": \"The SVM regularization penalty might need to be tuned for complete fairness of comparisons. Since the regularizer is fundamental to performance and the number of training examples is varied, the penalty setting could alter the testing numbers. (It is possible that the experiments/implementation are set up in such a way that this does not matter much; a note to this effect, if true, would be helpful.)\"}", "{\"title\": \"review of Multi-View Priors for Learning Detectors from Sparse Viewpoint Data\", \"review\": \"This work aims to perform simultaneous detection and viewpoint estimation in the face of having few training examples for many classes or viewpoints, but where many examples are available for a smaller or related set of source instances. To accomplish this, two new types of weight regularization are described, for use with deformable parts models. Both regularizers form a quadratic penalty on the weights by means of a covariance matrix, constructed using models trained on the sources. The first, SVM-MV, constructs the covariance by averaging across all pairs of HoG cells that overlap between adjacent object viewpoints; overlaps are found by projecting to a (provided or guessed) 3D object model. The second, SVM-Sigma, constructs an explicit all-to-all covariance of the weights by averaging across different model instances. SVM-MV extends the ideas of Gao et al. to work across views, while SVM-Sigma breaks from these careful constructions and uses all pairwise interactions that arise. 
The authors evaluate their effectiveness using two datasets, 3D-objects and KITTI, concluding that such regularization enables good performance in these tasks, with SVM-Sigma generally outperforming the other methods.\\n\\nUnfortunately, the exposition is rather dry and can be hard to follow. Illustrative examples and diagrams would help a lot here, though, particularly sketches of the projections and overlaps for SVM-MV. I think it also would help to instantiate model early on, using one of the datasets from the experiments (i.e., in sec 3, linking sim_n, w^s, w^t to concrete instances).\\n\\nI'm also a bit confused on what exactly comprised the source vs target data. For sec 4.1 (3D-objects), how was the data divided between sources (used for the priors) and the targets? Was there any overlap between these, either by datapoint or by object instance? For sec 4.2, p.8 para 1 seems to say that for KITTI, the priors were trained from source data drawn from either from 3D-objects or KITTI (i.e. there are two different cases). In the former case, did viewpoints need to be mapped to transfer between the datasets, and which object classes were used? In the latter case, did the prior data overlap the target data?\", \"pros\": [\"Presents new regularizers that exploit structure relations, learned from the data in cases where there are dense subsets or aggregates\", \"Experiments are detailed\"], \"cons\": [\"Dry and hard to grasp\", \"Could use more illustrations of the method and problem setup\", \"Results presentation confusing at times\"], \"minor_comments\": [\"Fig 1 right, TD2ND: the labels along the rows (y-axis) appear swapped: The text indicates the block with ones should connect with-data to no-data.\", \"I found the italics on occurrences of 'target' and 'source' somewhat distracting; it tended to take my eyes away from the parts of paragraphs I wanted to concentrate on much of the time.\", \"p.5 last para: says there are 9 object classes, but the webpage for this dataset says there are 8?\", \"p.6 (Experiments sec): k in {1,5,10,all} -- would be nice if this said how many 'all' is as well.\", \"Fig 2: figures could use titles\", \"Fig 3: I'm confused about which views were included/excluded for each plot -- are the included views progressive subsets? It looks like the differences are more than that. Maybe a key with on/off bitmap listing each view would help.\"]}", "{\"reply\": \"We would like to thank the reviewer for his valuable comments. We've\\nincluded all suggestions made by the reviewer; they can be found in\\nthe newest paper version on arXiv. Specifically, we added model\\nvisualizations in the supplemental material section (Fig. 7), as well\\nas we improved Fig. 3 and introduced bars visually explaining the\\ntraining setups. Additionally, we incorporated the technical\\nsuggestions proposed by the reviewer.\", \"answers_to_specific_questions\": \"In Sec. 4.1 (3D object classes dataset) each class consists of 10\\ninstances, depicted from 8 different viewpoints, 3 different scales\\nand heights. We use 5 instances for training, 5 for testing, resulting\\nin 360 images per train and test set. During source model training,\\nfrom the training set, we sample 15 images per viewpoint. During\\ntarget model training, we sample K = {1, 5, 10} images per\\nviewpoint. This is done for each class separately. 
We didn't strictly\\nenforce the training data to be non-overlapping among the source and\\ntarget models.\\n\\nIn Sec 4.2 (KITTI), when learning the priors from 3D Object Classes,\\nwe used only the car class. As KITTI and 3D Object Classes are\\ndifferent datasets, there was no data overlap among the training sets\\nfor the source and target models. The viewpoint annotations had to be\\nmapped among the two datasets, which is rather trivial to do. When\\nusing KITTI data only, the source and the target training data are at\\na different level in the class hierarchy (e.g. target data at car type\\nlevel, while source data at car class level), therefore it might\\nhappen that the source and the target data overlap.\\n\\nRegarding the 3D Object Classes dataset, it actually comes with 10\\nclasses, all previous work excludes the monitor class from the\\nexperiments, while the head class is included in more recent work,\\nthus the 9 classes.\\n\\nIn the experiments section, k = all means that all training data for\\nthe subordinate category has been used. The amount of training data\\nvaries across the subordinate classes. Figures 4, 5 and 6 provide\\ntraining data distributions per class.\"}", "{\"reply\": \"We would like to thank the reviewer for the valuable comments. We\\nuploaded a new paper version and we included the prior work [a].\", \"regarding_the_questions\": \"- Indeed we used the car class from the 3D Object classes dataset to learn\\n the 3D object priors that are later on used to train target class\\n detectors on KITTI.\\n\\n- On both KITTI and 3D Object Classes, we use CAD data to establish the\\n pairs of corresponding cells across views which are used in the case\\n of SVM-MV.\\n\\n- 'Bootstrapping' refers to the method from classical statistics\\n where the data is re-sampled multiple times in order to provide an\\n estimate of the underlying distribution. Specifically, we train N\\n source models by sub-sampling K positive training examples from the\\n training set for each of the source models.\\n\\n- Regarding the C parameter, for the baseline (SVM) we followed the\\n suggestions of [12] and [13] and used the value C = 0.002. For the\\n proposed method, we ran experiments with varying amount of data per\", \"viewpoint_and_different_values_of_c_and_we_observed_two_things\": \"firstly, C = 0.002 is optimal value in most of the cases or it is\\n very close to the optimal and secondly, the performance is stable in\\n the range C*[0.1, 10], therefore we chose the same value for all the\\n models. We are happy to include those results in the supplemental\\n material. Obviously one should do k-fold cross validation on all the\\n tunable parameters jointly, but due to the scarce training data,\\n large search space, costly training time and extensive experiments,\\n doing the proper k-fold cross validation is time consuming and\\n prohibitive.\\n\\n- Table 2 is correct. The models are trained differently, but\\n they result in very similar performance.\"}" ] }
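Editorial note, not part of the record above: as the reviews describe it, the SVM-Sigma prior replaces the usual isotropic L2 penalty with a quadratic form 0.5 * w^T Sigma^{-1} w, where Sigma is estimated from bootstrapped source detectors. The sketch below shows only that generic idea on a flat weight vector; the shrinkage term, learning rate, and toy data are our own assumptions (the HOG-cell structure and the sparse SVM-MV relations are not modeled here), and C = 0.002 is the value quoted in the authors' responses.

```python
import numpy as np

def estimate_prior_covariance(source_weights, shrinkage=0.1):
    """Empirical covariance of bootstrapped source-detector weights.

    source_weights: (n_source_models, n_features). Shrinkage toward the
    identity keeps the matrix invertible when few source models exist;
    this is an assumption, not necessarily what the paper does.
    """
    sigma = np.cov(source_weights, rowvar=False)
    return (1.0 - shrinkage) * sigma + shrinkage * np.eye(sigma.shape[0])

def train_svm_with_prior(X, y, sigma, C=0.002, lr=1e-3, epochs=200):
    """Sub-gradient descent on 0.5 * w^T Sigma^{-1} w + C * hinge loss."""
    sigma_inv = np.linalg.inv(sigma)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        margins = y * (X @ w)
        active = margins < 1.0                       # margin-violating examples
        hinge_grad = -(y[active][:, None] * X[active]).sum(axis=0)
        w -= lr * (sigma_inv @ w + C * hinge_grad)
    return w

# Toy usage: 5 hypothetical source detectors and 100 training windows.
rng = np.random.default_rng(0)
source_w = rng.normal(size=(5, 20))
X, y = rng.normal(size=(100, 20)), np.sign(rng.normal(size=100))
w_target = train_svm_with_prior(X, y, estimate_prior_covariance(source_w))
```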
MMG-yUjRFZqpn
A Simple Model for Learning Multilingual Compositional Semantics
[ "Karl Moritz Hermann", "Phil Blunsom" ]
Distributed representations of meaning are a natural way to encode covariance relationships between words and phrases in NLP. By overcoming data sparsity problems, as well as providing information about semantic relatedness which is not available in discrete representations, distributed representations have proven useful in many NLP tasks. In particular, recent work has shown how compositional semantic representations can successfully be applied to a number of monolingual applications such as sentiment analysis. At the same time, there has been some initial success in work on learning shared word-level representations across languages. We combine these two approaches by proposing a method for learning compositional representations in a multilingual setup. Our model learns to assign similar embeddings to aligned sentences and dissimilar ones to sentences which are not aligned, while not requiring word alignments. We show that our representations are semantically informative and apply them to a cross-lingual document classification task where we outperform the previous state of the art. Further, by employing parallel corpora of multiple language pairs we find that our model learns representations that capture semantic relationships across languages for which no parallel data was used.
[ "representations", "simple model", "multilingual compositional semantics", "languages", "model", "meaning", "natural way", "covariance relationships", "words", "phrases" ]
submitted, no decision
https://openreview.net/pdf?id=MMG-yUjRFZqpn
https://openreview.net/forum?id=MMG-yUjRFZqpn
ICLR.cc/2014/conference
2014
{ "note_id": [ "ONvRZkPuanO6s", "JFuEJvPoL0JVk", "ddHnw4OEnrw3R", "kr2Dk3gUX7iFy", "_6Xb6-WgbC_Yx", "OOhePdBmJiPK8", "KvQCK0VJuylci" ], "note_type": [ "review", "review", "review", "review", "review", "review", "review" ], "note_created": [ 1393032360000, 1391787780000, 1392662760000, 1391573040000, 1392555240000, 1393364940000, 1393247760000 ], "note_signatures": [ [ "Karl Moritz Hermann" ], [ "anonymous reviewer 9aaf" ], [ "Karl Moritz Hermann" ], [ "anonymous reviewer 57c7" ], [ "anonymous reviewer 048b" ], [ "Karl Moritz Hermann" ], [ "Ryan Kiros" ] ], "structured_content_str": [ "{\"review\": \"An updated version has now been submitted to ArXiv. I'm not entirely sure how long it will require until it will be publicly viewable. We've changed the title to 'Multilingual Distributed Representations without Word Alignment' in response to the questions raised about the compositional nature of this model.\\n\\nThe other points have been addressed as stated in my previous post.\"}", "{\"title\": \"review of A Simple Model for Learning Multilingual Compositional Semantics\", \"review\": \"This paper proposes a simple model to learn word embeddings in a\\nbilingual setting. The model learns embeddings at the sentence-pair\\nlevel, where aligned sentences are similarly represented. This simple\\nmodel does not rely on word alignments or MT system.\\n\\nThe paper is well written and presents convincing results. My only\\nconcern is why the authors use the term 'Models of Compositional\\nDistributed Semantics' for a model that is more related to annotation\\ntransfer. Moreover, the continuous representation of a sentence is the\\nsum of word embeddings. This more related to a bag of word model than\\na model that can handle compositionality. A minor remark about the\", \"results\": \"the authors observe that BICVM+ outperforms the BICVM model\\nwhen training on English data, but performs worse in the opposite\\ndirection. Could you comment on that.\\n\\nThe authors could read the paper of Lauly et al. at the last NIPS\\nworkshop on Deep Learning. This is clearly related to this\\nwork. Finally, I wonder what could be the performances with more\\ncomplex model. For instance, the same authors proposed a translation\\nmodel based on recurrent Net that could be used for this task.\"}", "{\"review\": \"We would like to thank the anonymous reviewers for their thorough reviews and comments regarding our submission.\\n\\nSeveral reviewers took issue with our use of the term 'compositional semantics' in relation to an additive composition function (i.e. a bag-of-words model). I agree with reviewer 57c7, who pointed out that such models have been described as compositional in the past, but that the term might be misleading in light of more recent development in compositional semantics. I will update the paper to improve the wording regarding this term, and to clarify the limitations of our model versus fuller compositional approaches.\\n\\nAs pointed out by reviewers 9aaf and 048b, it would be interesting to see how our objective function performs in combination with a more complex model. I agree that the current BoW model is likely to be insufficient for tasks such as machine translation. The key contribution of the current submission are two: first, learning word embeddings without word alignment, and second the multilingual objective function that we use to achieve this and its parallel use for semantic transfer. 
I agree that it would be interesting to develop this further by using more complex composition models and also by attempting other tasks such as machine translation - we are currently studying these possibilities for further work.\\n\\nAlso, thank you for pointing out the Lauly et al. paper from the NIPS workshop; this seems indeed very relevant and I will incorporate and reference this paper accordingly.\", \"to_answer_the_question_raised_about_sampling_by_reviewer_57c7\": \"we sampled negative examples at every epoch. We will update the paper to address this and the other minor remarks made in the three reviews, and submit an updated paper to ArXiv in the next few days.\"}", "{\"title\": \"review of A Simple Model for Learning Multilingual Compositional Semantics\", \"review\": \"The paper considers learning cross-lingual representations of words using parallel data aligned at the level of sentences. A representation of a sentence is just a sum of embeddings of words in the sentence. These representations for pairs of sentences are learned to be similar. Specifically, a ranking objective is used: for every sentence x in L1, the representation of the aligned sentence in L2 should be closer to the representation of x than to representations of (a random sample of) other sentences in L2. The resulting representations are used in transferring a document classifier across languages (without retraining - i.e. so called 'direct transfer'). Interestingly, the results are (mostly) better than these with cross-lingual word embeddings of Klementiev et al. (2012) learned using automatically word aligned sentences (as well as significantly better than a machine translation baseline -- which applies the classifier to automatically translated documents).\\n\\nI find the paper quite interesting and well written. The results are fairly impressive as well. \\n\\nHowever, I am not entirely convinced that calling this 'multilingual compositional semantics' is very appropriate. Though I agree that a more complex compositional model is probably not necessary for the document classification task (after all, a unigram model achieves competitive results on classifying RCV documents), it seems a bit misleading to call a bag-of-words approach compositional. After all, Klementiev et al. also sum word representation to yield a representation of a document. (However, such added compositional model, of course, have been considered in the past and called compositional -- e.g., Mitchell and Lapata (2008)). From my perspective, the interesting aspect here is learning word representation without using word alignment information. In this way, the work is similar to the paper of Lauly et al. presented at the NIPS Deep Learning workshop (http://arxiv.org/abs/1401.1803). However, their results are not directly comparable as they used a different test set. 
My concern is that a different learning objective would be needed if more expressive compositional models are used (perhaps combining both similarity across languages and the reconstruction error as in Socher (EMNLP 2011)).\", \"minor\": \"-- it would be interesting to see results on other language pairs (e.g., French was already used in training, but not in testing, even though RCV contains articles in French)\\n-- also I am wondering how performance varies depending on the size of parallel data (as this might be a concern for low resource languages where direct transfer approaches are especially attractive)\\n-- It was not entirely clear how negative examples are sampled (formula 6): are they chosen at every epoch (as, e.g., in Rendle et al. (UAI 2009)), or chosen once for every sentence pair and then kept fixed during training?\\n-- section 2.1, par 1: 'multi-agent learning' -> 'multi-task learning' ?\"}", "{\"title\": \"review of A Simple Model for Learning Multilingual Compositional Semantics\", \"review\": \"This paper introduces an interesting model to learn single word vector embeddings for 2 languages simultaneously.\\nIt is applied to a classification task.\\n\\nThe paper is very clear and well written.\", \"it_does_not_seem_to_actually_learn_compositional_semantics_in_the_usual_sense\": \"\", \"http\": \"//en.wikipedia.org/wiki/Principle_of_compositionality\\nPrinciple of Compositionality is the principle that the meaning of a complex expression is determined by the meanings of its constituent expressions and the rules used to combine them.\\n\\nCertainly, averaging all words in a bag of words is not a compositional rule that would allow people to retrieve the meaning. From wikipedia: \\n'The principle of compositionality states that in a meaningful sentence, if the lexical parts are taken out of the sentence, what remains will be the rules of composition. Take, for example, the sentence 'Socrates was a man'. Once the meaningful lexical items are taken away\\u2014'Socrates' and 'man'\\u2014what is left is the pseudo-sentence, 'S was a M'. The task becomes a matter of describing what the connection is between S and M.'\\nThe connection between S and M would not be retrievable from a bag of words representation.\\n\\nOn a related note, the model could not be used for (presumable the final goal of) machine translation in its current form.\\n\\nIt would be great to see a comparison with the work from the same lab of Kalchbrenner and Blunsom.\\n\\nDespite its problems, it seems an interesting paper.\"}", "{\"review\": \"Dear Ryan,\\n\\nThank you for your comments on our submission. I think you are raising a valid point concerning the dimensionality of our embeddings - this is something we should have explained better in the paper. Effectively, we started using d=128 in the beginning (chosen somewhat arbitrarily based on prior work on distributed representations). Subsequently, we tried tuning this parameter a little bit, but it didn't make much of a difference and so we stuck with the original setting.\\n\\nFollowing your comments I have re-run all experiments with d=40. The results for this are as follows (matching Table 1):\\n\\nBiCVM 83.7 71.4\\nBiCVM+ 86.2 76.9\\n\\nIf you think this would be useful, we are happy to include these and other results with changing dimensionalities in the paper.\\n\\nBest\\nKarl Moritz\"}", "{\"review\": \"The experiments of Klementiev et al. use d=40 dimensional word embeddings. How come you chose to use d=128? 
How do we know your improvements over Klementiev et al. are not just from using 3x the embedding dimensionality? I'm not sure that this is a fair comparison.\"}" ] }
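Editorial note, not part of the record above: the model under discussion composes a sentence vector as the plain sum of its word embeddings and trains with a noise-contrastive margin objective that pulls aligned sentence pairs together and pushes sampled non-aligned pairs apart. The sketch below is one plausible reading of such an objective; the squared-Euclidean distance, the margin of 1, and the toy vocabularies are assumptions, and only the d = 40 dimensionality (raised in the exchange above) is taken from the thread.

```python
import numpy as np

def sentence_vec(word_ids, embeddings):
    """Additive (bag-of-words) composition: the sum of word embeddings."""
    return embeddings[word_ids].sum(axis=0)

def ranking_loss_and_grads(src_ids, tgt_ids, noise_ids, E_src, E_tgt, margin=1.0):
    """Hinge loss pushing an aligned pair closer than a sampled negative.

    loss = max(0, margin + ||a - b||^2 - ||a - n||^2), where a is the source
    sentence vector, b its aligned translation and n a noise sentence.
    Returns the loss and gradients w.r.t. the three sentence vectors; each
    word in a sentence receives its sentence's gradient, since the
    composition is a plain sum.
    """
    a = sentence_vec(src_ids, E_src)
    b = sentence_vec(tgt_ids, E_tgt)
    n = sentence_vec(noise_ids, E_tgt)
    loss = margin + np.sum((a - b) ** 2) - np.sum((a - n) ** 2)
    if loss <= 0.0:
        return 0.0, np.zeros_like(a), np.zeros_like(b), np.zeros_like(n)
    return loss, 2 * (n - b), -2 * (a - b), 2 * (a - n)

# Toy usage with d = 40 embeddings and hypothetical 1000-word vocabularies.
rng = np.random.default_rng(0)
E_en, E_de = rng.normal(scale=0.1, size=(2, 1000, 40))
loss, g_a, g_b, g_n = ranking_loss_and_grads([1, 5, 7], [2, 9], [3, 3, 8], E_en, E_de)
```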
gg4nKrblw0gkf
Bounding the Test Log-Likelihood of Generative Models
[ "Yoshua Bengio", "Li Yao", "KyungHyun Cho" ]
Several interesting generative learning algorithms involve a complex probability distribution over many random variables, with intractable normalization constants or latent variable normalization. Some of them may not even have an analytic expression for the unnormalized probability function, nor a tractable approximation. This makes it difficult to estimate the quality of these models, once they have been trained, or to monitor their quality (e.g. for early stopping) while training. A previously proposed method is based on constructing a non-parametric density estimator of the model's probability function from samples generated by the model. We revisit this idea, propose a more efficient estimator, and prove that it provides a lower bound on the true test log-likelihood, and an unbiased estimator as the number of generated samples goes to infinity, although one that incorporates the effect of poor mixing (making the estimated likelihood worse, i.e., more conservative).
[ "test", "quality", "model", "generative models", "complex probability distribution", "many random variables", "intractable normalization constants", "latent variable normalization", "analytic expression" ]
submitted, no decision
https://openreview.net/pdf?id=gg4nKrblw0gkf
https://openreview.net/forum?id=gg4nKrblw0gkf
ICLR.cc/2014/conference
2014
{ "note_id": [ "xJqhJ8AlT4JyR", "2xMtxps-6j3_7", "Zem0HS_9PVHWt", "YAwpMHz_OUAvC", "kk5t-iaKmykSQ", "wwlZDuMrHTDr6", "XnKCX8DECzXS1" ], "note_type": [ "comment", "review", "comment", "review", "review", "review", "comment" ], "note_created": [ 1391971980000, 1391861580000, 1391971980000, 1391971980000, 1391849220000, 1392141360000, 1392710760000 ], "note_signatures": [ [ "KyungHyun Cho" ], [ "anonymous reviewer 0661" ], [ "KyungHyun Cho" ], [ "KyungHyun Cho" ], [ "anonymous reviewer 60ea" ], [ "anonymous reviewer 16f7" ], [ "KyungHyun Cho" ] ], "structured_content_str": [ "{\"reply\": \"Dear Reviewer (60ea),\\n\\nWe thank you for the thorough and insightful comment. Allow us to respond to some of your comments below.\\n\\n'When would the authors expect the biased CSL method to be useful in practice?'\\n\\nAs shown in Fig. 1, the biased CSL well reflects the ordering of the performances of different models correctly, however, optimistic. This is true even in the case where only a single MCMC step was taken from each test sample. As stated in Sec. 6, we believe the biased CSL will be useful in comparing models when there is no alternative way to compute/approximate the log-likelihood (such as GSN).\\n\\n'which one is closer to the truth?'\\n\\nAs you have correctly mentioned, it is not possible to answer this exactly for those models in Table 1. However, the result in Table 2 suggests that with enough samples, both the CSL and the estimate using AIS approach the true log-likelhood closely.\\n\\n'Is there a reason that GSN is so much better?'\\n\\nOne important factor affecting the CSL estimate is the 'mixing' rate of MCMC chain. As has been shown earlier, the MCMC sampling by GSN mixes among different modes very quickly, potentially leading to more accurate CSL estimate with less number of samples. \\n\\n'an RBM trained with PCD is thought to have much better likelihood than an RBM trained with CD. Is this reflected in CSL estimates?'\\n\\nAs the samples generated from an RBM trained with CD are generally bad (most of them tend to be from suprious modes), we believe this will be well reflected in CSL estimates. Also, our preliminary experiment with CD revealed the same tendency (not included in the paper).\\n\\nThank for you for pointing out typos in the paper. We will correct them in the next version.\"}", "{\"title\": \"review of Bounding the Test Log-Likelihood of Generative Models\", \"review\": \"The paper proposes a method for estimating the log probability of any probabilistic\\nmodel that can generate samples. The method builds a local density estimator around\\nthe samples using the model's conditional probability, \\nwhich is used to evaluate the log probability of a test set. An important\\nselling point of the method is that it evaluates the probabilistic model and the \\nsampling procedure jointly, and that it is asymptotically consistent, in the sense\\nthat the estimates converge to the true likelihood as the number of samples approaches\\ninfinity.\\n\\nThis work is quite novel, and it places the idea of used by Breuleux et al. in a \\nrigorous framework. Empirically, the method works well on small models, although\\nit exhibits very substantial divergence from AIS on larger models, as shown in Table 1.\\n\\nPerhaps the greatest weakness of this method, which is worth discussing, is that\\nthe number of samples that's needed in order to accurate compute a log probability\\ngrows exponentially with the entropy of the distribution. 
For an example, consider\\nthe dataset consisting the concatenations of of 10 randomly chosen MNIST digits. It is \\nfairly clear any sample set <<< 10^10 will vastly underestimate the log probability\\nof a perfectly good sample. That is unfortunate, because it means that the method will\\nnot work well on complicated models of high entropy distributions, such as images or speech. \\n\\nThis weakness notwithstanding, the method is very adequate for model comparison.\", \"to_summarize\": \"Pros: interesting method for obtaining conservative underestimates of the log probability,\\nworks with nearly any model.\", \"cons\": \"method's complexity is exponential in the distribution's entropy; the proposed fix is no longer\\nconservative.\"}", "{\"reply\": \"Dear Reviewer (0661),\\n\\nWe thank you for the thorough and insightful comment. Allow us to respond to some of your comments below.\\n\\n'it places the idea of used by Breuleux et al. in a rigorous framework'\\n\\nWe agree that the proposed method is closely related to that by Breuleux et al. However, we claim that it is an improvement over the method by Breuleux et al. in two ways:\\n\\n(1) the CSL is more efficient because each sample of latent variables h can cover many x's.\\n(2) the CSL does not have any tuning parameter such as bandwidth.\\n\\n'it exhibits very substantial divergence from AIS on larger models, as shown in Table 1.'\\n\\nAs you have pointed out earlier in your review, the proposed CSL estimator does not only evaluate the model itself but also the sampling procedure. When 'mixing' by MCMC sampling is fast (as in GSN), the CSL estimate tend to converge quickly, while in the opposite case (as in RBM and DBN), the convergence is slow. \\n\\n'the number of samples that's needed in order to accurate compute a log probability grows exponentially with the entropy of the distribution'\\n\\nThis is true, but one thing to be noted is that we are not aware of any alternative approach that does not suffer from this problem, when an approach such as AIS is not applicable. Furthermore, we believe the fact that the CSL estimator uses the samples of latent variables h which are in a more abstract space than the raw input space, may make sampling-based methods such as the proposed CSL greatly reduce the curse of dimensionality.\\n\\nNonetheless, we agree that this is where future work lies, and we are indeed currently exploring different ways of exploiting the presence of a high-level (deep) representation to make the problem of likelihood estimation much easier and its convergence faster. Much more work is needed before these new ideas can be proven right and this paper should instead be judged in comparison with the past published work.\"}", "{\"review\": \"Dear Reviewer (0661),\\n\\nWe thank you for the thorough and insightful comment. Allow us to respond to some of your comments below.\\n\\n'it places the idea of used by Breuleux et al. in a rigorous framework'\\n\\nWe agree that the proposed method is closely related to that by Breuleux et al. However, we claim that it is an improvement over the method by Breuleux et al. 
in two ways:\\n\\n(1) the CSL is more efficient because each sample of latent variables h can cover many x's.\\n(2) the CSL does not have any tuning parameter such as bandwidth.\\n\\n'it exhibits very substantial divergence from AIS on larger models, as shown in Table 1.'\\n\\nAs you have pointed out earlier in your review, the proposed CSL estimator does not only evaluate the model itself but also the sampling procedure. When 'mixing' by MCMC sampling is fast (as in GSN), the CSL estimate tend to converge quickly, while in the opposite case (as in RBM and DBN), the convergence is slow. \\n\\n'the number of samples that's needed in order to accurate compute a log probability grows exponentially with the entropy of the distribution'\\n\\nThis is true, but one thing to be noted is that we are not aware of any alternative approach that does not suffer from this problem, when an approach such as AIS is not applicable. Furthermore, we believe the fact that the CSL estimator uses the samples of latent variables h which are in a more abstract space than the raw input space, may make sampling-based methods such as the proposed CSL greatly reduce the curse of dimensionality.\\n\\nNonetheless, we agree that this is where future work lies, and we are indeed currently exploring different ways of exploiting the presence of a high-level (deep) representation to make the problem of likelihood estimation much easier and its convergence faster. Much more work is needed before these new ideas can be proven right and this paper should instead be judged in comparison with the past published work.\"}", "{\"title\": \"review of Bounding the Test Log-Likelihood of Generative Models\", \"review\": \"This paper proposes an estimator for the log-likelihood of intractable models with latent variables. The approach is simple in that it has no free parameters, doesn\\u2019t require an explicit likelihood, and only needs samples from the model. The approach is most useful for model comparison since the estimate is conservative rather than optimistic.\\n\\nI enjoyed reading this paper. The proposed method is quite novel and elegant and has the potential to be a useful tool for model comparison. One issue is that the estimator seems to require a large number of samples in order to converge, and this potentially exacerbated by increasing model size. As stated in the paper, this is likely to do with the convergence of MCMC within the model itself. One empirical test of this would be to compare the efficiency of the estimator with exact samples vs MCMC in e.g., a small RBM.\\n\\nThe biased CSL is also novel, but seems to be even more optimistic than AIS. The argument of the paper is based on the idea that we would prefer conservative estimates to optimistic estimates for model comparison. When would the authors expect the biased CSL method to be useful in practice? How many steps would be required before biased CSL matches AIS?\\n\\nMinor thoughts and some found typos below.\\n\\n1. In table 1 the AIS and CSL estimates are vastly different. One is optimistic and one is conservative - which one is closer to the truth? Is there a reason that GSN is so much better? Obviously the truth is impossible to determine, but it is clear that more samples are needed before the estimate converges.\\n2. The RBM used in table 2 is quite small, using only 5 hidden units. 20 hidden units is slightly larger but still tractable. It would be good to see how the efficiency of the estimator is affected by model size.\\n3. 
An RBM trained with PCD is thought to have much better likelihood than an RBM trained with CD. Is this reflected in CSL estimates?\\n\\nformulat -> formulate (section 1)\\ncollecte -> collect (section 2)\\nin -> in (Monte-Carlo estimator in section 4)\\n30 steps -> 300 steps (or the the legend in Figure 1 has a typo, section 6)\\nmode -> model (section 7)\"}", "{\"title\": \"review of Bounding the Test Log-Likelihood of Generative Models\", \"review\": \"In this paper, the authors propose a new way to estimate the probability of data under a probabilistic model from which sampling is hard but for which an efficient Markov chain procedure exists. They first present an asymptotically unbiased estimator, then a more efficient biased estimator.\\n\\nThe idea is undeniably interesting. Some of the most used generative models satisfy these constraints and being able to calculate the probability of data under these models is crucial to comparing them. However, the results presented in this paper are underwhelming. For models where AIS was usable (the DBN, the DBM and the RBM), the CSL results wildly differ from the AIS ones. Since the results on the small RBM (Table 2) give a clear advantage to AIS, I am inclined to believe these results more.\\n\\nAnother caveat, unfortunately extremely difficult to avoid, is that the effectiveness of these methods can only be empirically proven on tiny models where mixing problems do not occur. I really do not blame the authors for that but this really limits the potential impact of the method.\\n\\nExperiment in Figure 1 is also very light to conclude on the effectiveness of Biased CSL. Binary MNIST is a very particular dataset and this experiment does not convince me that it is actually usable to compare models, especially of different types.\", \"conclusion\": \"this paper does not prove the effectiveness of the proposed method. The propositions are not worth publication by themselves.\", \"other_comments\": [\"CSL is only a lower bound on the true log probability of the data in expectation. This should be made clearer in the paper.\", \"The pseudo code should either be commented or removed entirely. As it is, it is only useful to people who already understood the algorithm.\", \"Could you give more details on the parameters for AIS? How many chains? How many intermediate temperatures? How does the computation time compare to CSL?\"]}", "{\"reply\": \"Dear Reviewer (16f7),\\n\\nWe thank you for the thorough and insightful comment. Allow us to respond to some of your comments below.\\n\\n'For models where AIS was usable (the DBN, the DBM and the RBM), the CSL results wildly differ from the AIS ones.'\\n\\nThe proposed CSL estimator does not only reflect the model's generative capability but also the MCMC sampler used to collect samples of the latent variables. We believe the higher variance (or more slowly converging) CSL estimates, compared to AIS, are due to the inefficiency, or poor mixing, of Gibbs sampling in well-trained RBMs. However, keep in mind that the use case (and motivation) for CSL was to estimate the likelihood of GSNs, for which AIS is not available and where mixing tends to be much better.\\n\\nAs we have already stated in our responses to the other reviewers' comments, the proposed CSL estimator seems to be the only one that can be used for models that have no explicit formula for computing the probability (either normalized or unnormalized). 
However, we agree that improving the variance of CSL is an important objective for future work and we are studying options.\\n\\n'Another caveat, unfortunately extremely difficult to avoid, is that the effectiveness of these methods can only be empirically proven on tiny models where mixing problems do not occur'\\n\\nWe agree, and this is a problem in general with any estimator. For instance, even with AIS, any empirical evidence that compares it with the true value can be only given for tiny models. However, we believe this does not and should not discourage research and development of new estimators, especially, considering that some generative models such as GSNs do not have any better alternative at the moment.\\n\\n'Binary MNIST is a very particular dataset and this experiment does not convince me that it is actually usable to compare models, especially of different types'\\n\\nWe agree with you that more experiments with different types of data may support our claim better. In a next version of the paper, the proposed estimators should be tested on other datasets. We have started experiments on the TFD dataset and will be able to add these results to the paper before the conference.\\n\\n'The propositions are not worth publication by themselves.'\\n\\nWe agree that the math in this paper is very simple. However, please consider the following contributions:\\n\\n(1) We improve over a previously available likelihood estimator (Breuleux et al) for models such as GSNs\\n - by sampling over h rather than over x, making CSL more efficient because each h can cover many x's in a way that should be better than a poorly informed kernel density (e.g. centered on a sampled x)\\n - by not requiring a bandwidth hyper-parameter to be tuned (just for the purpose of estimating the likelihood)\\n(2) We study experimentally the properties of this estimator and compare it to exact and AIS estimates.\\n(3) We introduce a biased variant and experiments find it to order models well. \\n\\n'CSL is only a lower bound on the true log probability of the data in expectation. This should be made clearer in the paper.'\\n\\nIndeed. We will make the text more clear in the revision.\\n\\n'The pseudo code ... is only useful to people who already understood the algorithm.'\\n\\nWe do not understand what you mean when writing that it is only useful to those who already understood the algorithm. We would appreciate if you could further explain the problem with the presented algorithm. We will then make changes accordingly.\\n\\n'Could you give more details on the parameters for AIS? How many chains? How many intermediate temperatures? How does the computation time compare to CSL?'\\n\\nWe used 100 independent AIS runs with 30,000 chains each. The chains were unevenly distributed between the inverse temperature 0 (independent variables) and 1 (true model distribution) such that there were 10k chains between 0 and 0.5, another 10k chains between 0.5 and 0.9, and the remaining 10k chains between 0.9 and 1. Hence, the computation required for AIS is roughly equivalent to computing the CSL estimates with 1.5 million samples, considering that the CSL need to compute the conditional probability for a test sample. For instance, the time taken by the AIS estimator is somewhere between those taken by the CSL with 10k and 50k samples in Table 1.\"}" ] }
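Editorial note, not part of the record above: the CSL estimator debated in these reviews scores a test point x by averaging the model's conditional P(x | h) over latent samples h collected from the model's own Markov chain, which makes the estimate conservative (a lower bound in expectation) and sensitive to how well the chain mixes. The sketch below shows only that final averaging step, with a log-sum-exp for numerical stability; the factorized Bernoulli toy conditional and all sizes are hypothetical stand-ins, since in the paper the h samples come from the trained RBM/DBN/GSN sampler itself.

```python
import numpy as np

def csl_log_likelihood(test_x, latent_samples, log_cond_fn):
    """CSL-style estimate: mean over test points of log (1/S) sum_s P(x | h_s)."""
    estimates = []
    log_S = np.log(len(latent_samples))
    for x in test_x:
        log_conds = np.array([log_cond_fn(x, h) for h in latent_samples])
        m = log_conds.max()                          # log-sum-exp for stability
        estimates.append(m + np.log(np.exp(log_conds - m).sum()) - log_S)
    return float(np.mean(estimates))

# Toy stand-in for P(x | h): a factorized Bernoulli with sigmoid(W h) means.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.5, size=(10, 4))

def log_p_x_given_h(x, h):
    p = 1.0 / (1.0 + np.exp(-(W @ h)))
    return float(np.sum(x * np.log(p) + (1.0 - x) * np.log(1.0 - p)))

h_samples = rng.normal(size=(50, 4))   # would be Markov-chain samples in practice
x_test = rng.integers(0, 2, size=(5, 10)).astype(float)
print(csl_log_likelihood(x_test, h_samples, log_p_x_given_h))
```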
_wzZwKpTDF_9C
Exact solutions to the nonlinear dynamics of learning in deep linear neural networks
[ "Andrew Saxe", "James L. McClelland", "Surya Ganguli" ]
Despite the widespread practical success of deep learning methods, our theoretical understanding of the dynamics of learning in deep neural networks remains quite sparse. We attempt to bridge the gap between the theory and practice of deep learning by systematically analyzing learning dynamics for the restricted case of deep linear neural networks. Despite the linearity of their input-output map, such networks have nonlinear gradient descent dynamics on weights that change with the addition of each new hidden layer. We show that deep linear networks exhibit nonlinear learning phenomena similar to those seen in simulations of nonlinear networks, including long plateaus followed by rapid transitions to lower error solutions, and faster convergence from greedy unsupervised pretraining initial conditions than from random initial conditions. We provide an analytical description of these phenomena by finding new exact solutions to the nonlinear dynamics of deep learning. Our theoretical analysis also reveals the surprising finding that as the depth of a network approaches infinity, learning speed remains finite: for a special class of initial conditions on the weights, very deep networks incur only a finite delay in learning speed relative to shallow networks. We further show that, under certain conditions on the training data, unsupervised pretraining can find this special class of initial conditions, thereby providing analytical insight into the success of unsupervised pretraining in deep supervised learning tasks.
[ "nonlinear dynamics", "initial conditions", "dynamics", "deep learning", "networks", "weights", "special class", "unsupervised pretraining", "exact solutions" ]
submitted, no decision
https://openreview.net/pdf?id=_wzZwKpTDF_9C
https://openreview.net/forum?id=_wzZwKpTDF_9C
ICLR.cc/2014/conference
2014
{ "note_id": [ "e4N34lBE3RWox", "dDibL51ebyDSr", "eeQ8Ai9IvaA-o", "Du1surK3sxC6t", "TTDzYRnZ-UTo_", "IIFDI_qHXBNmK" ], "note_type": [ "review", "review", "review", "review", "review", "review" ], "note_created": [ 1392137340000, 1392845640000, 1392173880000, 1392167340000, 1390429260000, 1391520480000 ], "note_signatures": [ [ "anonymous reviewer 733d" ], [ "Andrew Saxe" ], [ "anonymous reviewer 9c88" ], [ "Ian Goodfellow" ], [ "anonymous reviewer a044" ], [ "Andrew Saxe" ] ], "structured_content_str": [ "{\"title\": \"review of Exact solutions to the nonlinear dynamics of learning in deep linear neural networks\", \"review\": \"This paper analyzes gradient descent learning in deep networks of linear units. While such networks are trivially equivalent to 1 layer linear transforms, the authors argue that their learning dynamics should act as a tractable analogy for learning in non-linear networks. Under certain simplifying assumptions they are able to produce some analytic expressions for how gradient descent learning acts in the idealized continuous time case.\\n\\nThere is also an attempt to justify pre-training in terms of this analysis that I find not particularly convincing, as here the 'pre-training' amounts to solving the problem almost in the linear case but certainly not in the linear case. \\n\\nA new addition to this paper is a discussion non-linear networks and the role of initialization, which includes some interesting numerical simulations demonstrating that certain choices of scaling constant under a particular 'orthogonal' initialization scheme. I like this part of the paper the best. However, it is not the focus of the paper and should be developed further (perhaps in another paper?).\\n\\nI should say that I reviewed this paper before. It is in many ways improved from the original version. A few problems still remain, and there are also many new ones (page 8 in particular). However, I'm mostly satisfied with it and would recommend acceptance. I'm hoping the authors can answer my various question, as some parts of the paper either confused me and possibly contain mistakes.\\n\\n\\nIn terms of the main content, the central issue I have with this paper is that the special initial conditions in which in analysis is done are essentially a partial solution to an SVD problem, which if it were solved fully, would give the optimal weights of the linear network in one step. In particular, the assumptions used require that the weight matrices share singular vectors in a way that basically turns the optimization across all layers into a decoupled set of scalar optimizations. In non-linear networks, where the optimal solution isn't given by an SVD, it is harder to believe that this kind of partial SVD would help as much as it does in the linear case (or perhaps at all).\\n\\nI appreciate the attempt to make the analysis more rigorous in terms of 'learning speed'. Although perhaps a better way to work out a realistic lambda would be to bound the error one gets by taking the continuous time approximation of the original difference equations and choose lambda based on when this error can be argued to have a negligible effect. A lambda which has been chosen via some kind of curvature criterion will generally give a more stable iteration, but there is no guarantee that this will imply a reasonable correspondence between the discrete and continuous time versions, where mere stability isn't enough.\\n\\n\\n\\n\\n\\nAbs/Intro: 'edge of chaos'? 
This is only explained late into the paper, and sounds bizarre without any context.\", \"page_3\": \"'column of W^{21}' Should this have a bar over it?\", \"page_4\": \"When you say that the fixed point structure of gradient descent learning was worked out in [19], you don't mention on what kinds of networks this analysis was done.\", \"page_5\": \"There are formatting issues at the top of the page. Some text is cut off.\", \"page_6\": \"The word 'caveat' (used twice) doesn't seem appropriate here. I usually don't think of special cases of definitions, or simplifying assumptions, as 'caveats'.\", \"page_7\": \"You should mention this is a classification task and also that the details are in appendix C. The way it is written now it sounds like only the details of the choice of learning rate are in the appendix.\", \"page_8\": \"The discussion about what pre-training should do didn't make sense to me in multiple places.\\n\\nFor example, previously you said that you were assuming Sigma^11 = I, but I guess this is no longer the case. You then say that the product of the weights converges to Sigma^31(Sigma^31)^-1. But how can this be true unless N2 >= N1,N3? When N2 < N1,N3, the product of the weight matrices has rank at most N2, while Sigma^31(Sigma^31)^-1 will be full-rank in general.\\n\\nYou defined a^alpha to be a vector before, but now it is a 'strength'?\\n\\nWhen you consider taking W21 = R2Q^T where R2 is 'an arbitrary orthogonal matrix', what does this actually mean? Orthogonal matrices are square, and so is Q, so does that make W21 square (i.e. N1 = N2)? And if R2 is truly arbitrary, then it is meaningless, as any orthogonal matrix M can be written as R2 Q^T for some R2 (take R2 = MQ).\\n\\nWhen you say 'Now consider fine-tuning on...' are you now taking the input and output not to be equal. This is confusing especially since this sentence appears in a paragraph which starts by saying that we are now talking about auto-encoding with input = output.\\n\\nIn what sense is 'W^21 = R2D1V11' a 'task'? I suspect you meant something else here but this sentence is very awkwardly phrased.\\n\\nI gave up trying to understand what this part was actually trying to say.\", \"page_9\": \"How can you have the weight matrices be random orthogonal matrices? Orthogonal matrices are square and the weight matrices don't have to be.\", \"page_11\": \"Merely scaling g very large won't be any good. The 'activity' will propagate, yes, but the units will completely saturate.\", \"page_12\": \"The analysis done in section seems interesting. It does sound similar to work done on Echo State Networks (ESNs) and initializations. Have you compared to that? Sutskever et al. (2013) also report finding empirically that there was a critical range of scale parameters around which learning seemed to work best. These scale parameters were also applied to weight matrices which had their spectral radius initially normalized to 1, which is similar again to the ESN papers.\\n\\nWhat is missing for me from this section is an application of these ideas to real networks. Do these ideas improve on optimization performance? A good comparison would be against a *properly implemented* Glorot-style initialization (see my above comments re. this), and/or something like the sparse initialization used in the HF papers.\"}", "{\"review\": \"Thank you for the careful reviews! We've incorporated the suggestions made into a new revised draft, which will appear on arxiv soon (by Thu Feb 20th ). 
We've split up our responses below by reviewer.\", \"all_reviewers\": \"\", \"formatting_issues\": \"Thank you for pointing these out, we have corrected these issues in the revised draft.\\n\\nLength/lack of space: We realize this paper is a little long. We tried to make our presentation as clear as possible in lieu of adhering strictly to the page limit, since the ICLR call for papers said it would not be too strict about space. However, if it is necessary to get it down to size it will be straightforward to do so by judiciously moving a bunch of information to the supplementary material. The fact that the paper will have large amounts of supplementary material should not on the other hand be interpreted as a weakness - but rather as a strength. Moreover, if the paper is chosen as one of the ones to be published in JMLR, the supplementary material can be brought back in to make a substantive longer paper we feel.\", \"reviewer_a044\": \"-'It should be noted that the authors' learning rule is not the batch version of the standard backpropagation algorithm'\\n\\nWe note, as mentioned in our earlier response, that we do in fact treat standard backpropagation. Our equations may look unfamiliar but they are simply a rearrangement of the terms in standard backpropagation (and the rearrangement uses the linearity of the network).\", \"reviewer_733d\": \"-Special initial conditions: \\u201cThe central issue I have with this paper is that the special initial conditions in which in analysis is done are essentially a partial solution to an SVD problem, which if it were solved fully, would give the optimal weights of the linear network in one step. In particular, the assumptions used require that the weight matrices share singular vectors in a way that basically turns the optimization across all layers into a decoupled set of scalar optimizations. In non-linear networks, where the optimal solution isn't given by an SVD, it is harder to believe that this kind of partial SVD would help as much as it does in the linear case (or perhaps at all).\\n\\nOur main goal is to understand the dynamics of learning in deep linear networks. In linear networks, the SVD is crucial to the solution--and hence it should not be surprising that it plays a critical role in understanding the dynamics as well. Finding a change of variables in which the dynamics decouple is a central contribution of this paper. We also note that the dynamics starting from small random weights are very well described by our solutions (see our Fig. 3)--hence our special initial conditions are not too special, in the sense that they behave similarly to starting from small random weights. We certainly agree that the change of variables we use will not necessarily help decouple the dynamics in nonlinear networks when they are operating in their nonlinear regime, but we believe our results shed light on certain behaviors of nonlinear networks for two reasons. First, even nonlinear networks, when they contain nonlinearities such as tanh and are initialized with small random values, may start off in a roughly linear regime that is well described by our results (see our Fig. 3 for an empirical example). Thus early in training, nonlinear networks may behave similarly, though the quality of the linear approximation is likely to break down as training proceeds and the network enters its nonlinear regime. 
Second, our deep linear networks exhibit a number of features seen in nonlinear nets, such as long plateaus during training with relatively little performance improvement followed by rapid learning; a slowing of training speeds with depth; and faster convergence from pretrained initial conditions than from small random initial conditions. Fully understanding these phenomena in the linear case is an important goal in its own right, and may lead to further avenues of analysis for certain nonlinearities (such as piecewise linear activation functions like rectified linear units).\\n\\nFurthermore, in the newer version of the manuscript, we note that we have analyzed not only pre-trained initial conditions on weights, but also a new class of random orthogonal initial conditions on weights, and shown that they both yield depth independent learning times, in contrast to random Gaussian (Glorot-style) initialization. Thus our newer version yields an initialization method (random orthogonal) that does not involve solving any part of the optimization problem (not that pre-training either involves solving the supervised learning problem). \\n\\n-Learning speed analysis in continuous and discrete time\\nWe've been thinking about your suggestion to bound the error one gets by taking a continuous time approximation and choosing lambda based on when this error can be argued to be negligible. At the moment, we see no clear criterion to use to argue that error will be negligible. Our curvature condition, by contrast, has been widely used in theoretical analyses in the field, and our empirical experiments back up the results we obtain using this method. For this reason we have confidence that our analysis gives an accurate picture of learning speeds in deep linear networks. In particular, the main qualitative result of the analytic theory that pre-trained and random orthogonal initialization, but not random Gaussian, achieve depth independent training times was confirmed by numerical simulations of gradient descent in discrete time. This gives us confidence that our analytic methods describe well what happens in discrete time.\\n\\n-'edge of chaos': Thank you, we have edited the paper to introduce this term before using it, since it may be unfamiliar. We give an intuitive description of it in the introduction.\\n\\n-'the input-output correlation matrix contains all of the information about the dataset used in learning' is indeed only true for linear networks. We do not feel it is necessary to reiterate our use of linear networks here, given that this restriction is clearly stated in our title, abstract, introduction, first sentence of the section, and all equations and derivations leading up to this point in the text.\\n\\n-Page 3: 'column of W^{21}' This should have a bar over it, thank you. We have corrected this in our new draft.\\n\\n-Page 4: 'What kinds of networks did ref [19] analyze?' Thank you for the comment, we have edited the draft to indicate that the analysis was for linear neural networks. ([19] presents an analysis of three layer networks).\\n\\n-Page 6: 'caveat' We've changed the wording in the draft.\\n\\n-Page 7: Clarification of classification task. We've changed the wording to make it clear that details of the task, not just the learning rate, can be found in appendix C.\\n\\n-Page 8: 'Optimization performance is only improved in the short term and maybe medium terms by pre-training vs standard carefully scaled inits. 
In the longer term (which is what really matters in the end, as it dominates most of the run-time) the difference is less significant, if it is present at all.' \\n\\nWhile this may indeed be a feature of nonlinear neural network learning, our goal in this paper is to understand the linear case. We feel this is an important prerequisite for understanding more complex nonlinear networks (we agree that it is hard to imagine saying much about arbitrary nonlinearities, but rectified linear units and other recent piecewise linear activation functions might be amenable to future analysis building on our own). Your comment raises an important point that we have included in our revised draft: Our analysis is likely, to the extent that it does also illuminate the behavior of nonlinear networks, to be most accurate in the early epochs of learning when the nonlinear network is operating in a roughly linear regime (assuming it starts from small weights, with a nonlinearity that is roughly linear around the origin such as tanh). We make no claims that this initial period represents the bottleneck in training deep nonlinear networks. Our goal is to study the linear case, and show which pieces of nonlinear behavior can be explained in this way. We have edited the draft to make it clear that our results, depending on details of initializations and nonlinearities, might describe the early portion of training a nonlinear network; but cannot be expected to apply to later portions of training, when the nonlinear network is in a more nonlinear regime. We also note that linear networks serve as a sort of lower bound on training times. Thus the fact that very deep linear networks, such as the 100 layer networks we train, can require 150 epochs to reach the error obtained in just a few epochs by a pretrained net, means that a nonlinear network of this depth would almost certainly suffer at least as severely, and hence pretraining will provide a speed boost early in training.\\n\\n-'it seems reasonable to suspect that in later stages of optimization that the network may behave in a much less linear fashion as the units move out of their linear regimes, and so the optimization advantages of pre-training will diminish or even disappear' \\nWe definitely agree that as neurons move out of their linear regime our analysis should not be expected to apply. We have edited our draft to emphasize this. We note that, unless random initial conditions somehow speed up late in training relative to pretrained initial conditions (i.e., pretraining actually hurts you once you\\u2019re in the nonlinear regime), the benefits accrued early in training are likely to persist (though they may be small relative to the overall training time). It therefore seems unlikely that the optimization advantage will \\u201cdisappear\\u201d entirely.\\n\\n-'In Appendix D you cite some evidence from other papers. But the results with Hessian-free optimization and other methods are consistent with what is written above (believe me, I know this work very well). Look for example at the results for SGD in Figure 3 of the Chapelle & Erhan paper that you cite. SGD reaches nearly identical KLs on MNIST after 700 dataset passes when you compare random initializations vs RBM pre-trained initializations. The Curves results seem to favor pre-training a small bit with SGD, but the difference is small (maybe 1.5x faster), and would likely become insignificant in the longer term as random-init SGD appears to be catching up before the graph cuts off. 
My experience, along with that of many other people who have studies these methods is that if you run these kinds of experiments longer, the approximate 1.5-2x speed increase with pre-training wanes significantly over time, eventually reaching near parity with well chosen random initialization methods, so that overall not much time is saved, if any.' \\nBased on the published evidence, we believe we are justified in concluding that pretraining confers an optimization advantage. To take your example of Chapelle & Erhan, Fig. 3, the KL divergence of RBM-pretrained nets on MNIST using SGD reaches about 4.8 by epoch 700. The random initialization takes until approximately epoch 1000 to reach this level (in making these comparisons, note that both the x and y axes differ between subplots). Hence pretraining does confer an optimization advantage in this instance of about 300 epochs. Furthermore, the plot you identified (Fig. 3 MNIST) is the one in which this advantage is least evident; their Fig. 1 shows Hessian free methods converging more quickly on both MNIST and Curves datasets (Curves: pretrained initializations reach a KL divergence of 10 on iteration 10 or so, compared to iteration 120 for random initializations. MNIST: pretrained drops below 10 by iteration 30, while for random none have dropped below 10 by iteration 70). The other dataset used in Fig. 3 (Curves) shows a robust optimization advantage due to pretraining (pretrained KL drops below 2 around iteration 400; random drops below 2 around iteration 1400). Although the current evidence available in the literature does not appear to document it, we certainly agree it is possible that this speed advantage would wane if these experiments were run for a longer time. Our analysis is only for linear networks, and hence cannot be expected to accurately apply to networks in their nonlinear regime. Thus we would expect our analysis to be most relevant early in training. Nevertheless, the empirical results we have cited show a very consistent optimization advantage due to pretraining, even in nonlinear nets, that persists and remains significant to a reasonable length of training time (hundred to a thousand iterations). We also emphasize that the magnitude of the speed increase may not always be large enough in practice to warrant the extra trouble (and time) of pretraining a network. Our goal is not to say that pretraining should always be used by practitioners, but rather, to understand why it confers an optimization advantage.\\n Also, we now give a random (orthogonal) initialization scheme that does as well as pre-training. We are fundamentally not interested in advocating one initialization scheme over another; we just want to gain an understanding of *why* an initialization scheme confers an advantage when it does. \\n\\n-Page 8: 'The discussion about what pre-training should do didn't make sense to me in multiple places.'\\nWe're sorry this section was not clear enough--we've rewritten it completely in the new draft. \\n\\n-'For example, previously you said that you were assuming Sigma^11 = I, but I guess this is no longer the case.'\\nThat is correct. In fact our analytic results for the learning dynamics can be generalized to allow Sigma^11 = VDV^T where V are the right singular values of the input-output correlation matrix. Then each mode still completely decouples. 
We excluded this result from our main exposition in the interest of brevity and simplicity, but we've now seen that it's necessary to explain the advantage of pretraining. The details of this will be added to the supplementary appendix.\\n\\n\\n-'You then say that the product of the weights converges to Sigma^31(Sigma^31)^-1. But how can this be true unless N2 >= N1,N3? When N2 < N1,N3, the product of the weight matrices has rank at most N2, while Sigma^31(Sigma^31)^-1 will be full-rank in general.'\\n\\nYes this is true, we were taking the full rank case to simplify the presentation and focus on intuitions. Details of the extension to the rank constrained case are very straightforward (take the best rank N2 approximation).\\n\\n\\n-'You defined a^alpha to be a vector before, but now it is a 'strength'?'\\nThank you for catching the typo, we meant 'a', not 'a^alpha'. 'a' is the scalar strength of the mode.\\n\\n\\n-When you consider taking W21 = R2Q^T where R2 is 'an arbitrary orthogonal matrix', what does this actually mean? Orthogonal matrices are square, and so is Q, so does that make W21 square (i.e. N1 = N2)? And if R2 is truly arbitrary, then it is meaningless, as any orthogonal matrix M can be written as R2 Q^T for some R2 (take R2 = MQ). \\n\\nThank you for the comment, we have abused notation a bit here: R2 will not always be orthogonal (since it may not be square) depending on the size of N2 and N1. What we require for a decoupled initialization is that R2^TR2=I. For square R2, this means it must be orthogonal and, as you rightly point out, arbitrary--W21 might converge to any orthogonal matrix. Interestingly, in this case pretraining reduces simply to establishing the orthogonal initial conditions that we discuss in the subsequent section. For N2 < N1, R2 is size N2 x N1 and R2^TR2 cannot equal the identity. We require that it equal a diagonal matrix D of size N1 that is only nonzero for N2 elements on the diagonal, where it must be equal to 1. I.e., it must ignore some input directions, and learn the rest with unity scaling. With pretraining, the directions with the most input variance will be learned, and the directions with smaller variance will be the ones ignored. Hence, compared to random orthogonal initializations, pretraining biases the initial solution to input directions with high variance. Thus in this case we obtain a slightly modified consistency condition: input-analyzing singular vectors must match input principal components for the first N2 highest input variance directions, and the set of the N2 largest input variance components must be the same set as the N2 largest singular values of the task. Clearly for this setting, R2 is not arbitrary. Finally for N2 > N1, we do require R2^TR2=I, but this does not mean R2R2^T=I. We can write R2 as AB where A is size N2 x N2 orthogonal, and B is a matrix of size N2 x N1 with C = [R; 0] where R is N1 x N1 orthogonal. We have edited the draft to clarify this.\\n\\n-'When you say 'Now consider fine-tuning on...' are you now taking the input and output not to be equal. This is confusing especially since this sentence appears in a paragraph which starts by saying that we are now talking about auto-encoding with input = output.'\\n\\nYes. In the standard pretraining/finetuning paradigm, one first pretrains a network using an unsupervised method, in our case an autoencoder, and subsequently finetunes on a task of interest, in our case by minimizing squared error on a set of input-output pairs. This is the setting we analyze. 
We are only talking about autoencoding for the pretraining period.\\n\\n\\n-Page 9: 'What is the error being plotted in the figure?'\\n\\nAll throughout the paper, our linear networks are trained to minimize squared error as defined in the first section. Details of the input and output data are in the supplementary material.\\n\\n\\n-Page 9: 'That initializing a linear network with part of the optimal SVD solution (which if you take the full solution, will instantly give the optimal error) isn't too surprising. It would be much more interesting to see if these initialization schemes work well in the nonlinear case.'\\n\\nWe believe there's a misunderstanding here--the network is not initialized with the full solution. It is initialized with the results of unsupervised pretraining using an autoencoder. This is an unsupervised initialization that clearly cannot know the full solution to the final supervised task. Indeed, the pretrained initial condition can in theory be very different from what is required to learn the task. It will be relevant to the solution to the degree that our consistency condition is satisfied. We hope that our revision of this section, which was not originally clear enough, will have made this evident.\\n\\n-Page 9: \\u201cThe Glorot initialization scheme uses uniform random numbers, and I think there is a factor of 6 somewhere in there. Gaussians with 1/sqrt(N) is definitely not what they end up recommending. Also, they did their calculations for tanh units, although it should probably work for linear ones. In my experience, getting the scale constants and other details right matters a lot in practice!\\u201d\\n\\nThe recommendation of Glorot et al. is an initialization that allows the variance of the gradient to remain constant with depth. To do this, the variance of the distribution each weight is drawn from, var(W), must equal 1/N. Using uniform random numbers, this requires initializing to U[-sqrt(6)/sqrt(2N), sqrt(6)/sqrt(2N)]. For Gaussian random variables, this requires a variance of 1/sqrt(N) as we have used. Glorot et al. makes no claim that uniform is better than Gaussian. In fact, our empirical results in Fig. 6 are for the uniform initialization (the exact initialization used in Glorot et al.), not a Gaussian initialization--we have edited the text to note this fact. We certainly agree that getting scale constants right matters greatly in practice--this is the effect we're analyzing. Using a uniform distribution without the factor of sqrt(6), or using a Gaussian with an extra factor of sqrt(6) will change the results a great deal.\\n\\n-Page 9: How can you have the weight matrices be random orthogonal matrices? Orthogonal matrices are square and the weight matrices don't have to be. \\n\\nThe generalization of orthogonal matrix to the rectangular case is any matrix whose singular values are all 1. Moreover one can maintain perfect dynamical isometry in products of such matrices in a subspace of dimension as large as the number of neurons in the smallest layer. To do so, one need only have the column space of one layer feed into the row space of the next layer, so that no further dimensions of activity get killed. We have added a note to the document. \\n\\n\\n-Page 11: Merely scaling g very large won't be any good. The 'activity' will propagate, yes, but the units will completely saturate. 
\\n\\nYes we agree, a very large g might accomplish the goal of allowing activity to propagate through a deep network, but might hinder other important goals such as keeping units desaturated. Nowhere do we claim that scaling g very large is optimal, or that activity propagation is the only relevant concern in deep networks. In fact we show that g near 1 guarantees the nice property of dynamical isometry for random scaled orthogonal initializations with the tanh nonlinearity. \\n\\n-Page 12: 'Variance' isn't really what eqn 21 is measuring. That implies deviation from the mean, which doesn't have to be 0 here. The units could all be completely saturating (this is bad), but eqn 21 would still have a non-zero finite value.\", \"we_neglected_to_mention_an_implicit_assumption\": \"if the nonlinearity is odd, and the weights are random orthogonal, then the distribution of activations x_i across neurons i in a layer will be symmetric about the origin, and therefore this distribution will have zero mean. Therefore Eqn 21 is measuring variance. We have added this note to the text.\\n\\n-Page 12: The analysis done in section seems interesting. It does sound similar to work done on Echo State Networks (ESNs) and initializations. Have you compared to that? \\n\\nWe're glad you found the analysis interesting, yes, this is similar to echo state networks among others, which is why we used the term 'edge of chaos, in analogy with terminology for recurrent networks' (p12). In ESNs, however, the results are for a fixed recurrent matrix that is often not altered during learning. More generally, this analysis is essentially the method of mean field theory from statistical physics, which predates the advent of echo-state networks.\\n\\n -What is missing for me from this section is an application of these ideas to real networks. Do these ideas improve on optimization performance? \\n\\nAs you can imagine, we are very interested in this question! However we think that to do a proper, careful comparison is too much for this already long submission.\", \"reviewer_9c88\": \"Thank you for the kind comments, we are very encouraged that you found the analysis insightful!\\n\\n-Ambiguity of 'learning time': We agree, this is an important point. We have changed the text to indicate that we are considering only iterations and not computation time.\\n\\n- p 9: 'mode strength was chosen to be small (u=0.001)' because ... ? \\n\\nA small initial mode strength is an accurate approximation for learning that starts from small random initial conditions on the weights. If a large initial mode strength is used, this corresponds to embedding knowledge about the task before learning starts--in the extreme, if the initial mode strengths are chosen to be the corresponding singular values, then the input-output map is initialized at its optimum and no learning is necessary. Greedy layer wise pretraining on MNIST, because MNIST satisfies the consistency condition, provides an unsupervised way to initialize mode strengths to greater values than can be achieved using small random weights. This correspondingly improves learning speeds. 
We included this note as an attempt to explain the difference between the two plots--although both start on the decoupled manifold, pretrained solutions start with a larger mode strength and hence learn much more quickly.\\n\\nWe've corrected the typos below in the new draft, thanks for catching them.\\n-'We show (Fig 6A, left, red curve)' -> 'We empirically show (\\u2026'\\n- Figure 6: 'A Histograms of the singular values' should be 'B Histograms of the singular values':\\n- p 11: 'netoworks' -> 'networks' \\n- Discussion: 'propery' -> 'property' \\n\\n- Section 4: 'and phi(x) is any nonlinearity that saturates as ...' Any nonlinearity, really? This doesn't sound right, since any nonlinearity could incorporate some scaling that could override the role of gain factor g. You should more specifically characterize the properties of the nonlinearity, and state here which one(s) you will actually consider. \\n\\nAt the level of generality that we first introduce phi, it can indeed be any nonlinearity. We say in general that there exists a g_c such that if the gain g < g_c activity decays, and if g > g_c activity does not decay (and does not blow up due to saturation). g_c of course depends on the nature of the nonlinearity (and could be 0 or infinity). We have not specified how in general it depends on the nonlinearity (one has to solve the fixed point equation in the appendix). When the nonlinearity is tanh (or any monotonically increasing nonlinearity with slope 1 at the origin), g_c = 1. \\n\\n-Appendix D.2: 'An analysis of this effect in deep linear networks' I believe you meant to write 'in deep nonlinear networks' since linear networks would all converge to equivalent solutions (at different paces) and thus could not exhibit any difference in generalization.\\n\\nWe did actually mean deep linear networks--by 'this effect' we meant the generalization advantage due to pretraining, which we believe can exist in deep linear networks, though it may require reformulating the learning problem. The ref [20] we cite provides an example of how generalization performance can be analyzed in deep linear networks (there a three layer network), and our suggestion is that it may be possible to build on this work to analyze the case of using unlabeled data to do unsupervised pretraining, followed by supervised fine tuning. Their analysis uses overrealizeable input-output maps and adds measurement noise to the training data. Whether analysis based on these assumptions would shed light on what happens in typical deep nonlinear networks is unclear to us, but it seems worth pursuing. We agree that, if training is run to completion, all linear nets would converge to the same solution, and thus some early stopping criterion might be necessary. Working through these issues is an interesting goal for future work. Interestingly - our work reveals analytically the effects of early stopping - it kills the components of the small singular values/vectors of the input-output correlation matrix in the product of weights across layers.\"}", "{\"title\": \"review of Exact solutions to the nonlinear dynamics of learning in deep linear neural networks\", \"review\": [\"Brief summary of the paper:\", \"This work is mostly a mathematical analysis of the dynamics of gradient descent learning in *linear* multi-layer networks, which surprisingly displays a complex non-linear dynamic. 
It delivers analytic results and novel insights regarding learning speed, how it is affected by depth, and regarding optimal initialization regimes, drawing connections to the benefits of pre-training strategies. In particular it shows that orthogonal initialization is a better strategy than the appropriately scaled random Gaussian initialization proposed in Glorot & Bengio AISTATS 2010. Mathematical results are empirically validated with linear networks trained on MNIST. Some empirical results using non-linear tanh networks are also provided, convincingly showing that some of the insights gained from the analysis of linear networks can apply to the non-linear case.\", \"Quality assessment:\", \"This is a very well written paper. I believe it offers a novel, thorough, and enlightening analysis of the dynamics of learning in linear networks. This is a worthy enterprise, and succeeds in uncovering novel insights that will likely also have an impact on practical approaches to training non-linear deep networks. It also paves the way to a more formal analysis of the dynamics of learning in non-linear deep nets.\", \"I found the analysis of initialization strategies in sections 3 and 4 particularly interesting due to their practical relevance.\", \"My only concern, since this paper literally offers a lot, has to do with the ICLR policy on page limits (and text squeezing), that I am unsure about. To be checked.\", \"Pros and Cons:\", \"Thorough mathematical analysis, conveying valuable novel insights on the dynamics of learning in deep linear networks\", \"Evidence that these insights can have practical relevance for improving approaches to training deep non-linear networks\", \"TODO: Check ICLR policy on page limits??\", \"Detailed suggestions for improvements:\", \"Top of Figure 3 hides part of the above paragraph (a part that seemed important for understanding!)\", \"Bottom of caption of figure 3 too close to main text (almost overlaps)\", \"I find the term 'learning time' ambiguous or ill-chosen, as you do not take into account the fact that deeper networks usually involve more computation per epoch. I suggest you clearlry define it as meaning number of training iterations (epochs) upon first introducing it.\", \"Top of page 5: there is an ill-placed newline between 'logarithmically spaced' and the rest of the sentence.\", \"p 9:\", \"'mode strength was chosen to be small (u=0.001)' ... because ... ?\", \"'We show (Fig 6A, left, red curve)' -> 'We empirically show (...'\", \"Figure 6: 'A Histograms of the singular values' should be 'B Histograms of the singular values'\", \"p 11:\", \"'netoworks' -> 'networks'\", \"Section 4:\", \"'and phi(x) is any nonlinearity that saturates as ...' Any nonlinearity, really? This doesn't sound right, since any nonlinearity could incorporate some scaling that could override the role of gain factor g. You should more specifically characterize the properties of the nonlinearity, and state here which one(s) you will actually consider.\", \"Discussion:\", \"'propery' -> 'property'\", \"Appendix D.2:\", \"'An analysis of this effect in deep linear networks' I believe you meant to write 'in deep nonlinear networks' since linear networks would all converge to equivalent solutions (at different paces) and thus could not exhibit any difference in generalization.\"]}", "{\"review\": \"I'm not an official reviewer of this paper but I want to state for the record that I think it should be accepted. 
In the interest of full disclosure, I should make it clear that I'm a former co-author of the lead author of this paper, but I don't think that has compromised my objectivity in this case. I also reviewed an earlier version of this work for NIPS 2013 without knowledge of its authorship and I also recommended that it be accepted then.\n\nI don't intend for this comment to be a complete review, but I do want to argue against one of the main criticisms of this paper that I've seen made by the official reviewers, which is that deep linear models are uninteresting and that the algorithms studied are not used in practice.\n\nLinear networks formed by composing matrix multiplies are an interesting model class in my opinion. They allow us to study some of the properties of deep networks without needing to commit to analyzing the most difficult cases.\n\nOther papers have previously been published at leading conferences about using deep linear models to study deep learning. See for example this ICML 2012 paper from Geoffrey Hinton's group:\", \"http\": \"//www.cs.toronto.edu/~tang/papers/deep_mfa.pdf\nThis paper chooses to study deep mixtures of factor analyzers (DMFAs) 'even though a DMFA can be converted to an equivalent shallow MFA by multiplying together the factor loading matrices at different levels'.\n\nWhile it is true that the state of the art deep nets are not linear, so technically the exact method studied here is not used in practice, I think that the current state of the art deep nets are similar enough for this case to be interesting. For a wide variety of problems, rectified linear hidden units feeding into a linear classifier such as a softmax or SVM are the state of the art. I myself have also had a lot of success with my 'maxout' units, which are also locally linear. So quite a lot of the current neural net literature deals with networks whose functions are linear on domains of non-negligible size.\n\nIf you train a deep rectifier net and monitor how many hidden units change which linear piece they are on for each gradient step, this 'piece change rate' is quite small. I ran this experiment on a state of the art feedforward net on MNIST (trained with dropout and rectified linear units). I found that by the end of the first epoch, the piece change rate is less than 1%, and by the end of training it has fallen to less than 0.04%. This suggests that on short timescales it is actually quite reasonable to model training of state of the art deep nets as being linear. I suspect the fidelity of such a model breaks down over longer time scales, but if our model of short term learning dynamics allows us to improve the short term behavior of our learning algorithms, I suspect improving the short term behavior at every time step will result in an improvement in the long term behavior, even if we are not able to specifically predict that behavior accurately.\n\nFor these reasons, I suspect that insights gained on the model family that this paper studies will lead to improvements in training of non-linear deep nets.\"}", "{\"title\": \"review of Exact solutions to the nonlinear dynamics of learning in deep linear neural networks\", \"review\": \"In this paper, the authors try to analyze theoretically the dynamics of learning in deep neural networks. They consider the quite restricted case of deep linear neural networks. 
Such linear deep networks are not of practical interest, because the mapping y = Wx between the input vector x and output vector y in them can always be realized using a single weight matrix W which is the product of weight matrices of different layers, and therefore adding hidden layers does not improve their performance.\\n\\nAs the performance criterion the authors use the standard mean-square error between the output vectors of the networks and their target values (desired responses), which is in practice approximated by the respective squared error over all the training pairs. The gradient descent method applied to this criterion provides the batch learning rule (1) used in the paper. It should be noted that the authors' learning rule is not the batch version of the standard backpropagation algorithm. This is because already in the case of a single hidden layer it depends on the two weight matrices between the input and hidden layer, and between the hidden and output layer.\\n\\nDespite the linearity of the studied deep networks, this learning rule has nonlinear dynamics on weights that change with the addition of each new hidden layer. The authors show by simplifying further their analyses that such deep linear networks exhibit nonlinear learning phenomena similar to those seen in simulations of nonlinear networks, including long plateaus followed by rapid transitions to lower error solutions, and faster convergence from greedy unsupervised pretraining initial conditions than from random initialization.\\n\\nIt is good that the authors know and refer to the paper by Baldi and Hornik from the year 1989, reference [19]. In this paper Baldi and Hornik showed that in a linear feedforward (multilayer perceptron) network with one hidden layer the optimal solution minimizing the mean-square error is given by PCA, more accurately by the subspace spanned by the principal eigenvectors of the correlation matrix of the input data, coincides with the data covariance matrix for zero mean data. However, Baldi and Hornik did not analyze the learning dynamics.\\n\\nThe paper suffers from lack of space. It extends to 10th page and with supplementary material to 15th page. Obviously because of space limitations figures are too small and there is no Section 1. Introduction, only plain text after the abstract.\", \"pros_and_cons\": [\"--------------\", \"The analysis on learning dynamics is novel and interesting. I am not aware of this kind of analyses on deep networks before this paper.\", \"With their mathematical analyses, the authors are able to explain some phenomena observed experimentally in learning deep networks.\", \"The deep network analyzed is linear, with no practical interest.\", \"The learning algorithm(s) that the authors study are not used in practice.\"]}", "{\"review\": \"Thanks for the review, we have uploaded a new draft with expanded material on a new property, called dynamic isometry, that allows gradients to propagate from input to output in a deep linear net. We have added a comparison to and analysis of other random initialization schemes in both linear and nonlinear nets.\\n\\nAlso we'd like to point out that we do in fact study standard backpropagation. In the linear case, backpropagation gives exactly the equations used in the paper. Our form is simply a rearrangement of terms from what one usually sees (and the rearrangement relies on the linearity of the network). 
For example, the hidden unit activity h in standard back propagation can be written as h=W^{21}x in the linear case. The error is delta=(y-hat y), and in the linear case, hat y=W^{32}W^{21}x. Hence the change in W^{32} = delta h^T = (y-W^{32}W^{21}x)x^T(W^{21})^T = (yx^T-W^{32}W^{21}xx^T)(W^{21})^T, which is our expression. A similar argument will yield our expression for the change in W^{21}. See our citation [16] for a similar derivation.\"}" ] }
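For readers skimming this record, the scaled random orthogonal initialization debated above can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' code: the function name, the layer sizes, and the gain value in the example are assumptions of this sketch, and only the idea (every singular value of each weight matrix set to a common gain g) comes from the discussion above.

```python
import numpy as np

def scaled_orthogonal(n_out, n_in, gain=1.0, rng=None):
    """Weight matrix of shape (n_out, n_in) whose singular values all equal `gain`."""
    rng = np.random.default_rng() if rng is None else rng
    a = rng.standard_normal((n_out, n_in))
    if n_out >= n_in:
        q, r = np.linalg.qr(a)              # q has orthonormal columns
        q = q * np.sign(np.diag(r))         # sign fix so the result is uniformly distributed
    else:
        q, r = np.linalg.qr(a.T)
        q = (q * np.sign(np.diag(r))).T     # orthonormal rows after transposing back
    return gain * q

# Illustrative use: a 30-layer stack of square layers with gain near the critical value 1,
# the regime the responses above associate with dynamical isometry for tanh networks.
weights = [scaled_orthogonal(100, 100, gain=1.05) for _ in range(30)]
```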
AY3Hz-ujSEXUl
Deep Belief Networks for Image Denoising
[ "Mohammad Ali Keyvanrad", "mohammad pezeshki", "Mohammad Mehdi Homayounpour" ]
Deep Belief Networks, which are hierarchical generative models, are effective tools for feature representation and extraction. Furthermore, DBNs can be used in numerous aspects of Machine Learning such as image denoising. In this paper, we propose a novel method for image denoising which relies on the DBNs' ability in feature representation. This work is based upon learning the noise behavior. Generally, features which are extracted using DBNs are presented as the values of the last layer nodes. We train a DBN in such a way that the network totally distinguishes between nodes presenting noise and nodes presenting image content in the last layer of the DBN, i.e. the nodes in the last layer of the trained DBN are divided into two distinct groups of nodes. After detecting the nodes which are presenting the noise, we are able to make the noise nodes inactive and reconstruct a noiseless image. In Section 4 we explore the results of applying this method on the MNIST dataset of handwritten digits corrupted with additive white Gaussian noise (AWGN). A reduction of 65.9% in average mean square error (MSE) was achieved when the proposed method was used for the reconstruction of the noisy images.
[ "nodes", "deep belief networks", "image", "dbns", "feature representation", "dbn", "noise", "hierarchical generative models", "effective tools", "extraction" ]
submitted, no decision
https://openreview.net/pdf?id=AY3Hz-ujSEXUl
https://openreview.net/forum?id=AY3Hz-ujSEXUl
ICLR.cc/2014/conference
2014
{ "note_id": [ "fn_Dn2Z1DYB65", "x1X3ZIGuLvwtO", "h4624nkGACt8T" ], "note_type": [ "review", "review", "review" ], "note_created": [ 1390784040000, 1390695660000, 1391486160000 ], "note_signatures": [ [ "anonymous reviewer 4d9c" ], [ "anonymous reviewer a9cd" ], [ "anonymous reviewer f7a0" ] ], "structured_content_str": [ "{\"title\": \"review of Deep Belief Networks for Image Denoising\", \"review\": \"This paper presents an approach for image denoising, based on deep belief networks (DBN). The idea is to train a DBN on a training set consisting of both noisy and non-noisy images. Then, the activity of top-hidden-layer units is compared between the noisy and non-noisy images, in order to identify units which are mostly involved in the modelling of noisy images. Denoising is then performed by inputting a noisy image, inferring the value of the top hidden units, fixing the 'noise' hidden units to their neutral value (i.e. their average value on the clean images) and then regenerating the input image. Experiments show that this approach has some success in performing denoising.\n\nThe main weakness of this paper is that no comparisons are made with other good denoising baselines. I would have at least expected a comparison with the denoising autoencoder work cited in this paper [6]. Also, denoising experiments on MNIST are not particularly compelling and too simplistic.\", \"pros\": [\"The presented idea is simple.\", \"Results seem OK.\"], \"cons\": [\"The results are too preliminary, as no comparisons are made with a good denoising baseline (including the deep learning work on denoising, cited in this paper).\", \"Writing could be improved.\", \"Other comment\", \"How is denoising performed in the 'reconstruction without eliminating any node'? Specifically, is this network trained on both noisy and non-noisy images, or only on clean images?\"]}", "{\"title\": \"review of Deep Belief Networks for Image Denoising\", \"review\": \"This paper presents a simple method for denoising noisy mnist digits with a deep belief network. The method looks at the relative activities of the hidden units when the input is a normal vs a noisy image. After obtaining these statistics, at test time, noisy nodes or nodes which are affected by noise are removed, leading to better reconstructions.\", \"the_authors_are_recommended_to_reference_the_paper\": \"Deep Networks for robust visual recognition, Tang&Eliasmith icml 2010, where the tasks are similar but with a more elaborate algorithm and experimental results.\n\nWhile the task is interesting and potentially very important, the method proposed in this paper is extremely simple and is not shown to work for difficult noise/occlusion cases. For example, simple reconstruction with DBN is already very good for the mnist digits.\n\nPossible improvements include trying more difficult noise and occlusions; looking at how denoising can help reduce recognition error; and coming up with a more principled way of determining which nodes are affected. Since a DBN is a distributed network, it is likely that all hidden nodes would be affected somewhat, and each image would lead to a different activation for a particular hidden node, so simply looking at relative activations seems very ad hoc. There are several papers related to denoising using DBNs/DBMs that can be found with a simple Google search. 
The authors should compare/contrast, cite and test on similar experiments with those other papers.\"}", "{\"title\": \"review of Deep Belief Networks for Image Denoising\", \"review\": \"This work presents a method for denoising images using a DBN by identifying feature nodes that are associated with noise. This is done by measuring the mean differences in activations when presented with noisy versus corresponding ground-truth clean images in the training set. Test images are denoised by performing inference, reseting the 'noise' feature nodes to their average values across the clean images, and reconstructing from the resulting representation. The method is tested on MNIST with additive Gaussian noise.\\n\\nThe method is simple and appealing; however, it is evaluated only on one very limited test case, and is under-analyzed. How does this method perform for other types of input or noise? Also, although the authors review some prior work on the subject, they do not explicitly compare their method against any other algorithm.\\n\\nA larger question I have is whether it is necessary to require clean/noisy inputs be associated in pairs, or if this association could be weakened or removed. This would be a major advantage of this method if it were the case. That is, is it enough to have a pool of known clean images and a pool of known noisy images, with no elementwise correspondence between the two? If the clean data underlying the noisy data generation is the same between these two populations, then the difference in mean activations should be unchanged. But it seems to me that these means might also not change much if the two sets of underlying clean images are distinct but from the same general population, e.g. if half of training images are clean, and the other half is used to generate noisy ones.\\n\\nThe paper is pretty clearly written, though I think the overview of RBMs and DBNs takes up too much space (2 pages); this seems it could be condensed, and replaced with more experiments and details on the method presented. I also would have liked to see more illustrating the method's internals. Why was the threshold of 0.9 chosen for identifying a node as a 'noise' feature, for example? Some plots/histograms of the activations and 'relative activity' measurement could have been useful here.\", \"pros\": [\"Natural and simple method with limited demonstrated effectiveness.\"], \"cons\": [\"Applied to only one limited setting\", \"No comparisons to other methods\", \"Could have more measurements to illustrate the method's internals\"]}" ] }
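To make the procedure described in these reviews concrete, here is a hedged sketch of the noise-unit selection and reconstruction steps. The `encode`/`decode` functions, the exact definition of 'relative activity' (here a max-normalized mean activation difference), and the array shapes are assumptions of this sketch rather than details taken from the paper; only the overall recipe (compare top-layer activations on paired clean/noisy images, threshold at 0.9, reset the flagged units to their clean-image averages, then reconstruct) follows the reviewers' description.

```python
import numpy as np

def find_noise_units(encode, clean_images, noisy_images, threshold=0.9):
    """Flag top-layer units whose activity is mostly driven by the added noise."""
    h_clean = encode(clean_images)                 # (N, top_layer_size)
    h_noisy = encode(noisy_images)                 # paired noisy versions of the same images
    diff = np.abs(h_noisy - h_clean).mean(axis=0)  # mean activation change per unit
    relative = diff / (diff.max() + 1e-12)         # assumed 'relative activity' measure in [0, 1]
    noise_units = relative > threshold
    return noise_units, h_clean.mean(axis=0)       # also return clean-image average activations

def denoise(encode, decode, noisy_image, noise_units, clean_mean):
    """Reset the flagged units to their clean-image averages and reconstruct."""
    h = encode(noisy_image[None, :]).copy()
    h[:, noise_units] = clean_mean[noise_units]
    return decode(h)
```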
L80PLIixPIXTH
How to Construct Deep Recurrent Neural Networks
[ "Razvan Pascanu", "Caglar Gulcehre", "KyungHyun Cho", "Yoshua Bengio" ]
In this paper, we propose a novel way to extend a recurrent neural network (RNN) to a deep RNN. We start by arguing that the concept of the depth in an RNN is not as clear as it is in feedforward neural networks. By carefully analyzing and understanding the architecture of an RNN, we define three points which may be made deeper; (1) input-to-hidden function, (2) hidden-to-hidden transition and (3) hidden-to-output function. This can be considered in addition to stacking multiple recurrent layers proposed earlier by Schmidhuber (1992). Based on this observation, we propose two novel architectures of a deep RNN and provide an alternative interpretation of these deep RNN's using a novel framework based on neural operators. The proposed deep RNN's are empirically evaluated on the tasks of polyphonic music prediction and language modeling. The experimental result supports our claim that the proposed deep RNN's benefit from the depth and outperform the conventional, shallow RNN.
[ "deep rnn", "rnn", "depth", "function", "novel way", "recurrent neural network", "concept", "clear", "feedforward neural networks" ]
submitted, no decision
https://openreview.net/pdf?id=L80PLIixPIXTH
https://openreview.net/forum?id=L80PLIixPIXTH
ICLR.cc/2014/conference
2014
{ "note_id": [ "00EQUJ_ix30nA", "aZX4ZRSflnZoq", "oz9eoHf8xCFu1", "M_-3MlaAkti8X", "RJarRzwV0bJq4", "ysYmCPhSlgn4_", "FF_2O4ik2IFc1", "skPGs-hR1dkQN", "tPUcVC29cHtdk", "7FGDPbn3yeFBx", "3bFcksTR-gbyq", "-Hns7xQADuKaP", "CQS2C_yz2aQof", "M-KW-SxV2Q5Nd", "QEaoQyvnoZJq-", "Jv_Eir6jMQJ9D" ], "note_type": [ "comment", "comment", "review", "review", "review", "review", "comment", "review", "comment", "comment", "review", "review", "review", "review", "comment", "review" ], "note_created": [ 1391738340000, 1392710040000, 1391876880000, 1391640600000, 1391052720000, 1394815440000, 1391934420000, 1391723820000, 1391899080000, 1388926740000, 1391052720000, 1388844480000, 1389083160000, 1392233760000, 1391823840000, 1392727380000 ], "note_signatures": [ [ "KyungHyun Cho" ], [ "KyungHyun Cho" ], [ "anonymous reviewer 1f07" ], [ "anonymous reviewer 69d2" ], [ "KyungHyun Cho" ], [ "Michiel hermans" ], [ "KyungHyun Cho" ], [ "anonymous reviewer bfb0" ], [ "KyungHyun Cho" ], [ "KyungHyun Cho" ], [ "KyungHyun Cho" ], [ "David Krueger" ], [ "KyungHyun Cho" ], [ "Justin Bayer" ], [ "KyungHyun Cho" ], [ "KyungHyun Cho" ] ], "structured_content_str": [ "{\"reply\": \"Dear Reviewer (69d2),\\n\\nWe thank you for the thorough and insightful comment. Allow us to respond to some of your comments below.\\n\\n'Interestingly, the effect of using dropout, maxout and other generic neural network tricks seems to be much greater than the effect of changing the architecture.'\\n\\nIt is not trivial to apply some of the successful tricks/methods from the recent research on feedforward neural networks. For instance, rectifiers and maxout which have helped convolutional neural networks achieve impressive recognition performance on computer vision tasks are not suitable to be used for hidden states of conventional shallow recurrent neural networks, due to their non-saturating property. We expect that our approach of having a deeper hidden-to-hidden transition will let many of those recent advances to be explored more actively in the context of recurrent neural networks.\\n\\n'the standard shallow RNN with a small hidden layer appears to perform much better than similar networks elsewhere in the literature, e.g. by Tomas Mikolov'\\n\\nAs we have described in the paper, we have done an extensive hyperparameter search and also used some algorithms not used by Tomas Mikolov (e.g., weight noise). \\nThese procedures were equally used with deep RNNs, and the purpose of this extensive search was to ensure that the gain deep RNNs show in our paper is significant and that we do not artificially lower the baseline.\\n\\n'A few more experimental details would help to clarify this.'\\n\\nWe agree with you and will make available the sets of hyperparameters we used soon. Also, the base code for all the experiments is already available at https://github.com/pascanur/GroundHog/.\\n\\n'the paper doesn't really propose 'a novel way' to extend RNNs so much as explore several possible ways'\\n\\nAlthough we find some of the proposed approaches (especially, deep hidden-to-hidden transition) novel, we will consider rephrasing some sentences to make it clear that these proposed approaches are some of many possible ways to build deep RNNs.\\n\\n''modelling variable-length sequences' is a bit too broad to be a 'task'... maybe 'evaluated on two sequence modelling tasks'?'\\n\\nThanks for your suggestion. We will try to rephrase it to be more specific.\\n\\n'The 'In Sec. 
2' paragraph is more confusing than helpful'\\n\\nWe wanted to give an outline of the remaining of the paper, but if the paragraph sounds confusing, we will try to rephrase to be more clear and concise.\\n\\n'Eq. (3): do the denominators (1/N and 1/T_n) really belong in the loss? I'd say not... otherwise training would be biased towards shorter sequences and smaller datasets. The dataset factor can be incorporated into the learning rate, but I don't think you want to normalise by sequence length.'\\n\\nYou're correct about this. Thanks for pointing this out. In our experiments, we did not divide by the length of sequence. We will fix the equation.\\n\\n'I found the repetition of the h_t and y_t terms in eqs (4) and (5) confusing'\\n\\nIf you could explain a bit more about what confuses you, we will fix it accordingly.\\n\\n'How many hidden layers did the 'sRNN' have?\\n\\nWe always used 2 levels of recurrent layers for the sRNN's. We will clarify this in the updated version later.\\n\\n''Practical Variational Inference for Neural Networks' would be a better reference for adaptive/fixed weight noise.'\\n\\nThank you for pointing this out. We will update the citation.\"}", "{\"reply\": \"Thank you for your comments, Justin!\\n\\n'I think this paper is important especially because of the new deep transitions.'\\n\\nWe also believe that RNNs will be found to be useful with these deep transitions on many non-trivial sequential data in the future. Thanks for agreeing with us.\\n\\n'the amount of layers information travels through scales faster for a DOT-RNN than for an sRNN.'\\n\\nYou're correct. Although we have only demonstrated empirically the design of deep transition using a single intermediary layer, it is possible to easily extend it to have multiple intermediary layers without much computational burden. This is in contrast to the sRNN, as the states of the intermediary layers do not need to be carried across time. We will try to make it more clear in a later revision.\\n\\n'I applaud the availability of the source code'\\n\\nThank you. We hope our code will help researchers to start working on these kinds of deep RNNs. Especially, as you pointed out, we believe more research into learning algorithms for these models is needed.\\n\\nThanks again!\"}", "{\"title\": \"review of How to Construct Deep Recurrent Neural Networks\", \"review\": \"This paper proposes three techniques to make recurrent neural networks (RNN) 'deeper'. In addition to an inherent deepness over time, the authors propose to consider additional hidden layers for all the transition matrices, i.e. input-to-hidden, hidden-to-hidden and hidden-to-output.\\n\\nTo implement those ideas, neural operators are introduced:\\n - predicting the next hidden state in function of the previous one and the input\\n - predicting the output given the current hidden state\\n\\nthose operators are implemented with multi-layer neural networks, in principle of any arbitrary depth. However, only one-hidden layer operator networks are considered. I think that it would be really interesting to consider even deeper operators. Do we achieve additional improvements ? Is it possible to train these networks with back-prop (through time) ?\\n\\nIn my opinion, you do not consider deep input-to-hidden RNN. In Figure 2b you propagate h_t-1 and x_t through the SAME hidden layer to h_t. I propose to use separate hidden layers for these tasks, i.e. 
one deep operator to map the input to the hidden state, and an INDEPENDENT deep operator to map from one hidden to the next hidden state. The input-to-hidden operator could be quite deep since you won't have the problem of vanishing gradients over time.\\n\\nYou write that you initialize the sRNN and DOTS-RNN with the weights of a normal RNN to simulate layer-wise pretraining. How do you initialize the weights of the additional hidden layers ? Is this really comparable to layer-wise pre-training of deep MLPs ? Layer-wise pre-training is unsupervised while you need targets. Maybe we should say that you initialize the weights so that the network is easier to train ? Can you elaborate how important this initialization is ?\\nDo the deep RNN still converge when starting from random weights ?\", \"language_modeling\": \"- please add the results of Graves and Mikolov to Table 3\\n - please explain 'dynamic evaluation'\\n\\nIn general, it would be helpful that the authors provide more details on the architectures which do NOT work very well. This would allow the reader to assess the importance of the different tricks, e.g. short cut connection, initialization scheme, etc. By these means, one could more easily reproduce the experiments and apply the same ideas to other tasks.\\n\\nThe authors should discuss the relation to other works in more detail, in particular Pinheiro and Collobert (2013)\\n\\nOverall, this paper presents interesting ideas and opens directions for further research.\"}", "{\"title\": \"review of How to Construct Deep Recurrent Neural Networks\", \"review\": \"The paper explores a variety of ways of making RNNs 'deep'. Some of these techniques, such as stacking multiple RNN hidden layers on top of each other, and adding additional feedforward layers between the input and hidden layers, or between hidden and output, have been explored in the literature before. Others, such as adding extra layers between the hidden to hidden transitions, are more unusual. In its simplest form this would be more or less equivalent to inserting a 'null input' at every second step in the input sequences. But as the authors point out, this would exacerbate vanishing gradients, so they add skip connections too.\\n\\nAs with all neural networks, the space of RNN topologies is vast. This paper doesn't cover a huge amount of new territory, but it does do a good job of exploring a few promising variants. \\n\\nThe experimental results don't suggest a massive benefit from deeper architectures, or a consistent advantage of one over the other, but they do show that adding depth can make a difference. Interestingly, the effect of using dropout, maxout and other generic neural network tricks seems to be much greater than the effect of changing the architecture.\\n\\nThe results on the Penn Treebank data are very good. But it isn't entirely clear why they're so good. In particular, the standard shallow RNN with a small hidden layer appears to perform much better than similar networks elsewhere in the literature, e.g. by Tomas Mikolov. Is this a consequence of the training method, the regularisation, or something else? A few more experimental details would help to clarify this.\", \"small_comments\": \"\", \"abstract\": [\"the paper doesn't really propose 'a novel way' to extend RNNs so much as explore several possible ways.\", \"'modelling variable-length sequences' is a bit too broad to be a 'task'... maybe 'evaluated on two sequence modelling tasks'?\", \"The 'In Sec. 
2' paragraph is more confusing than helpful\", \"Eq. (3): do the denominators (1/N and 1/T_n) really belong in the loss? I'd say not... otherwise training would be biased towards shorter sequences and smaller datasets. The dataset factor can be incorporated into the learning rate, but I don't think you want to normalise by sequence length.\", \"I found the repetition of the h_t and y_t terms in eqs (4) and (5) confusing\", \"How many hidden layers did the 'sRNN' have? I couldn't find this.\", \"'Practical Variational Inference for Neural Networks' would be a better reference for adaptive/fixed weight noise.\"]}", "{\"review\": \"We have just submitted a new version (v3) of the paper with some minor revisions and additional experimental results. The new version will be available from arXiv on 31 Jan at 1 am GMT.\"}", "{\"review\": [\"Nice paper. Good to see that the deep RNN thread is being continued. A few thoughts, questions and ideas (I\\u2019m afraid it became a bit more lengthy than initially anticipated):\", \"I only have one true criticism on the content of the paper. The biggest issue I have with the results on the character-wise prediction task (the only task I have a little experience in myself) is that the performance seems to correlate rather strongly with the number of trainable parameters in each network. I fear this might in the end play a more important role than the actual network architecture. What I found when I was working on a highly similar task is that the number of parameters is the single biggest determining factor in model performance, simply because of (much needed) raw storage power. Network architecture itself only provides a modest improvement. I would find the results in this part far more convincing if all architectures were given the same number of trainable parameters.\", \"Concerning deep transitions: the authors rightfully state that deep RNNs can be deep both in their transition and in a more spatial sense. When I was working with deep (stacked) RNNs, in the end the reason I didn\\u2019t try deep transitions (by feeding top-layer input back into the bottom layer at the next time step) was the fact that it slowed down the simulations too much (harder to exploit parallel computing architectures). I wonder if the authors could give some comments on how fast the simulations are for each model.\", \"The discussion at the end of 3.2.4. is interesting. Stacked RNNs could be stated to have a `deep\\u2019 transition, but only from lower to higher layers. This means that the higher layers can represent complex features of a summarised history of the input. In this sense, they should also be able to model complicated state transitions, as at each time step new information needs to travel through all the layers. What they are *not* able to do is have deep, hierarchical transitions between the high-level representations themselves. The question then arises whether this is *necessary*, as a high level representation should already consist of a conceptualised representation of the input information, in which properties of the input, including complicated transitions, should already be easier to perform.\", \"After reading your paper, I believe the best way to systematically investigate deep recurrent topologies is as follows: represent each time step as a vertical hierarchy with a fixed number of layers, and only study the different ways the layers are connected in time (over a state transition). 
In this way all topologies discussed in the paper, and more, can be represented and studied in a systematic manner. By keeping the number of layers fixed you also eliminate any performance difference caused by the data running through more or fewer layers. I hope this is more or less clear, as I cannot post a visual representation of what I mean.\"]}", "{\"reply\": \"Dear Reviewer (1f07),\\n\\nWe thank you for the thorough and insightful comment. Allow us to respond to some of your comments below.\\n\\n'Do we achieve additional improvements ? Is it possible to train these networks with back-prop (through time) ?'\\n\\nThat is a good point. Our proposed construction does not limit the number of intermediary layers, which should be more thoroughly evaluated with larger, more complex datasets in the future. For the second question, yes, it is possible to train any type of the proposed deep RNNs with BPTT without any modification.\\n\\n'In my opinion, you do not consider deep input-to-hidden RNN.'\\n\\nYou are correct. This is partly because our focus was more on the temporal aspect of recurrent neural networks and partly because there has already been some work such as (Chen & Deng, 2013) (cited in the paper) showing benefits of having a deep input-to-hidden layer.\\n\\n'The input-to-hidden operator could be quite deep since you won't have the problem of vanishing gradients over time.'\\n\\nThank you for your suggestion. We will in the future need to experiment with datasets that exhibit a strongly non-temporal structure and may benefit from a deep input-to-hidden function.\\n\\n'How do you initialize the weights of the additional hidden layers ? Is this really comparable to layer-wise pre-training of deep MLPs ? Layer-wise pre-training is unsupervised while you need targets. Maybe we should say that you initialize the weights so that the network is easier to train ? Can you elaborate how important this initialization is ?'\\n\\nWe initialized the weights such that any 'common' parameters between, for instance, the DT-RNN and the DOT-RNN are pretrained as a DT-RNN, albeit in a rather supervised fashion. We have mentioned the potential connection to layer-wise pretraining, in the sense that the next-step-prediction tasks we considered may also be seen as an unsupervised learning task. \\n\\nIn our experiments, when we used logistic sigmoid functions as the activation functions for intermediary layers, the use of pretraining was important. However, a short experiment using L_p units and dropout showed that the importance of pretraining depends on the model structure/design and that it may not be mandatory for all deep RNNs.\\n\\n'please add the results of Graves and Mikolov to Table 3'\\n\\nWe will update the paper soon with Graves and Mikolov's results in Table 3.\\n\\n'please explain 'dynamic evaluation''\\n\\nWe have skipped the explanation of the dynamic evaluation as we have not used it during our experiments. However, we agree that it is better to make it clear by explaining it. We will update the paper soon.\\n\\n'In general, it would be helpful that the authors provide more details on the architectures which do NOT work very well.'\\n\\nThanks for your suggestion. Since we have extensively searched for hyperparameters including nonlinearities as well as structures of deep RNNs, we believe we may be able to take advantage of the validated models to take a glimpse at some of the model choices that did not work as well. 
We will try to briefly summarize our findings (on model choices with negative results) in a future version of the paper.\\n\\n'The authors should discuss the relation to other works in more detail, in particular Pinheiro and Collobert (2013)'\\n\\nThank you for the suggestion. We will try to draw a more detailed connection to other relevant works, including (Pinheiro & Collobert, 2013), in later versions of the paper.\"}", "{\"title\": \"review of How to Construct Deep Recurrent Neural Networks\", \"review\": [\"Solid paper describing a variety of 'deep' RNN architectures constructed by adding additional layers between input and hidden, hidden and hidden, and hidden and output. The authors test their models on two types of data (polyphonic music prediction and word and character-based language modeling) and obtain good results (at least on the LM task). While the arguments they make seem compelling, we feel that the experimental results are not very convincing. In particular, on the music task there are a few problems:\", \"There is no best architecture so there's not much one can learn except to try them all which is not very informative.\", \"To compete with an RNN with fast dropout, the authors add a lot of bells and whistles to their model (L_p units, maxout units, dropout). What happens if you train a standard RNN with the same bells and whistles ? Conversely, which of the 3 techniques made the deep models better than the RNN? A breakdown of the improvements would be helpful.\", \"The authors describe a deep input to hidden function model (3.2.1) however no experimental evidence of its utility is shown.\", \"Adding columns with the number of parameters to Table 1 would help the reader compare the sizes of the different models.\", \"What happens if you train a DO-RNN?\"]}", "{\"reply\": \"We have forgotten to list the references we cited in our response. Here is the list of references:\\n\\nBengio, Y., Boulanger-Lewandowski, N., and Pascanu, R. (2012). Advances in optimizing recurrent networks. arXiv:1212.0901 [cs.LG].\\nBayer, J., Osendorfer, C., Korhammer, D., Chen, N., Urban, S., and van der Smagt, P. (2013). On fast dropout and its applicability to recurrent networks. arXiv:1311.0701 [cs.NE].\\nChen, J. and Deng, L. (2013). A new method for learning deep recurrent neural networks. arXiv:1311.6091 [cs.LG].\"}", "{\"reply\": \"Thanks for the valuable comments! A new version will be available online in a couple of days with all the mistakes you have pointed out fixed and with a revised experiments section.\\n\\nI will post a comment here once the new version is available on arXiv!\"}", "{\"review\": \"We have just submitted a new version (v3) of the paper with some minor revisions and additional experimental results. The new version will be available from arXiv on 31 Jan at 1 am GMT.\"}", "{\"review\": \"Cool paper!\", \"some_minor_edits\": \"2. 1st sentence: discrete -> discrete-time\\n\\n3.1 2nd sentence: result -> results. I would rephrase as: 'A number of recent theoretical results support this hypothesis'\\n\\n3.2\\nFurthermore, there IS a wealth...\\n\\n'in this paper we only consider feedforward, intermediate layers' - If I'm reading this right, there should be no comma.\\n\\n3.3.1 'approximate f_h instead' In your notation isn't f_h the function that the network performs, not the 'true' transition function you hope to approximate?\\n\\n'We call this RNN with a multilayered transition function a deep transition RNN (DT-RNN).' 
- this was already defined in 3.2\\n\\n3.3.3 I'd cite Schmidhuber again in the first sentence\\n\\n4. I didn't understand this sentence: 'Note that this is different from (Mikolov et al., 2013a) where the learned embeddings of words were suitable for algebraic operators'\\n\\n5. 'on three different tasks' -> 'on two...' same problem next sentence\"}", "{\"review\": \"The new version (v2) of the paper is now available at arXiv!\"}", "{\"review\": \"The paper proposes ways of extending ordinary RNNs to deep RNNs, where deep operations are used in favor of shallow ones (=affine with an elementwise nonlinearity) at various stages.\\n\\nI think this paper is important especially because of the new deep transitions. It will help practitioners as well as researchers with designing models for sequential data.\\n\\nDuring my first read I was not sure what the actual benefit of a DOT-RNN is compared to an sRNN with one additional hidden layer--but if one looks closely, the number of layers information travels through scales faster for a DOT-RNN than for an sRNN. Maybe the authors would like to explicitly state this (maybe they did and I missed it).\\n\\nI also think it is valuable that the optimisation of these models has been identified as one of the major problems, so that the community can focus on trying to improve on this. I applaud the availability of the source code.\\n\\nThe experimental evaluation is solid and the results are excellent.\"}", "{\"reply\": \"Dear Reviewer (bfb0),\\n\\nWe thank you for the thorough and insightful comment. Allow us to respond to some of your comments below.\\n\\n'There is no best architecture so there's not much one can learn except to try them all which is not very informative'\\n\\nWe believe one important lesson from our experiments is that, as was the case with feedforward networks, it is beneficial to build a 'deep' recurrent neural network. We expect that the optimal structure of a deep RNN depends on the dataset, which was apparent from our experiments on both music and language. The choice of deep RNNs among different combinations should consider that each dataset has distinct characteristics (e.g. one dataset might show highly nonlinear transitions, while another might show simple, linear transitions).\\n\\n'What happens if you train a standard RNN with the same bells and whistles ? Conversely, which of the 3 techniques made the deep models better than the RNN?'\\n\\nThis is a good point. Thank you for pointing this out for us. \\n\\nThere have been a few recent works trying these advanced techniques proposed for deep feedforward neural networks on RNNs. For instance, Bengio et al. (2012) tried using rectifiers for RNNs with leaky integration units, but with only modest success in terms of the performance on the same music datasets (worse than our result). Bayer et al. (2013), which is cited in the paper, proposed the method of fast dropout for RNNs, but again the result is slightly worse than the best result we obtained with deep RNNs having L_p units trained with (ordinary) dropout.\\n\\nUntil recently, however, not many advances in deep feedforward neural networks have been applied to and tested extensively on recurrent neural networks. We believe in the future it will be important to thoroughly validate these new techniques (e.g., dropout, piecewise linear units, non-saturating nonlinearities ...) with RNNs. 
\\n\\n'The authors describe a deep input to hidden function model (3.2.1) however no experimental evidence of its utility is shown'\\n\\nWe focused more on the temporal aspect of recurrent neural networks. There has already been work such as (Chen & Deng, 2013) (which is cited in the paper) showing potential benefits of having a deep input-to-hidden function. \\n\\n'Adding columns with the number of parameters to Table 1 would help the reader compare the sizes of the different models'\\n\\nThanks for the suggestion. We will add the numbers of parameters to Table 1 later when we revise the paper.\\n\\n'What happens if you train a DO-RNN?'\\n\\nWe have not tried using a DO-RNN. We will try some additional experiments with a DO-RNN and add them to the paper soon.\"}", "{\"review\": [\"A new version of the paper is available on arXiv.org, which incorporates most of the comments made by the reviewers. In particular, we have made the following changes:\", \"(69d2) In the abstract we say that we 'explore' instead of 'propose a novel way'.\", \"(69d2) Changed from 'modeling variable-length sequences' to 'evaluated on two sequence modeling tasks'.\", \"(69d2) Removed the normalization by the length of a sequence in Eq. (3).\", \"(69d2) Added information on how many levels of recurrent layers we used with the sRNN.\", \"(bfb0) Added the number of parameters for each model in Table 1. However, note that the model sizes were selected based on validation errors, meaning larger models have also been tried.\", \"(Bayer) Emphasized the difference between sRNN and DT-RNN. Also, strengthened the intuition behind the reason why the deep transition is a useful extension to RNNs, which is orthogonal to the other variants of deep RNNs.\", \"(1f07) Added more details about Pinheiro and Collobert (2013).\", \"(1f07) Added the previous results by Graves and Mikolov in Table 3.\", \"(1f07) Added a footnote briefly explaining the method of dynamic evaluation.\", \"We would like to thank all three reviewers for their comments. We are still aiming to improve the paper and have started additional experiments suggested by the reviewers.\", \"We believe that our paper provides a novel perspective on the depth of RNNs, along with empirical evidence showing that this direction of research is worth pursuing. We hope the reviewers find this work up to the quality standard of the conference.\"]}" ] }