Dataset schema (column, type, observed min-max):
forum_id         string, length 8-20
forum_title      string, length 1-899
forum_authors    sequence, length 0-174
forum_abstract   string, length 0-4.69k
forum_keywords   sequence, length 0-35
forum_pdf_url    string, length 38-50
forum_url        string, length 40-52
note_id          string, length 8-20
note_type        string, 6 classes
note_created     int64, 1,360B-1,737B
note_replyto     string, length 4-20
note_readers     sequence, length 1-8
note_signatures  sequence, length 1-2
venue            string, 349 classes
year             string, 12 classes
note_text        string, length 10-56.5k
jbLdjjxPd-b2l
Natural Gradient Revisited
[ "Razvan Pascanu", "Yoshua Bengio" ]
The aim of this paper is two-fold. First we intend to show that Hessian-Free optimization (Martens, 2010) and Krylov Subspace Descent (Vinyals and Povey, 2012) can be described as implementations of Natural Gradient Descent due to their use of the extended Gauss-Newton approximation of the Hessian. Secondly we re-derive Natural Gradient from basic principles, contrasting the difference between the two versions of the algorithm that are in the literature.
[ "natural gradient", "aim", "first", "optimization", "martens", "subspace descent", "vinyals", "povey", "implementations" ]
https://openreview.net/pdf?id=jbLdjjxPd-b2l
https://openreview.net/forum?id=jbLdjjxPd-b2l
iPpSPn9bTwn4Y
review
1,363,291,260,000
jbLdjjxPd-b2l
[ "everyone" ]
[ "anonymous reviewer 6f71" ]
ICLR.cc/2013/conference
2013
review: I read the updated version of the paper. It has indeed been improved substantially, and my concerns were addressed. It should clearly be accepted in its current form.
jbLdjjxPd-b2l
Natural Gradient Revisited
[ "Razvan Pascanu", "Yoshua Bengio" ]
The aim of this paper is two-fold. First we intend to show that Hessian-Free optimization (Martens, 2010) and Krylov Subspace Descent (Vinyals and Povey, 2012) can be described as implementations of Natural Gradient Descent due to their use of the extended Gauss-Newton approximation of the Hessian. Secondly we re-derive Natural Gradient from basic principles, contrasting the difference between the two versions of the algorithm that are in the literature.
[ "natural gradient", "aim", "first", "optimization", "martens", "subspace descent", "vinyals", "povey", "implementations" ]
https://openreview.net/pdf?id=jbLdjjxPd-b2l
https://openreview.net/forum?id=jbLdjjxPd-b2l
LuEnLatTnvu1A
comment
1,363,216,800,000
ttBP0QO8pKtvq
[ "everyone" ]
[ "Razvan Pascanu" ]
ICLR.cc/2013/conference
2013
reply: We've made drastic changes to the paper, which should be visible starting Thu, 14 Mar 2013 00:00:00 GMT. We made the paper available also at http://www-etud.iro.umontreal.ca/~pascanur/papers/ICLR_natural_gradient.pdf * Regarding the differences between equation (1) and equation (7), they come from moving from p(z) to the conditional p(y|x). This is emphasized in the text introducing equation (7), which explains in more detail how one goes from (1) to (7). * Regarding the final arguments and overall presentation of the arguments in the paper, we have reworked the writeup in a way that you will hopefully find satisfactory.
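For readers following this exchange, the two metrics being contrasted presumably have the following standard forms; this is a reconstruction from the discussion above, not a quote of equations (1) and (7) from the paper.

```latex
% Fisher information over the joint model distribution (the form behind eq. (1)):
G(\theta) \;=\; \mathbb{E}_{z \sim p_\theta}\!\left[\nabla_\theta \log p_\theta(z)\,
                \nabla_\theta \log p_\theta(z)^{\top}\right]
% Conditional model used for neural networks, with inputs from q(x) (the form behind eq. (7)):
G(\theta) \;=\; \mathbb{E}_{x \sim q(x)}\,\mathbb{E}_{y \sim p_\theta(y \mid x)}\!\left[
                \nabla_\theta \log p_\theta(y \mid x)\,\nabla_\theta \log p_\theta(y \mid x)^{\top}\right]
```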
jbLdjjxPd-b2l
Natural Gradient Revisited
[ "Razvan Pascanu", "Yoshua Bengio" ]
The aim of this paper is two-fold. First we intend to show that Hessian-Free optimization (Martens, 2010) and Krylov Subspace Descent (Vinyals and Povey, 2012) can be described as implementations of Natural Gradient Descent due to their use of the extended Gauss-Newton approximation of the Hessian. Secondly we re-derive Natural Gradient from basic principles, contrasting the difference between the two versions of the algorithm that are in the literature.
[ "natural gradient", "aim", "first", "optimization", "martens", "subspace descent", "vinyals", "povey", "implementations" ]
https://openreview.net/pdf?id=jbLdjjxPd-b2l
https://openreview.net/forum?id=jbLdjjxPd-b2l
aaN5bD_cRqbLk
review
1,363,216,740,000
jbLdjjxPd-b2l
[ "everyone" ]
[ "Razvan Pascanu" ]
ICLR.cc/2013/conference
2013
review: We would like to thank all the reviewers for their feedback and insights. We have submitted a new version of the paper (it should appear on arXiv on Thu, 14 Mar 2013 00:00:00 GMT, though it can be retrieved now from http://www-etud.iro.umontreal.ca/~pascanur/papers/ICLR_natural_gradient.pdf). We kindly ask the reviewers to look at it. The new paper contains drastic changes that we believe will improve the quality of the paper. In a few bullet points, the changes are: * The title of the paper was changed to reflect our focus on natural gradient for deep neural networks * The wording and structure of the paper were slightly changed to better reflect the final conclusions * We improved notation, providing more details where they were missing * Additional plots were added as empirical support for some of our hypotheses * We've added both the pseudo-code and a link to a Theano-based implementation of the algorithm
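As a rough illustration of the kind of pseudo-code referred to above, here is a minimal NumPy sketch of a natural gradient descent loop. It is not the authors' pseudo-code or their Theano implementation; `grad_fn`, `fisher_fn`, and the hyperparameter defaults are placeholders supplied by the caller.

```python
import numpy as np

def natural_gradient_descent(grad_fn, fisher_fn, theta0, lr=0.1,
                             damping=1e-4, n_steps=100):
    """Minimal sketch: precondition the gradient by the inverse metric.
    grad_fn(theta) returns the loss gradient, fisher_fn(theta) the metric
    (Fisher) matrix; a small damping term keeps the metric invertible."""
    theta = np.asarray(theta0, dtype=float).copy()
    for _ in range(n_steps):
        g = grad_fn(theta)
        F = fisher_fn(theta) + damping * np.eye(theta.size)
        theta -= lr * np.linalg.solve(F, g)   # theta <- theta - lr * F^{-1} g
    return theta
```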
jbLdjjxPd-b2l
Natural Gradient Revisited
[ "Razvan Pascanu", "Yoshua Bengio" ]
The aim of this paper is two-fold. First we intend to show that Hessian-Free optimization (Martens, 2010) and Krylov Subspace Descent (Vinyals and Povey, 2012) can be described as implementations of Natural Gradient Descent due to their use of the extended Gauss-Newton approximation of the Hessian. Secondly we re-derive Natural Gradient from basic principles, contrasting the difference between the two versions of the algorithm that are in the literature.
[ "natural gradient", "aim", "first", "optimization", "martens", "subspace descent", "vinyals", "povey", "implementations" ]
https://openreview.net/pdf?id=jbLdjjxPd-b2l
https://openreview.net/forum?id=jbLdjjxPd-b2l
XXo-vXWa-ZvQL
review
1,363,288,920,000
jbLdjjxPd-b2l
[ "everyone" ]
[ "Razvan Pascanu, Yoshua Bengio" ]
ICLR.cc/2013/conference
2013
review: The revised arXiv paper is available now, and we have replied to the reviewers' comments.
jbLdjjxPd-b2l
Natural Gradient Revisited
[ "Razvan Pascanu", "Yoshua Bengio" ]
The aim of this paper is two-fold. First we intend to show that Hessian-Free optimization (Martens, 2010) and Krylov Subspace Descent (Vinyals and Povey, 2012) can be described as implementations of Natural Gradient Descent due to their use of the extended Gauss-Newton approximation of the Hessian. Secondly we re-derive Natural Gradient from basic principles, contrasting the difference between the two versions of the algorithm that are in the literature.
[ "natural gradient", "aim", "first", "optimization", "martens", "subspace descent", "vinyals", "povey", "implementations" ]
https://openreview.net/pdf?id=jbLdjjxPd-b2l
https://openreview.net/forum?id=jbLdjjxPd-b2l
ttBP0QO8pKtvq
review
1,361,998,920,000
jbLdjjxPd-b2l
[ "everyone" ]
[ "anonymous reviewer 6a77" ]
ICLR.cc/2013/conference
2013
title: review of Natural Gradient Revisited review: GENERAL COMMENTS The paper promises to establish the relation between Amari's natural gradient and many methods that are called Natural Gradient or can be related to Natural Gradient because they use Gauss-Newton approximations of the Hessian. The problem is that I find the paper misleading. In particular the G of equation (1) is not the same as the G of equation (7). The author certainly points out that the crux of the matter is to understand which distribution is used to approximate the Fisher information matrix, but the final argument is a mess. This should be done a lot more rigorously (and a lot less informally.) As the paper stands, it only increases the level of confusion. SPECIFIC COMMENTS * (ichi Amari, 1997) -> (Amari, 1997) * differ -> defer * Due to this surjection: A surjection is something else! * Equation (1): please make clear that the expectation is an expectation on z distributed according to p_theta (not the ground truth nor the empirical distribution). Equation (7) then appears to be a mix of both. * becomes the conditional p_theta(t|x) where q(x) represents: where is q in p_theta(t|x)?
jbLdjjxPd-b2l
Natural Gradient Revisited
[ "Razvan Pascanu", "Yoshua Bengio" ]
The aim of this paper is two-fold. First we intend to show that Hessian-Free optimization (Martens, 2010) and Krylov Subspace Descent (Vinyals and Povey, 2012) can be described as implementations of Natural Gradient Descent due to their use of the extended Gauss-Newton approximation of the Hessian. Secondly we re-derive Natural Gradient from basic principles, contrasting the difference between the two versions of the algorithm that are in the literature.
[ "natural gradient", "aim", "first", "optimization", "martens", "subspace descent", "vinyals", "povey", "implementations" ]
https://openreview.net/pdf?id=jbLdjjxPd-b2l
https://openreview.net/forum?id=jbLdjjxPd-b2l
_MfuTMZ4u7mWN
review
1,364,251,020,000
jbLdjjxPd-b2l
[ "everyone" ]
[ "anonymous reviewer 6a77" ]
ICLR.cc/2013/conference
2013
review: Clearly, the revised paper is much better than the initial paper, to the extent that it should be considered a different paper that shares its title with the initial paper. The ICLR committee will have to make a policy decision about this. The revised paper is poorly summarized by its abstract, because it does not show things in the same order as the abstract. The paper contains the following: * A derivation of natural gradient that does not depend on information geometry. This derivation is in fact well known (and therefore not new.) * A clear discussion of which distribution should be used to compute the natural gradient Riemannian tensor (equation 8). This is not new, but this is explained nicely and clearly. * An illustration of what happens when one mixes these distributions. This is not surprising, but nicely illustrates the point that many so-called 'natural gradient' algorithms are not the same as Amari's natural gradient. * A more specific discussion of the difference between Le Roux's 'natural gradient' and the real natural gradient, with useful intuitions. This is a good clarification. * A more specific discussion of how many second order algorithms using the Gauss-Newton approximation are related to some so-called natural gradient algorithms which are not the true natural gradient. Things get confusing because the authors seem committed to calling all these algorithms 'natural gradient' despite their own evidence. In conclusion, although novelty is limited, the paper disambiguates some of the confusion surrounding natural gradient. I simply wish the authors took their own hint and proposed banning the words 'natural gradient' to describe things that are not Amari's natural gradient but are simply inspired by it.
jbLdjjxPd-b2l
Natural Gradient Revisited
[ "Razvan Pascanu", "Yoshua Bengio" ]
The aim of this paper is two-fold. First we intend to show that Hessian-Free optimization (Martens, 2010) and Krylov Subspace Descent (Vinyals and Povey, 2012) can be described as implementations of Natural Gradient Descent due to their use of the extended Gauss-Newton approximation of the Hessian. Secondly we re-derive Natural Gradient from basic principles, contrasting the difference between the two versions of the algorithm that are in the literature.
[ "natural gradient", "aim", "first", "optimization", "martens", "subspace descent", "vinyals", "povey", "implementations" ]
https://openreview.net/pdf?id=jbLdjjxPd-b2l
https://openreview.net/forum?id=jbLdjjxPd-b2l
j5Y_3gJAHK3nP
comment
1,363,217,040,000
26sD6qgwF8Vob
[ "everyone" ]
[ "Razvan Pascanu, Yoshua Bengio" ]
ICLR.cc/2013/conference
2013
reply: We've made drastic changes to the paper, which should be visible starting Thu, 14 Mar 2013 00:00:00 GMT. We made the paper available also at http://www-etud.iro.umontreal.ca/~pascanur/papers/ICLR_natural_gradient.pdf * Regarding the relationship between Hessian-Free and natural gradient, it stems from the fact that algebraic manipulations of the extended Gauss-Newton approximation of the Hessian result in the natural gradient metric. Due to space limits (the paper is quite lengthy) we do not provide all intermediate steps in this algebraic manipulation, but we do provide all the crucial ones. Both natural gradient and Hessian-Free have the same form (the gradient is multiplied by the inverse of a matrix before being subtracted from theta, potentially times some scalar learning rate). Therefore showing that both methods use the same matrix is sufficient to show that HF can be interpreted as natural gradient. * The degeneracies of theta were meant to suggest only that we are dealing with a lower-dimensional manifold. We completely agree, however, that the text was confusing, and it was completely re-written to avoid that potential confusion. In the re-write we've removed this detail as it is not crucial for the paper. * The relations at the end of page 2 do hold in general, as the expectation is taken over z (a detail that we specify now). We are not using a fully Bayesian framework, i.e. theta is not a random variable in the text. * Equation 15 was corrected. When computing the Hessian, we compute the derivatives with respect to `r`.
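To make the equivalence claimed in this reply concrete, here is a small numerical check in NumPy that the extended Gauss-Newton matrix and the Fisher metric coincide for a one-layer softmax model. The paper deals with deep networks; this only illustrates the algebraic identity at the output layer, and the dimensions and random inputs below are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
D, K = 4, 3                          # input dimension, number of classes (made up)
x = rng.normal(size=D)
W = rng.normal(size=(K, D))

a = W @ x                            # logits
p = np.exp(a - a.max()); p /= p.sum()

# Jacobian of the logits w.r.t. theta = vec(W): d a_k / d W_kj = x_j
J = np.zeros((K, K * D))
for k in range(K):
    J[k, k * D:(k + 1) * D] = x

# Extended Gauss-Newton matrix: J^T H J, where H is the Hessian of the
# negative log-likelihood w.r.t. the logits (independent of the label here).
H = np.diag(p) - np.outer(p, p)
gauss_newton = J.T @ H @ J

# Fisher metric: expectation over y ~ p(y|x) of the outer product of the
# gradient of -log p(y|x) w.r.t. theta.
fisher = np.zeros_like(gauss_newton)
for y in range(K):
    g_theta = J.T @ (p - np.eye(K)[y])      # chain rule through the logits
    fisher += p[y] * np.outer(g_theta, g_theta)

print(np.allclose(gauss_newton, fisher))    # True: the two matrices coincide
```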
jbLdjjxPd-b2l
Natural Gradient Revisited
[ "Razvan Pascanu", "Yoshua Bengio" ]
The aim of this paper is two-fold. First we intend to show that Hessian-Free optimization (Martens, 2010) and Krylov Subspace Descent (Vinyals and Povey, 2012) can be described as implementations of Natural Gradient Descent due to their use of the extended Gauss-Newton approximation of the Hessian. Secondly we re-derive Natural Gradient from basic principles, contrasting the difference between the two versions of the algorithm that are in the literature.
[ "natural gradient", "aim", "first", "optimization", "martens", "subspace descent", "vinyals", "povey", "implementations" ]
https://openreview.net/pdf?id=jbLdjjxPd-b2l
https://openreview.net/forum?id=jbLdjjxPd-b2l
wiYbiqRc-GqXO
review
1,362,084,780,000
jbLdjjxPd-b2l
[ "everyone" ]
[ "Razvan Pascanu, Yoshua Bengio" ]
ICLR.cc/2013/conference
2013
review: Thank you for your comments. We will soon push a revision to fix all the grammar and language mistakes you pointed out. Regarding equation (1) and equation (7), \mathbf{G} represents the Fisher Information Matrix form of the metric resulting when you consider p(x) vs p(y|x), respectively. Equation (1) is introduced in section 1, which presents the generic case of a family of distributions p_{\theta}(x). From section 3 onwards we adapt these equations specifically to neural networks, where, from a probabilistic point of view, we are dealing with conditional probabilities p(y|x). Could you please be more specific regarding the elements of the paper that you found confusing? We would like to reformulate the conclusion to make our contributions clearer. The novel points we are trying to make are: (1) Hessian-Free optimization and Krylov Subspace Descent, as long as they use the Gauss-Newton approximation of the Hessian, can be understood as Natural Gradient, because the Gauss-Newton matrix matches the metric of Natural Gradient (and the rest of the pipeline is the same). (2) Possibly due to the regularization effect discussed in (6), we hypothesize and support with empirical results that Natural Gradient helps deal with the early overfitting problem introduced by Erhan et al. This early overfitting problem might be a serious issue when trying to scale neural networks to large models with very large datasets. (3) We make the observation that since the targets get integrated out when computing the metric of Natural Gradient, one can use unlabeled data to improve the accuracy of this metric, which dictates the speed with which we move in parameter space. (4) The Natural Gradient introduced by Nicolas Le Roux et al. has a fundamental difference from Amari's. It is not just a different justification, but a different algorithm that might behave differently in practice. (5) Natural Gradient is different from a second order method because, while it uses second order information, it is not the second order information of the error function but of the KL divergence (which is quite different). For example, it is always positive definite by construction, while the curvature is not. Also, the curvature of the KL is not the curvature of the same surface throughout learning: at each step we have a different KL divergence and hence a different surface, while for second order methods the error surface stays constant throughout learning. The second distinction is that Natural Gradient is naturally suited for online learning, provided that we have sufficient statistics to estimate the KL divergence (the metric). Theoretically, second order methods are meant to be batch methods (because the Hessian is supposed to be computed over the whole dataset), whereas the Natural Gradient metric depends only on the model. (6) The standard understanding of Natural Gradient is that by constraining the KL divergence between p_{theta}(y|x) and p_{theta+delta}(y|x) to be constant, it ensures that some amount of progress is made at every step and hence it converges faster. We add that it also ensures that you do not move too far in some direction (which would make the KL change quickly), hence acting as a regularizer. Regarding the paper not being formal enough, we often find that a dry mathematical treatment of the problem does not help improve understanding or eliminate confusion. We believe that we were formal enough when showing the equivalence between the generalized Gauss-Newton and Amari's metric. Point (6) of our conclusion is a hypothesis which we validate empirically; we do not have a formal treatment for it.
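Point (3) in the reply above can be illustrated with a short sketch: because the targets are integrated out (they are drawn from the model itself), the metric can be estimated from inputs alone. This is a hedged NumPy sketch; `sample_y` and `grad_logp` are assumed helper functions, not part of the paper's code.

```python
import numpy as np

def fisher_from_unlabeled(inputs, sample_y, grad_logp, n_model_samples=1):
    """Monte Carlo estimate of the natural gradient metric from inputs only.
    sample_y(x) draws y ~ p_theta(y|x) from the model and grad_logp(x, y)
    returns d log p_theta(y|x) / d theta as a flat vector; both are assumed
    helpers. Because y comes from the model, no labels are needed."""
    F, n = None, 0
    for x in inputs:
        for _ in range(n_model_samples):
            g = grad_logp(x, sample_y(x))
            F = np.outer(g, g) if F is None else F + np.outer(g, g)
            n += 1
    return F / n
```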
jbLdjjxPd-b2l
Natural Gradient Revisited
[ "Razvan Pascanu", "Yoshua Bengio" ]
The aim of this paper is two-fold. First we intend to show that Hessian-Free optimization (Martens, 2010) and Krylov Subspace Descent (Vinyals and Povey, 2012) can be described as implementations of Natural Gradient Descent due to their use of the extended Gauss-Newton approximation of the Hessian. Secondly we re-derive Natural Gradient from basic principles, contrasting the difference between the two versions of the algorithm that are in the literature.
[ "natural gradient", "aim", "first", "optimization", "martens", "subspace descent", "vinyals", "povey", "implementations" ]
https://openreview.net/pdf?id=jbLdjjxPd-b2l
https://openreview.net/forum?id=jbLdjjxPd-b2l
26sD6qgwF8Vob
review
1,362,404,760,000
jbLdjjxPd-b2l
[ "everyone" ]
[ "anonymous reviewer 1939" ]
ICLR.cc/2013/conference
2013
title: review of Natural Gradient Revisited review: This paper attempts to reconcile several definitions of the natural gradient, and to connect the Gauss-Newton approximation of the Hessian used in Hessian free optimization to the metric used in natural gradient descent. Understanding the geometry of objective functions, and the geometry of the space they live in, is crucial for model training, and is arguably the greatest bottleneck in training deep or otherwise complex models. However, this paper makes a confused presentation of the underlying ideas, and does not succeed in clearly tying them together. More specific comments: In the second (and third) paragraph of section 2, the natural gradient is discussed as if it stems from degeneracies in theta, where multiple theta values correspond to the same distribution p. This is inaccurate. Degeneracies in theta have nothing to do with the natural gradient. This may stem from a misinterpretation of the role of symmetries in natural gradient derivations? Symmetries are frequently used in the derivation of the natural gradient, in that the metric is frequently chosen such that it is invariant to symmetries in the parameter space. However, the metric being invariant to symmetries does not mean that p is similarly invariant, and there are natural gradient applications where symmetries aren't used at all. (You might find The Natural Gradient by Analogy to Signal Whitening, Sohl-Dickstein, http://arxiv.org/abs/1205.1828 a more straightforward introduction to the natural gradient.) At the end of page 2, between equations 2 and 3, you introduce relations which certainly don't hold in general. At the least you should give the assumptions you're using. (also, notationally, it's not clear what you're taking the expectation over -- z? theta?) Equation 15 doesn't make sense. As written, the matrices are the wrong shape. Should the inner second derivative be in terms of r instead of theta? The text has minor English difficulties, and could benefit from a grammar and word choice editing pass. I stopped marking these pretty early on, but here are some specific suggested edits: 'two-folded' -> 'two-fold' 'framework of natural gradient' -> 'framework of the natural gradient' 'gradient protects about' -> 'gradient protects against' 'worrysome' -> 'worrisome' 'even though is called the same' -> 'despite the shared name' 'differ' -> 'defer' 'get map' -> 'get mapped'
jbLdjjxPd-b2l
Natural Gradient Revisited
[ "Razvan Pascanu", "Yoshua Bengio" ]
The aim of this paper is two-fold. First we intend to show that Hessian-Free optimization (Martens, 2010) and Krylov Subspace Descent (Vinyals and Povey, 2012) can be described as implementations of Natural Gradient Descent due to their use of the extended Gauss-Newton approximation of the Hessian. Secondly we re-derive Natural Gradient from basic principles, contrasting the difference between the two versions of the algorithm that are in the literature.
[ "natural gradient", "aim", "first", "optimization", "martens", "subspace descent", "vinyals", "povey", "implementations" ]
https://openreview.net/pdf?id=jbLdjjxPd-b2l
https://openreview.net/forum?id=jbLdjjxPd-b2l
0mPCmj67CX0Ti
review
1,364,262,660,000
jbLdjjxPd-b2l
[ "everyone" ]
[ "anonymous reviewer 1939" ]
ICLR.cc/2013/conference
2013
review: As the previous reviewer states, there are very large improvements in the paper. Clarity and mathematical precision are both greatly increased, and reading it now gives useful insight into the relationship between different perspectives and definitions of the natural gradient, and Hessian based methods. Note, I did not check the math in Section 7 upon this rereading. It's misleading to suggest that the authors' derivation in terms of minimizing the objective on a fixed-KL divergence shell around the current location (approximated as a fixed value of the second order expansion of the Fisher information) is novel. This is something that Amari also did (see for instance the proof of Theorem 1 on page 4 in Amari, S.-I. (1998). Natural Gradient Works Efficiently in Learning. Neural Computation, 10(2), 251–276. doi:10.1162/089976698300017746). This claim should be removed. It could still use an editing pass, and especially improvements in the figure captions, but these are nits as opposed to show-stoppers (see specific comments below). This is a much nicer paper. My only significant remaining concerns are in terms of the Lagrange-multiplier derivation, and in terms of precedent setting. It may be that it's a dangerous precedent to set (and promises to make much more work for future reviewers!) to base acceptance decisions on rewritten manuscripts that differ significantly from the version initially submitted. So -- totally an editorial decision. p. 2, footnote 2 -- 3rd expression should still start with sum_z 'emphesis' -> 'emphasize' 'to speed up' -> 'to speed up computations' 'train error' -> 'training error' Figure 2 -- label panes (a) and (b) and reference as such. 'KL, different training minibatch' appears to be missing from the figure. In latex, use ` for open quote and ' for close quote. Capitalize KL. So, for instance, `KL, unlabeled' Figure 3 -- Caption has significant differences from the figure. In most places where it occurs, the text should refer to 'the natural gradient' rather than 'natural gradient'. 'equation (24) from section 3' -- there is no equation 24 in section 3. Equation and Section should be capitalized.
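For reference, the fixed-KL-shell argument the reviewer points to (Amari, 1998, Theorem 1) goes roughly as follows; this is the standard derivation summarized for context, not text from either version of the paper.

```latex
% Minimize the linearized loss on a shell of constant (second-order) KL divergence:
\min_{\delta}\;\nabla_\theta L(\theta)^{\top}\delta
\quad\text{s.t.}\quad \tfrac{1}{2}\,\delta^{\top} F(\theta)\,\delta \;=\; \varepsilon .
% Setting the gradient of the Lagrangian
% \nabla_\theta L^{\top}\delta + \lambda\left(\tfrac{1}{2}\delta^{\top}F\delta - \varepsilon\right)
% to zero gives \nabla_\theta L + \lambda F\,\delta = 0, hence
\delta \;\propto\; -\,F(\theta)^{-1}\,\nabla_\theta L(\theta),
% i.e. the natural gradient direction.
```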
2LzIDWSabfLe9
Herded Gibbs Sampling
[ "Luke Bornn", "Yutian Chen", "Nando de Freitas", "Maya Baya", "Jing Fang", "Max Welling" ]
The Gibbs sampler is one of the most popular algorithms for inference in statistical models. In this paper, we introduce a herding variant of this algorithm, called herded Gibbs, that is entirely deterministic. We prove that herded Gibbs has an $O(1/T)$ convergence rate for models with independent variables and for fully connected probabilistic graphical models. Herded Gibbs is shown to outperform Gibbs in the tasks of image denoising with MRFs and named entity recognition with CRFs. However, the convergence for herded Gibbs for sparsely connected probabilistic graphical models is still an open problem.
[ "gibbs", "probabilistic graphical models", "gibbs sampler", "popular algorithms", "inference", "statistical models", "herding variant", "algorithm", "deterministic", "convergence rate" ]
https://openreview.net/pdf?id=2LzIDWSabfLe9
https://openreview.net/forum?id=2LzIDWSabfLe9
_ia0VPOP0SVPj
review
1,362,189,120,000
2LzIDWSabfLe9
[ "everyone" ]
[ "anonymous reviewer 600b" ]
ICLR.cc/2013/conference
2013
title: review of Herded Gibbs Sampling review: Herding is a relatively recent idea [23]: create a dynamical system that evolves a vector, which when time-averaged will match desired expectations. Originally it was designed as a novel means to generalize from observed data with measured moments. In this work, the conditional distributions of a Gibbs sampler are matched, with the hope of sampling from arbitrary target distributions. As reviewed by the paper itself, this work joins only a small number of recent papers that try to simulate arbitrary target distributions using a deterministic dynamical system. Compared to [19] this work potentially works better in some situations: O(1/T) convergence can happen, whereas [19] seems to emulate a conventional Gibbs sampler with O(1/T^2) convergence. However, the current work seems to be more costly in memory and less generally applicable than Gibbs sampling, because it needs to track weights for all possible conditional distributions (all possible neighbourhood settings for each variable) in some cases. The comparison to [7] is less clear, as that is motivated by O(1/T) QMC rates, but I don't know if/how it would compare to the current work. (No comparison is given.) One of the features of Markov chain Monte Carlo methods, such as Gibbs sampling, is that they represent _joint_ distributions, through examples. Unlike variational approximation methods, no simple form of the distribution is assumed, but Monte Carlo sampling may be a less efficient way to get marginal distributions. For example, Kuss and Rasmussen http://www.jmlr.org/papers/volume6/kuss05a/kuss05a.pdf demonstrated that EP gives exceedingly accurate posterior marginals with Gaussian process classifiers, even though its joint approximation, a Gaussian, is obviously wrong. The experiment in section 4.1 suggests that the herded Gibbs procedure is prepared to move through low probability joint settings more often than it 'should', but gets better marginals as a result. The experiment in section 4.2 also depends only on low-dimensional marginals (as many applications do). The experiment in section 4.3 involves an optimization task, and I'm not sure how herded Gibbs was applied (also with annealing? The most probable sample chosen? ...). This is an interesting, novel paper, that appears technically sound. The most time-consuming research contributions are the proofs in the appendices, which seem plausible, but I have not carefully checked them. As discussed in the conclusion, there is a gap between the applicability of this theory and the applicability of the methods. But there is plenty in this paper to suggest that herded sampling for generic target distributions is an interesting direction. As requested, a list of pros and cons: Pros: - a novel approach to sampling from high-dimensional distributions, an area of large interest. - Good combination of toy experiments, up to fairly realistic, but harder to understand, demonstration. - Raises many open questions: could have impact within community. - Has the potential to be both general and fast to converge: in long term could have impact outside community. Cons: - Should possibly compare to Owen's work on QMC and MCMC. Although there may be no interesting comparison to be made. - The most interesting example (NER, section 4.3) is slightly hard to understand. An extra sentence or two could help greatly to state how the sampler's output is used. - Code could be provided. Very minor: paragraph 3 of section 5 should be rewritten.
It's wordy: 'We should mention...We have indeed studied this', and it uses jargon that's explained parenthetically in the final sentence but not in the first two.
2LzIDWSabfLe9
Herded Gibbs Sampling
[ "Luke Bornn", "Yutian Chen", "Nando de Freitas", "Maya Baya", "Jing Fang", "Max Welling" ]
The Gibbs sampler is one of the most popular algorithms for inference in statistical models. In this paper, we introduce a herding variant of this algorithm, called herded Gibbs, that is entirely deterministic. We prove that herded Gibbs has an $O(1/T)$ convergence rate for models with independent variables and for fully connected probabilistic graphical models. Herded Gibbs is shown to outperform Gibbs in the tasks of image denoising with MRFs and named entity recognition with CRFs. However, the convergence for herded Gibbs for sparsely connected probabilistic graphical models is still an open problem.
[ "gibbs", "probabilistic graphical models", "gibbs sampler", "popular algorithms", "inference", "statistical models", "herding variant", "algorithm", "deterministic", "convergence rate" ]
https://openreview.net/pdf?id=2LzIDWSabfLe9
https://openreview.net/forum?id=2LzIDWSabfLe9
-wDkwa3mkYwTa
review
1,362,382,860,000
2LzIDWSabfLe9
[ "everyone" ]
[ "anonymous reviewer b2c5" ]
ICLR.cc/2013/conference
2013
title: review of Herded Gibbs Sampling review: This paper shows how Herding, a deterministic moment-matching algorithm, can be used to sample from un-normalized probabilities, by applying Herding to the full-conditional distributions. The paper presents (1) theoretical proof of O(1/T) convergence in the case of empty and fully-connected graphical models, as well as (2) empirical evidence, showing that Herded Gibbs sampling outperforms both Gibbs and mean-field for 2D structured MRFs and chain-structured CRFs. This improved performance however comes at the price of memory, which is exponential in the maximum in-degree of the graph, thus making the method best suited to sparsely connected graphical models. While the application of Herding to sample from joint distributions through their conditionals may not appear exciting at first glance, I believe this represents a novel research direction with potentially high impact. A 1/T convergence rate would be a boon in many domains of application, which tend to overly rely on Gibbs sampling, an old and often brittle sampling algorithm. The algorithm's exponential memory requirements are somewhat troubling. However, I believe this can be overlooked given the early state of research and the fact that sparse graphical models represent a realistic (and immediate) domain of application. The paper is well written and clear. I unfortunately cannot comment on the correctness of the convergence proofs (which appear in the Appendix), as those proved to be too time-consuming for me to make a professional judgement on. Hopefully the open review process of ICLR will help weed out any potential issues therein. PROS: * A novel sampling algorithm with faster convergence rate than MCMC methods. * Another milestone for Herding: sampling from un-normalized probabilities (with tractable conditionals). * Combination of theoretical proofs (when available) and empirical evidence. * Experiments are thorough and span common domains of application: image denoising through MRFs and Named Entity Recognition through chain-CRFs. CONS: * Convergence proofs hold for less than practical graph structures. * Exponential memory requirements of the algorithm make Herded Gibbs sampling impractical for large families of graphical models, including Boltzmann Machines.
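As a rough sketch of the mechanism described in this review (binary variables, one weight per variable and per Markov-blanket configuration): the threshold and update below follow the standard herding recursion, and the helper `cond_prob` and the exact bookkeeping are assumptions, not the paper's reference implementation.

```python
def herded_gibbs_sweep(x, neighbors, cond_prob, weights):
    """One deterministic sweep of herded Gibbs over binary variables x (a list
    of 0/1 values). neighbors[i] lists the Markov blanket of variable i and
    cond_prob(i, nbr_vals) returns p(x_i = 1 | blanket values); both are
    assumed to come from the model. `weights` is a dict keyed by
    (variable, blanket configuration) -- exactly the bookkeeping that makes
    memory exponential in the maximum in-degree."""
    for i in range(len(x)):
        nbr_vals = tuple(x[j] for j in neighbors[i])
        w = weights.get((i, nbr_vals), 0.0) + cond_prob(i, nbr_vals)
        x[i] = 1 if w > 0.5 else 0          # deterministic replacement for sampling
        weights[(i, nbr_vals)] = w - x[i]
    return x
```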
2LzIDWSabfLe9
Herded Gibbs Sampling
[ "Luke Bornn", "Yutian Chen", "Nando de Freitas", "Maya Baya", "Jing Fang", "Max Welling" ]
The Gibbs sampler is one of the most popular algorithms for inference in statistical models. In this paper, we introduce a herding variant of this algorithm, called herded Gibbs, that is entirely deterministic. We prove that herded Gibbs has an $O(1/T)$ convergence rate for models with independent variables and for fully connected probabilistic graphical models. Herded Gibbs is shown to outperform Gibbs in the tasks of image denoising with MRFs and named entity recognition with CRFs. However, the convergence for herded Gibbs for sparsely connected probabilistic graphical models is still an open problem.
[ "gibbs", "probabilistic graphical models", "gibbs sampler", "popular algorithms", "inference", "statistical models", "herding variant", "algorithm", "deterministic", "convergence rate" ]
https://openreview.net/pdf?id=2LzIDWSabfLe9
https://openreview.net/forum?id=2LzIDWSabfLe9
55Sf5h7-bs1wC
review
1,363,408,140,000
2LzIDWSabfLe9
[ "everyone" ]
[ "Luke Bornn, Yutian Chen, Nando de Freitas, Mareija Eskelin, Jing Fang, Max Welling" ]
ICLR.cc/2013/conference
2013
review: Taking the reviewers' comments into consideration, and after many useful email exchanges with experts in the field including Prof Art Owen, we have prepared a newer version of the report. If it is not on arXiv by the time you read this, you can find it at http://www.cs.ubc.ca/~nando/papers/herding_ICLR.pdf Reviewer Anonymous 600b: We have made the code available and expanded our description of the CRF for NER section. An empirical comparison with the work of Art Owen and colleagues was not possible given the short time window of this week. However, we engaged in many discussions with Art and he not only added his comments here in open review, but also provided many useful comments via email. One difference between herding and his approach is that herding is greedy (that is, the random sequence does not need to be constructed beforehand). Art also pointed us to the very interesting work of James Propp and colleagues on Rotor-Router models. Please see our comments in the last paragraph of the Conclusions and Future Work section of the new version of the paper. Prof Propp has also begun to look at the problem of establishing connections between herding and his work. Reviewer Anonymous cf4e: For marginals, the convergence rate of herded Gibbs is also O(1/T) because marginal probabilities are linear functions of the joint distribution. However, in practice, we observe very rapid convergence results for the marginals, so we might be able to strengthen these results in the future. Reviewer Anonymous 2d06: We have added more detail to the CRF section and made the code available so as to ensure that our results are reproducible. We thank all reviewers for excellent comments. This openreview discussion has been extremely useful and engaging. Many thanks, The herded Gibbs team
2LzIDWSabfLe9
Herded Gibbs Sampling
[ "Luke Bornn", "Yutian Chen", "Nando de Freitas", "Maya Baya", "Jing Fang", "Max Welling" ]
The Gibbs sampler is one of the most popular algorithms for inference in statistical models. In this paper, we introduce a herding variant of this algorithm, called herded Gibbs, that is entirely deterministic. We prove that herded Gibbs has an $O(1/T)$ convergence rate for models with independent variables and for fully connected probabilistic graphical models. Herded Gibbs is shown to outperform Gibbs in the tasks of image denoising with MRFs and named entity recognition with CRFs. However, the convergence for herded Gibbs for sparsely connected probabilistic graphical models is still an open problem.
[ "gibbs", "probabilistic graphical models", "gibbs sampler", "popular algorithms", "inference", "statistical models", "herding variant", "algorithm", "deterministic", "convergence rate" ]
https://openreview.net/pdf?id=2LzIDWSabfLe9
https://openreview.net/forum?id=2LzIDWSabfLe9
kk_CoX43Cfks-
review
1,363,761,180,000
2LzIDWSabfLe9
[ "everyone" ]
[ "Maya Baya" ]
ICLR.cc/2013/conference
2013
review: The updated Herded Gibbs report is now available on arxiv at the following url: http://arxiv.org/abs/1301.4168v2 The herded Gibbs team.
2LzIDWSabfLe9
Herded Gibbs Sampling
[ "Luke Bornn", "Yutian Chen", "Nando de Freitas", "Maya Baya", "Jing Fang", "Max Welling" ]
The Gibbs sampler is one of the most popular algorithms for inference in statistical models. In this paper, we introduce a herding variant of this algorithm, called herded Gibbs, that is entirely deterministic. We prove that herded Gibbs has an $O(1/T)$ convergence rate for models with independent variables and for fully connected probabilistic graphical models. Herded Gibbs is shown to outperform Gibbs in the tasks of image denoising with MRFs and named entity recognition with CRFs. However, the convergence for herded Gibbs for sparsely connected probabilistic graphical models is still an open problem.
[ "gibbs", "probabilistic graphical models", "gibbs sampler", "popular algorithms", "inference", "statistical models", "herding variant", "algorithm", "deterministic", "convergence rate" ]
https://openreview.net/pdf?id=2LzIDWSabfLe9
https://openreview.net/forum?id=2LzIDWSabfLe9
OOw6hkBUq_fEr
review
1,362,793,920,000
2LzIDWSabfLe9
[ "everyone" ]
[ "Maya Baya" ]
ICLR.cc/2013/conference
2013
review: Dear reviewers, Thank you for the encouraging reviews and useful feedback. We will soon address your questions and comments. To this end, we would like to begin by announcing that the code is available online in both Matlab and Python, at: http://www.mareija.ca/research/code/ This code contains both the image denoising experiments and the two-node example; however, we have omitted the NER experiment because the code is highly dependent on the Stanford NER software. Nonetheless, upon request, we would be happy to share this more complex code too. A comprehensive reply and a newer version of the arXiv paper addressing your concerns will appear soon. In the meantime, we look forward to further comments. The herded Gibbs team.
2LzIDWSabfLe9
Herded Gibbs Sampling
[ "Luke Bornn", "Yutian Chen", "Nando de Freitas", "Maya Baya", "Jing Fang", "Max Welling" ]
The Gibbs sampler is one of the most popular algorithms for inference in statistical models. In this paper, we introduce a herding variant of this algorithm, called herded Gibbs, that is entirely deterministic. We prove that herded Gibbs has an $O(1/T)$ convergence rate for models with independent variables and for fully connected probabilistic graphical models. Herded Gibbs is shown to outperform Gibbs in the tasks of image denoising with MRFs and named entity recognition with CRFs. However, the convergence for herded Gibbs for sparsely connected probabilistic graphical models is still an open problem.
[ "gibbs", "probabilistic graphical models", "gibbs sampler", "popular algorithms", "inference", "statistical models", "herding variant", "algorithm", "deterministic", "convergence rate" ]
https://openreview.net/pdf?id=2LzIDWSabfLe9
https://openreview.net/forum?id=2LzIDWSabfLe9
cAhZAfXPZ6Sfw
review
1,362,189,120,000
2LzIDWSabfLe9
[ "everyone" ]
[ "anonymous reviewer 600b" ]
ICLR.cc/2013/conference
2013
title: review of Herded Gibbs Sampling review: Herding is a relatively recent idea [23]: create a dynamical system that evolves a vector, which when time-averaged will match desired expectations. Originally it was designed as a novel means to generalize from observed data with measured moments. In this work, the conditional distributions of a Gibbs sampler are matched, with the hope of sampling from arbitrary target distributions. As reviewed by the paper itself, this work joins only a small number of recent papers that try to simulate arbitrary target distributions using a deterministic dynamical system. Compared to [19] this work potentially works better in some situations: O(1/T) convergence can happen, whereas [19] seems to emulate a conventional Gibbs sampler with O(1/T^2) convergence. However, the current work seems to be more costly in memory and less generally applicable than Gibbs sampling, because it needs to track weights for all possible conditional distributions (all possible neighbourhood settings for each variable) in some cases. The comparison to [7] is less clear, as that is motivated by O(1/T) QMC rates, but I don't know if/how it would compare to the current work. (No comparison is given.) One of the features of Markov chain Monte Carlo methods, such as Gibbs sampling, is that they represent _joint_ distributions, through examples. Unlike variational approximation methods, no simple form of the distribution is assumed, but Monte Carlo sampling may be a less efficient way to get marginal distributions. For example, Kuss and Rasmussen http://www.jmlr.org/papers/volume6/kuss05a/kuss05a.pdf demonstrated that EP gives exceedingly accurate posterior marginals with Gaussian process classifiers, even though its joint approximation, a Gaussian, is obviously wrong. The experiment in section 4.1 suggests that the herded Gibbs procedure is prepared to move through low probability joint settings more often than it 'should', but gets better marginals as a result. The experiment in section 4.2 also depends only on low-dimensional marginals (as many applications do). The experiment in section 4.3 involves an optimization task, and I'm not sure how herded Gibbs was applied (also with annealing? The most probable sample chosen? ...). This is an interesting, novel paper, that appears technically sound. The most time-consuming research contributions are the proofs in the appendices, which seem plausible, but I have not carefully checked them. As discussed in the conclusion, there is a gap between the applicability of this theory and the applicability of the methods. But there is plenty in this paper to suggest that herded sampling for generic target distributions is an interesting direction. As requested, a list of pros and cons: Pros: - a novel approach to sampling from high-dimensional distributions, an area of large interest. - Good combination of toy experiments, up to fairly realistic, but harder to understand, demonstration. - Raises many open questions: could have impact within community. - Has the potential to be both general and fast to converge: in long term could have impact outside community. Cons: - Should possibly compare to Owen's work on QMC and MCMC. Although there may be no interesting comparison to be made. - The most interesting example (NER, section 4.3) is slightly hard to understand. An extra sentence or two could help greatly to state how the sampler's output is used. - Code could be provided. Very minor: paragraph 3 of section 5 should be rewritten.
It's wordy: 'We should mention...We have indeed studied this', and it uses jargon that's explained parenthetically in the final sentence but not in the first two.
2LzIDWSabfLe9
Herded Gibbs Sampling
[ "Luke Bornn", "Yutian Chen", "Nando de Freitas", "Maya Baya", "Jing Fang", "Max Welling" ]
The Gibbs sampler is one of the most popular algorithms for inference in statistical models. In this paper, we introduce a herding variant of this algorithm, called herded Gibbs, that is entirely deterministic. We prove that herded Gibbs has an $O(1/T)$ convergence rate for models with independent variables and for fully connected probabilistic graphical models. Herded Gibbs is shown to outperform Gibbs in the tasks of image denoising with MRFs and named entity recognition with CRFs. However, the convergence for herded Gibbs for sparsely connected probabilistic graphical models is still an open problem.
[ "gibbs", "probabilistic graphical models", "gibbs sampler", "popular algorithms", "inference", "statistical models", "herding variant", "algorithm", "deterministic", "convergence rate" ]
https://openreview.net/pdf?id=2LzIDWSabfLe9
https://openreview.net/forum?id=2LzIDWSabfLe9
wy2cwQ8QPVybX
review
1,362,377,040,000
2LzIDWSabfLe9
[ "everyone" ]
[ "anonymous reviewer cf4e" ]
ICLR.cc/2013/conference
2013
title: review of Herded Gibbs Sampling review: Herding has an advantage over standard Monte Carlo methods, in that it estimates some statistics quickly, while Monte Carlo methods estimate all statistics but more slowly. The paper presents a very interesting but impractical attempt to generalize Herding to Gibbs sampling by having a 'herding chain' for each configuration of the Markov blanket of the variables. In addition to the exponential memory complexity, it seems like the method should have an exponentially large constant hidden in the O(1/T) convergence rate: Given that there are many herding chains, each herding parameter would be updated extremely infrequently, which would result in an exponential slowdown of the Herding effect and thus increase the constant in O(1/T). And indeed, lambda from theorem 2 has a 2^N factor. The theorem is interesting in that it shows eventual O(1/T) convergence in full distribution: that is, the empirical joint distribution eventually converges to the full joint distribution. However, in practice we care about estimating marginals and not joints. Is it possible to show fast convergence on every subset of the marginals, or even on the singleton variables? Can it be done with a favourable constant? Can such a result be derived from the theorems presented in the paper? Results about marginals would be of more practical interest. The experiments show that the idea works in principle, which is good. In its current form, the paper presents a reasonable idea but is incomplete, since the idea is too impractical. It would be great if the paper explored a practical implementation of Gibbs herding, even an approximate one. For example, would it be possible to represent w_{X_{Ni}} with a big linear function A X_{Ni} for all X and to herd A, instead of slowly herding the various w_{X_{Ni}}? Would it work? Would it do something sensible on the experiments? Can it be proved to work in a special case? In conclusion, the paper is very interesting and should be accepted. Its weakness is the general impracticality of the method.
2LzIDWSabfLe9
Herded Gibbs Sampling
[ "Luke Bornn", "Yutian Chen", "Nando de Freitas", "Maya Baya", "Jing Fang", "Max Welling" ]
The Gibbs sampler is one of the most popular algorithms for inference in statistical models. In this paper, we introduce a herding variant of this algorithm, called herded Gibbs, that is entirely deterministic. We prove that herded Gibbs has an $O(1/T)$ convergence rate for models with independent variables and for fully connected probabilistic graphical models. Herded Gibbs is shown to outperform Gibbs in the tasks of image denoising with MRFs and named entity recognition with CRFs. However, the convergence for herded Gibbs for sparsely connected probabilistic graphical models is still an open problem.
[ "gibbs", "probabilistic graphical models", "gibbs sampler", "popular algorithms", "inference", "statistical models", "herding variant", "algorithm", "deterministic", "convergence rate" ]
https://openreview.net/pdf?id=2LzIDWSabfLe9
https://openreview.net/forum?id=2LzIDWSabfLe9
PHnWHNpf5bHUO
review
1,363,212,720,000
2LzIDWSabfLe9
[ "everyone" ]
[ "Art Owen" ]
ICLR.cc/2013/conference
2013
review: Nando asked me for some comments and then he thought I should share them on openreview. So here they are. There are a few other efforts at replacing the IID numbers which drive MCMC. It would be interesting to explore the connections among them. Here is a sample: Jim Propp and others have been working on rotor-routers for quite a while. Here is one link: http://front.math.ucdavis.edu/0904.4507 I've been working with several people on replacing IID numbers by completely uniformly distributed (CUD) ones. This is like taking a small random number generator and using it all up. See this thesis by Su Chen for the latest results, and lots of references: www-stat.stanford.edu/~owen/students/SuChenThesis.pdf or for earlier work, this thesis by Seth Tribble: www-stat.stanford.edu/~owen/students/SethTribbleThesis.pdf The oldest papers in that line of work go back to the late 1960s and early 1970s by Chentsov and also by Sobol'. There is some very recent work by Dick, Rudolf and Zhu: http://arxiv.org/abs/1303.2423 that is similar to herding. The idea there is to make a follow-up sample of values that fill in holes left after a first sampling. Not quite as close to this work but still related is the array-RQMC work of Pierre L'Ecuyer and others. See for instance: www.iro.umontreal.ca/~lecuyer/myftp/papers/mcqmc08-array.pdf
2LzIDWSabfLe9
Herded Gibbs Sampling
[ "Luke Bornn", "Yutian Chen", "Nando de Freitas", "Maya Baya", "Jing Fang", "Max Welling" ]
The Gibbs sampler is one of the most popular algorithms for inference in statistical models. In this paper, we introduce a herding variant of this algorithm, called herded Gibbs, that is entirely deterministic. We prove that herded Gibbs has an $O(1/T)$ convergence rate for models with independent variables and for fully connected probabilistic graphical models. Herded Gibbs is shown to outperform Gibbs in the tasks of image denoising with MRFs and named entity recognition with CRFs. However, the convergence for herded Gibbs for sparsely connected probabilistic graphical models is still an open problem.
[ "gibbs", "probabilistic graphical models", "gibbs sampler", "popular algorithms", "inference", "statistical models", "herding variant", "algorithm", "deterministic", "convergence rate" ]
https://openreview.net/pdf?id=2LzIDWSabfLe9
https://openreview.net/forum?id=2LzIDWSabfLe9
rafTmpD60FrZR
review
1,362,497,280,000
2LzIDWSabfLe9
[ "everyone" ]
[ "anonymous reviewer 2d06" ]
ICLR.cc/2013/conference
2013
title: review of Herded Gibbs Sampling review: The paper presents a deterministic 'sampling' algorithm for unnormalized distributions on discrete variables, similar to Gibbs sampling, which operates by matching the statistics of the conditional distribution of each node given its Markov blanket. Proofs are provided for the independent and fully-connected cases, with an impressive improvement in asymptotic convergence rate to O(1/T) over the O(1/sqrt(T)) available from Monte Carlo methods in the fully-connected case. Experimental results demonstrate herded Gibbs outperforming traditional Gibbs sampling in the sparsely connected case, a regime unfortunately not addressed by the provided proofs. The algorithm's Achilles heel is its prohibitive worst-case memory complexity, scaling exponentially with the maximal node degree of the network. The paper is compelling for its demonstration that a conceptually simple deterministic procedure can (in some cases at least) greatly outperform Gibbs sampling, one of the traditional workhorses of Monte Carlo inference, both asymptotically and empirically. Though the procedure in its current form is of little use in large networks of even moderate edge density, the ubiquity of application domains involving very sparse interaction graphs makes this already an important contribution. The proofs appear to be reasonable upon cursory examination, but I have not as yet verified them in detail. PROS * A lucidly explained idea that gives rise to somewhat surprising theoretical results. * Proofs of convergence as well as experimental interrogations. * A step towards practical herding algorithms for dense unnormalized models, and an important milestone for the literature on herding in general. CONS * An (acknowledged) disconnect between theory and practice -- available proofs apply only in cases that are uninteresting or impractical. * Experiments in 4.3 mention NER with skip-chain CRFs, where Viterbi is not tractable, but resort to experiments with chain CRFs instead. An additional experiment utilizing skip-chain CRFs (a more challenging inference task, not amenable to Viterbi) would have been more compelling, though I realize space is at a premium. Minor concerns: - The precise dimensionality of the image denoising problem is, as far as I can tell, never specified. This would be nice to know. - More details as to how the herded Gibbs procedure maps onto the point estimate provided as output on the NER task would be helpful -- presumably the single highest-probability sample is used?
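To see where the O(1/T) rate discussed in these reviews comes from in the simplest possible case, here is a tiny, self-contained illustration of herding a single Bernoulli variable. It illustrates only the base rate, not the graphical-model results proved in the paper.

```python
def herd_bernoulli(pi, T):
    """Deterministic herding of a Bernoulli with mean pi: the empirical
    frequency of emitted ones matches pi to within about 1/(2T), versus the
    O(1/sqrt(T)) error of i.i.d. Monte Carlo sampling."""
    w, ones = 0.0, 0
    for _ in range(T):
        w += pi                      # accumulate the target probability
        s = 1 if w > 0.5 else 0      # emit deterministically
        w -= s
        ones += s
    return ones / T

print(abs(herd_bernoulli(0.3, 1000) - 0.3))   # bounded by 0.5 / 1000
```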
SSnY462CYz1Cu
Knowledge Matters: Importance of Prior Information for Optimization
[ "Çağlar Gülçehre", "Yoshua Bengio" ]
We explore the effect of introducing prior information into the intermediate level of neural networks for a learning task on which all the state-of-the-art machine learning algorithms tested failed to learn. We motivate our work from the hypothesis that humans learn such intermediate concepts from other individuals via a form of supervision or guidance using a curriculum. The experiments we have conducted provide positive evidence in favor of this hypothesis. In our experiments, a two-tiered MLP architecture is trained on a dataset with 64x64 binary input images, each image with three sprites. The final task is to decide whether all the sprites are the same or one of them is different. Sprites are pentomino tetris shapes and they are placed in an image with different locations using scaling and rotation transformations. The first part of the two-tiered MLP is pre-trained with intermediate-level targets being the presence of sprites at each location, while the second part takes the output of the first part as input and predicts the final task's target binary event. The two-tiered MLP architecture, with a few tens of thousands of examples, was able to learn the task perfectly, whereas all other algorithms (including unsupervised pre-training, but also traditional algorithms like SVMs, decision trees and boosting) perform no better than chance. We hypothesize that the optimization difficulty involved when the intermediate pre-training is not performed is due to the {\em composition} of two highly non-linear tasks. Our findings are also consistent with hypotheses on cultural learning inspired by the observations of optimization problems with deep learning, presumably because of effective local minima.
[ "sprites", "prior information", "importance", "hypothesis", "experiments", "mlp architecture", "image", "final task", "first part", "knowledge matters" ]
https://openreview.net/pdf?id=SSnY462CYz1Cu
https://openreview.net/forum?id=SSnY462CYz1Cu
wX4ew9_0vK2CA
comment
1,363,246,260,000
6s7Ys8Q5JbfHZ
[ "everyone" ]
[ "Çağlar Gülçehre" ]
ICLR.cc/2013/conference
2013
reply: > The exposition on curriculum learning could be condensed. (minor) The demonstrative problem (sprite counting) is a visual perception problem and therefore carries with it the biases of our own perception and inferred strategies. Maybe the overall argument might be bolstered by the addition of a more abstract example?

Yes, that's right. We have performed an experiment in which all the possible bit configurations of a single patch were enumerated (there are 80 of them plus the special case where no object is present). With this input representation, there is no vision prior knowledge that can help a human learner. The task is surprisingly easier but still difficult: we achieved 25% test error up to now, i.e., better than chance (i.e., 50%) but far from the less than 1% error of the IKGNN. In future work, we will measure how well humans learn that task.

> why so many image regions? Why use an 8x8 grid? Won't 3 regions suffice to make the point? Or is this related to the complexity of the problem. A related question: how are the results affected by the number of these regions? Maybe some reduced tests at the extremes would be interesting, i.e. with only 3 regions, and 32 (you have 64 already)?

The current architecture is trained on 64=8x8 patches, with sprites centered inside the patches. We have also tried to train the IKGNN on 16x16 patches, corresponding to 4x4=16 patches in a same-size image, and we allowed the objects to be randomly translated inside the patch, but the IKGNN couldn't learn the task; probably because of the translation, the P1NN required a convolutional architecture. We have also conducted experiments with the tetromino dataset (which is not in this paper), which has 16x16 images with objects placed in 4x4 patches, fewer variations and 7 sprite categories. An ordinary MLP with 3 tanh hidden layers was able to learn this task after a very long training. We trained the structured MLP on the 3 (8x8) patches that contain a sprite, for 120 training epochs with 100k training examples. The best result we could get with that setting was 37 percent error on the training set, and the SMLP was still at chance on the test set. In a nutshell, reducing the number of regions and centering each sprite inside its region reduces the complexity of the problem, and once the problem is simple enough you start seeing models that can learn it, with ordinary MLPs learning the task after a very long training on a large training set.

> In the networks that solve the task, are the weights that are learned symmetric over the image regions? i.e. are these weights identical (maybe up to some scaling and sign flip). Is there anything you have determined about the structure of the learned second layer of the IKGNN?

In the first level, we trained exactly the same MLP on each patch, while the second-level MLP is trained on the standardized softmax probabilities of the first level. Hence the weights are shared across patches in the first level. The first level of the IKGNN (P1NN) has translation equivariance, but the second level (P2NN) is fully connected and does not have any prior knowledge of symmetries.

> Furthermore, what about including a 'weight sharing' constraint in the general MLP model (the one that does not solve the problem, but has the same structure as the one that does)? Would including this constraint change the solution? (the constraint is already in the P1NN, but what about adding it into the P2NN?) Another way to ask this is: Is enforcing translation invariance in the network sufficient to achieve good performance, or do we need to specifically train for the sprite discrimination?

Indeed, it would be possible to use a convolutional architecture (with pooling, because the output is for the whole image) for the second level as well. We have not tried that yet, but we agree that it would be an interesting possibility and we certainly plan to try it out. Up to now, though, we have found that enforcing translation equivariance (in the lower level) was important but not sufficient to solve the problem; indeed, the poor result obtained by the structured MLP demonstrates that.

> Do we know if humans can solve this problem 'in a glance?': flashing the image for a small amount of time ~100-200msecs. Either with or without a mask? It seems that the networks you have derived are solving such a problem 'in a glance.'

We didn't conduct any trials measuring the response times and learning speed of human subjects on this dataset. However, we agree such a study would be an important follow-up to this paper.

> Is there an argument to be made that the sequential nature of language allows humans to solve this task? Even the way you formulate the problem suggests this sequential process: 'are all of the sprites in the image the same?': in other words 'find the sprites, then decide if they are the same' When I imagine solving this problem myself, I imagine performing a more sequential process: look at one sprite, then the next, (is it the same?, if it is): look at the next sprite (is it the same?). I know that we can consider this problem to be a concrete example of a more abstract learning problem, but it's not clear if humans can solve such problems without sequential processing. Anyway, this is not a criticism, per se, just food for thought.

Yes, we agree that the essence of the task requires sequential processing, and you can also find this sequential processing in our IKGNN architecture (and in deep architectures in general): P1NN looks at each patch and identifies the type of object inside that patch, and P2NN decides whether any of the objects identified by P1NN differs from the others. What is less clear is whether humans solve such problems by re-using the same 'hardware' (as in a recurrent net) or by composing different computations (e.g., associated with different areas in the brain). There are a few studies investigating sequential learning in non-human primates which you might find interesting [3].

[3] Conway, Christopher M., and Morten H. Christiansen. 'Sequential learning in non-human primates.' Trends in Cognitive Sciences 5, no. 12 (2001): 539-546.
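To make the two-tier design discussed in this reply concrete, here is a minimal forward-pass sketch (our own illustration, not the authors' code): a patch-level MLP with weights shared across the 8x8 grid of patches, per-patch softmax outputs that are standardized, and a fully-connected second-level MLP producing the final same/different probability. The layer sizes, the ReLU nonlinearity in P1NN and the global standardization are illustrative assumptions.

```python
# Minimal numpy sketch of a two-tier IKGNN-style forward pass (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

n_patches, patch_dim = 64, 8 * 8        # 8x8 grid of 8x8 binary patches
h1, n_classes, h2 = 1024, 11, 2048      # assumed hidden sizes / 11 patch outputs

# Shared P1NN parameters (the same weights are applied to every patch).
W1, b1 = 0.01 * rng.standard_normal((patch_dim, h1)), np.zeros(h1)
V1, c1 = 0.01 * rng.standard_normal((h1, n_classes)), np.zeros(n_classes)
# P2NN parameters, applied to the concatenated, standardized patch outputs.
W2, b2 = 0.01 * rng.standard_normal((n_patches * n_classes, h2)), np.zeros(h2)
v2, c2 = 0.01 * rng.standard_normal(h2), 0.0

def softmax(a):
    e = np.exp(a - a.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def forward(image):                      # image: (64, 64) binary array
    # Cut the image into the 64 non-overlapping 8x8 patches.
    patches = image.reshape(8, 8, 8, 8).transpose(0, 2, 1, 3).reshape(n_patches, patch_dim)
    h = np.maximum(0.0, patches @ W1 + b1)        # P1NN hidden layer (ReLU, assumed)
    p = softmax(h @ V1 + c1)                      # per-patch class probabilities
    p = (p - p.mean()) / (p.std() + 1e-8)         # standardize P1NN outputs (simplified: global stats)
    z = np.maximum(0.0, p.reshape(-1) @ W2 + b2)  # P2NN hidden layer
    return 1.0 / (1.0 + np.exp(-(z @ v2 + c2)))   # P(one sprite differs)

print(forward(rng.integers(0, 2, size=(64, 64)).astype(float)))
```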
SSnY462CYz1Cu
Knowledge Matters: Importance of Prior Information for Optimization
[ "Çağlar Gülçehre", "Yoshua Bengio" ]
We explore the effect of introducing prior information into the intermediate level of neural networks for a learning task that all of the state-of-the-art machine learning algorithms we tested failed to learn. We motivate our work by the hypothesis that humans learn such intermediate concepts from other individuals via a form of supervision or guidance using a curriculum. The experiments we have conducted provide positive evidence in favor of this hypothesis. In our experiments, a two-tiered MLP architecture is trained on a dataset of 64x64 binary input images, each image containing three sprites. The final task is to decide whether all the sprites are the same or one of them is different. Sprites are pentomino tetris shapes, placed at different locations in the image with scaling and rotation transformations. The first part of the two-tiered MLP is pre-trained with intermediate-level targets being the presence of sprites at each location, while the second part takes the output of the first part as input and predicts the final task's binary target. The two-tiered MLP architecture, with a few tens of thousands of examples, was able to learn the task perfectly, whereas all other algorithms (including unsupervised pre-training, but also traditional algorithms like SVMs, decision trees and boosting) perform no better than chance. We hypothesize that the optimization difficulty involved when the intermediate pre-training is not performed is due to the \emph{composition} of two highly non-linear tasks. Our findings are also consistent with hypotheses on cultural learning inspired by observations of optimization difficulties with deep learning, presumably because of effective local minima.
[ "sprites", "prior information", "importance", "hypothesis", "experiments", "mlp architecture", "image", "final task", "first part", "knowledge matters" ]
https://openreview.net/pdf?id=SSnY462CYz1Cu
https://openreview.net/forum?id=SSnY462CYz1Cu
TiDHTEGclh1ro
review
1,362,381,600,000
SSnY462CYz1Cu
[ "everyone" ]
[ "anonymous reviewer 858d" ]
ICLR.cc/2013/conference
2013
title: review of Knowledge Matters: Importance of Prior Information for Optimization review: The paper by Gulcehre & Bengio entitled 'Knowledge Matters: Importance of Prior Information for Optimization' presents an empirical study which compares a two-tiered MLP architecture against traditional algorithms including SVMs, decision trees and boosting. Images used for this task are 64x64 pixel images containing tetris-like sprite shapes. The proposed task consists in trying to figure out whether all the sprites in the image are from the same category or not (invariant to 2D transformations). The main result from this study is that intermediate guidance (i.e., building by hand an architecture which 'exploits intermediate-level concepts' by dividing the problem into two stages: a classification stage followed by an XOR stage) solves the problem on which a 'naive' neural net (as well as classical machine learning algorithms) fails. Pros: The proposed task is relatively interesting as it offers an alternative to traditional pattern matching tasks used in computer vision. The experiments seem well conducted. The fact that a neural network and other universal approximators do not seem to even get close to learning the task with ~80K training examples is relatively surprising. Cons: The work by Fleuret et al (Comparing machines and humans on a visual categorization test. PNAS 2011) needs to be discussed. This paper focuses on a single task which appears to be a special case of the longer list of 'reasoning' tasks proposed by Fleuret et al. In addition, the proposed study reports a null result, which is of course always a little problematic (the fact that the authors did not manage to train a classical NN to solve the problem does not mean it is impossible). At the same time, the authors have explored reasonably well the space of hyper-parameters and seem to have done their best in getting the NN to succeed. Minor points: The structure of the paper is relatively confusing. Sections 1.1 and 2 provide a review of some published work by the authors and do not appear to be needed for understanding the paper. In my view the paper could be shortened, or at least most of the opinions/speculations in the introduction should be moved to the discussion section.
SSnY462CYz1Cu
Knowledge Matters: Importance of Prior Information for Optimization
[ "Çağlar Gülçehre", "Yoshua Bengio" ]
We explore the effect of introducing prior information into the intermediate level of neural networks for a learning task that all of the state-of-the-art machine learning algorithms we tested failed to learn. We motivate our work by the hypothesis that humans learn such intermediate concepts from other individuals via a form of supervision or guidance using a curriculum. The experiments we have conducted provide positive evidence in favor of this hypothesis. In our experiments, a two-tiered MLP architecture is trained on a dataset of 64x64 binary input images, each image containing three sprites. The final task is to decide whether all the sprites are the same or one of them is different. Sprites are pentomino tetris shapes, placed at different locations in the image with scaling and rotation transformations. The first part of the two-tiered MLP is pre-trained with intermediate-level targets being the presence of sprites at each location, while the second part takes the output of the first part as input and predicts the final task's binary target. The two-tiered MLP architecture, with a few tens of thousands of examples, was able to learn the task perfectly, whereas all other algorithms (including unsupervised pre-training, but also traditional algorithms like SVMs, decision trees and boosting) perform no better than chance. We hypothesize that the optimization difficulty involved when the intermediate pre-training is not performed is due to the \emph{composition} of two highly non-linear tasks. Our findings are also consistent with hypotheses on cultural learning inspired by observations of optimization difficulties with deep learning, presumably because of effective local minima.
[ "sprites", "prior information", "importance", "hypothesis", "experiments", "mlp architecture", "image", "final task", "first part", "knowledge matters" ]
https://openreview.net/pdf?id=SSnY462CYz1Cu
https://openreview.net/forum?id=SSnY462CYz1Cu
nMIynqm1yCndY
review
1,362,784,440,000
SSnY462CYz1Cu
[ "everyone" ]
[ "David Reichert" ]
ICLR.cc/2013/conference
2013
review: I would like to add some further comments for the purpose of constructive discussion. The authors try to provide further insights into why and when deep learning works, and to broaden the focus of the kind of questions usually asked in this community, in particular by making connections to biological cognition and learning. I think this is a good motivation. There are some issues that I would like to address though. At the core of this work is the result that algorithms can fail at solving the given classification task unless 'intermediate learning cues' are supplied. The authors cover many different algorithms to make this point. However, I think it would have been helpful to provide more empirical or theoretical analysis into *why* these algorithms fail, and what makes the task difficult. In particular, at what point does the complexity come in? Is the difficulty of the task qualitative or quantitative? The task would be qualitatively the same with just three patches and three categories of objects, or perhaps even just three multinomial units as input. I would be curious to see at least an empirical analysis into this question, by varying the complexity of the task, not just the types of algorithms and their parameters. As for answering the question of what makes the task difficult, the crux appears to be that the task implicitly requires invariant object recognition: to solve the second stage task (are all objects of the same category?), the algorithm essentially has to solve the problem of invariant object recognition first (what makes a category?). As the authors have shown, given the knowledge about object categories, the second stage task becomes easy to solve. It is interesting that the weak supervision signal provided in stage two alone is not enough to guide the algorithm to discover the object categories first, but I'm not sure that it is that surprising. Once the problem of invariant recognition has been identified, I don't think it is that 'surprising' either that unsupervised learning did not help at all. No matter how much data and how clever the algorithm, there is simply no way for an unsupervised algorithm to discover that a given tetris object and its rotated version are in some sense the same thing. This knowledge is however necessary to solve the subsequent same/different task across categories. An algorithm can only learn invariant object recognition given some additional information, either with explicit supervision or with more structure in the data and some in-built inductive biases (some form of semi-supervised learning). In this light, it is not clear to me how the work relates specifically to 'cultural learning'. The authors do not model knowledge exchange between agents as such, and it is not clear why the task at hand would be one where cultural learning is particularly relevant. The general issue of what knowledge or inductive biases are needed to learn useful representations, in particular for invariant object recognition, is indeed very interesting, and I think seldom addressed in deep learning beyond building in translation invariance. For the example of invariant object recognition, learning from temporal sequences and building in biases about 'temporal coherence' or 'slowness' (Földiák 91, Wiskott & Sejnowski 02) have been suggested as solutions. This has indeed been explored in deep learning at least in one case (Mobahi et al, 09), and might be more appropriate to address the task at hand (with sequential images).
I think that if the authors believe that cultural learning is an important ingredient to deep learning or an interesting issue on its own, they perhaps need to find a more relevant task and then show that it can be solved with a model that really utilizes cultural learning specifically, not just general supervision. Lastly, an issue I am confused by: if the second stage task (given the correct intermediate results from the first stage) corresponds to an 'XOR-like' problem, how come a single perceptron in the second stage can solve it?
SSnY462CYz1Cu
Knowledge Matters: Importance of Prior Information for Optimization
[ "Çağlar Gülçehre", "Yoshua Bengio" ]
We explore the effect of introducing prior information into the intermediate level of neural networks for a learning task on which all the state-of-the-art machine learning algorithms tested failed to learn. We motivate our work from the hypothesis that humans learn such intermediate concepts from other individuals via a form of supervision or guidance using a curriculum. The experiments we have conducted provide positive evidence in favor of this hypothesis. In our experiments, a two-tiered MLP architecture is trained on a dataset with 64x64 binary inputs images, each image with three sprites. The final task is to decide whether all the sprites are the same or one of them is different. Sprites are pentomino tetris shapes and they are placed in an image with different locations using scaling and rotation transformations. The first part of the two-tiered MLP is pre-trained with intermediate-level targets being the presence of sprites at each location, while the second part takes the output of the first part as input and predicts the final task's target binary event. The two-tiered MLP architecture, with a few tens of thousand examples, was able to learn the task perfectly, whereas all other algorithms (include unsupervised pre-training, but also traditional algorithms like SVMs, decision trees and boosting) all perform no better than chance. We hypothesize that the optimization difficulty involved when the intermediate pre-training is not performed is due to the {em composition} of two highly non-linear tasks. Our findings are also consistent with hypotheses on cultural learning inspired by the observations of optimization problems with deep learning, presumably because of effective local minima.
[ "sprites", "prior information", "importance", "hypothesis", "experiments", "mlp architecture", "image", "final task", "first part", "knowledge matters" ]
https://openreview.net/pdf?id=SSnY462CYz1Cu
https://openreview.net/forum?id=SSnY462CYz1Cu
PJcXvClTX8vdE
comment
1,363,246,140,000
D5ft5XCZd1cZw
[ "everyone" ]
[ "Çağlar Gülçehre" ]
ICLR.cc/2013/conference
2013
reply: > It is surprising that the structured MLP performs at chance even on the training set. On the other hand, with 11 output units per patch this is perhaps not so surprising, as the network has to fit everything into a minimal representation. However, one would expect to get better training set results with larger sizes. You should put such results into Table 1 and go to even larger sizes, like 100.

We conducted experiments with the structured MLP (SMLP) using 11, 50 and 100 hidden units per patch in the final layer of the locally connected part, yielding chance performance on both the training and the test set. The revision will have a table listing the results we obtained with different numbers of hidden units.

> To continue on this, if you trained sparse coding with high sparsity on each patch you should get a 1-in-N representation for each instance (with 11x4x3 or more units). It would be good to see what the P2NN would do with such a representation. I think this is the primary missing piece of this work.

That's a very nice suggestion, and indeed it was already on our list of experiments to investigate. We conducted several experiments using a one-hot representation for each patch and we include the results on these datasets in the revision.

> It is not quite fair to compare to humans, as humans have prior knowledge, specifically of rotations, probably learned from seeing objects rotate.

Humans are probably doing mental rotation (see [1]) instead of having rotation invariance, which indeed exploits one form of prior knowledge (learned or innate) or another (see [2]). We have modified the statement accordingly. We have also performed an experiment (reported in the revision) in which all the possible bit configurations of a single patch were enumerated (there are 80 of them plus the special case where no object is present). With this input representation, there is no vision prior knowledge that can help a human learner. The task is surprisingly easier but still difficult: we achieved 25% test error up to now, i.e., better than chance (i.e., 50%) but far from the less than 1% error of the IKGNN. In future work, we will measure how well humans learn that task.

> I don't think the 'Local descent hypothesis' is quite true. We don't just do local approximate descent. First, we do one-shot learning in the hippocampus. Second, we search for explanations and solutions and we do planning (both unconsciously and consciously). Sure, having more agents helps - it's a little like running a genetic algorithm - an algorithm that overcomes local minima.

One-shot learning is not incompatible with local approximate descent. For example, allocating new parameters to an example to learn by heart is moving in the descent direction from the point of view of functional gradient descent. Searching for explanations and planning belong to the realm of inference. We have inference in many graphical models while training itself still proceeds by local approximate descent. And you are right that having multiple agents sharing knowledge is like running a genetic algorithm and helps overcome some of the local minima issues.

> At the end of page 6 you say P1NN had 2048 units and P2NN 1024, but this is reversed in 3.2.2. Typo?

Thanks for pointing out that typo. The numbers in 3.2.2 are correct.

[1] Köhler, C., Hoffmann, K. P., Dehnhardt, G., & Mauck, B. (2005). Mental Rotation and Rotational Invariance in the Rhesus Monkey (Macaca mulatta). Brain, Behavior and Evolution, 66(3), 158-166.
[2] Corballis, Michael C. 'Mental rotation and the right hemisphere.' Brain and Language 57.1 (1997): 100-121.
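The patch-enumeration experiment mentioned in this reply can be illustrated with a short sketch (ours, not the authors' code): each non-empty patch is described by a sprite category, rotation and scale, giving 10 x 4 x 2 = 80 configurations; encoding an empty patch as the all-zeros vector is an assumption made here.

```python
# Hypothetical encoding of an image as 64 concatenated 80-way patch codes.
import numpy as np

N_CLASSES, N_ROT, N_SCALE = 10, 4, 2
N_CONFIGS = N_CLASSES * N_ROT * N_SCALE   # 80 possible non-empty patch configurations

def patch_code(category, rotation, scale):
    """Return an 80-dim one-hot code for a patch, or all zeros if category is None."""
    code = np.zeros(N_CONFIGS)
    if category is not None:
        idx = (category * N_ROT + rotation) * N_SCALE + scale
        code[idx] = 1.0
    return code

def image_code(patch_attrs):
    """patch_attrs: list of 64 entries, each a (category, rotation, scale) tuple or None."""
    return np.concatenate([patch_code(*a) if a else patch_code(None, 0, 0)
                           for a in patch_attrs])

# Example: three sprites of categories 3, 3 and 7 somewhere on the 8x8 grid.
attrs = [None] * 64
attrs[5], attrs[20], attrs[40] = (3, 0, 1), (3, 2, 0), (7, 1, 1)
x = image_code(attrs)
print(x.shape)   # (5120,) = 64 patches * 80 configurations
```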
SSnY462CYz1Cu
Knowledge Matters: Importance of Prior Information for Optimization
[ "Çağlar Gülçehre", "Yoshua Bengio" ]
We explore the effect of introducing prior information into the intermediate level of neural networks for a learning task that all of the state-of-the-art machine learning algorithms we tested failed to learn. We motivate our work by the hypothesis that humans learn such intermediate concepts from other individuals via a form of supervision or guidance using a curriculum. The experiments we have conducted provide positive evidence in favor of this hypothesis. In our experiments, a two-tiered MLP architecture is trained on a dataset of 64x64 binary input images, each image containing three sprites. The final task is to decide whether all the sprites are the same or one of them is different. Sprites are pentomino tetris shapes, placed at different locations in the image with scaling and rotation transformations. The first part of the two-tiered MLP is pre-trained with intermediate-level targets being the presence of sprites at each location, while the second part takes the output of the first part as input and predicts the final task's binary target. The two-tiered MLP architecture, with a few tens of thousands of examples, was able to learn the task perfectly, whereas all other algorithms (including unsupervised pre-training, but also traditional algorithms like SVMs, decision trees and boosting) perform no better than chance. We hypothesize that the optimization difficulty involved when the intermediate pre-training is not performed is due to the \emph{composition} of two highly non-linear tasks. Our findings are also consistent with hypotheses on cultural learning inspired by observations of optimization difficulties with deep learning, presumably because of effective local minima.
[ "sprites", "prior information", "importance", "hypothesis", "experiments", "mlp architecture", "image", "final task", "first part", "knowledge matters" ]
https://openreview.net/pdf?id=SSnY462CYz1Cu
https://openreview.net/forum?id=SSnY462CYz1Cu
L8RreQWdPS3jz
review
1,363,278,840,000
SSnY462CYz1Cu
[ "everyone" ]
[ "Çağlar Gülçehre" ]
ICLR.cc/2013/conference
2013
review: Replies to the reviewers' comments were prepared by both authors of the paper: Yoshua Bengio and Çağlar Gülçehre.
SSnY462CYz1Cu
Knowledge Matters: Importance of Prior Information for Optimization
[ "Çağlar Gülçehre", "Yoshua Bengio" ]
We explore the effect of introducing prior information into the intermediate level of neural networks for a learning task that all of the state-of-the-art machine learning algorithms we tested failed to learn. We motivate our work by the hypothesis that humans learn such intermediate concepts from other individuals via a form of supervision or guidance using a curriculum. The experiments we have conducted provide positive evidence in favor of this hypothesis. In our experiments, a two-tiered MLP architecture is trained on a dataset of 64x64 binary input images, each image containing three sprites. The final task is to decide whether all the sprites are the same or one of them is different. Sprites are pentomino tetris shapes, placed at different locations in the image with scaling and rotation transformations. The first part of the two-tiered MLP is pre-trained with intermediate-level targets being the presence of sprites at each location, while the second part takes the output of the first part as input and predicts the final task's binary target. The two-tiered MLP architecture, with a few tens of thousands of examples, was able to learn the task perfectly, whereas all other algorithms (including unsupervised pre-training, but also traditional algorithms like SVMs, decision trees and boosting) perform no better than chance. We hypothesize that the optimization difficulty involved when the intermediate pre-training is not performed is due to the \emph{composition} of two highly non-linear tasks. Our findings are also consistent with hypotheses on cultural learning inspired by observations of optimization difficulties with deep learning, presumably because of effective local minima.
[ "sprites", "prior information", "importance", "hypothesis", "experiments", "mlp architecture", "image", "final task", "first part", "knowledge matters" ]
https://openreview.net/pdf?id=SSnY462CYz1Cu
https://openreview.net/forum?id=SSnY462CYz1Cu
D5ft5XCZd1cZw
review
1,361,980,800,000
SSnY462CYz1Cu
[ "everyone" ]
[ "anonymous reviewer ed64" ]
ICLR.cc/2013/conference
2013
title: review of Knowledge Matters: Importance of Prior Information for Optimization review: The paper gives an example of a task that a neural net solves perfectly when intermediate labels are provided, but that is not solved at all by several machine learning algorithms, including a neural net, when the intermediate labels are not provided. I consider the result important. Comments: It is surprising that the structured MLP performs at chance even on the training set. On the other hand, with 11 output units per patch this is perhaps not so surprising, as the network has to fit everything into a minimal representation. However, one would expect to get better training set results with larger sizes. You should put such results into Table 1 and go to even larger sizes, like 100. To continue on this, if you trained sparse coding with high sparsity on each patch you should get a 1-in-N representation for each instance (with 11x4x3 or more units). It would be good to see what the P2NN would do with such a representation. I think this is the primary missing piece of this work. It is not quite fair to compare to humans, as humans have prior knowledge, specifically of rotations, probably learned from seeing objects rotate. I don't think the 'Local descent hypothesis' is quite true. We don't just do local approximate descent. First, we do one-shot learning in the hippocampus. Second, we search for explanations and solutions and we do planning (both unconsciously and consciously). Sure, having more agents helps - it's a little like running a genetic algorithm - an algorithm that overcomes local minima. At the end of page 6 you say P1NN had 2048 units and P2NN 1024, but this is reversed in 3.2.2. Typo?
SSnY462CYz1Cu
Knowledge Matters: Importance of Prior Information for Optimization
[ "Çağlar Gülçehre", "Yoshua Bengio" ]
We explore the effect of introducing prior information into the intermediate level of neural networks for a learning task that all of the state-of-the-art machine learning algorithms we tested failed to learn. We motivate our work by the hypothesis that humans learn such intermediate concepts from other individuals via a form of supervision or guidance using a curriculum. The experiments we have conducted provide positive evidence in favor of this hypothesis. In our experiments, a two-tiered MLP architecture is trained on a dataset of 64x64 binary input images, each image containing three sprites. The final task is to decide whether all the sprites are the same or one of them is different. Sprites are pentomino tetris shapes, placed at different locations in the image with scaling and rotation transformations. The first part of the two-tiered MLP is pre-trained with intermediate-level targets being the presence of sprites at each location, while the second part takes the output of the first part as input and predicts the final task's binary target. The two-tiered MLP architecture, with a few tens of thousands of examples, was able to learn the task perfectly, whereas all other algorithms (including unsupervised pre-training, but also traditional algorithms like SVMs, decision trees and boosting) perform no better than chance. We hypothesize that the optimization difficulty involved when the intermediate pre-training is not performed is due to the \emph{composition} of two highly non-linear tasks. Our findings are also consistent with hypotheses on cultural learning inspired by observations of optimization difficulties with deep learning, presumably because of effective local minima.
[ "sprites", "prior information", "importance", "hypothesis", "experiments", "mlp architecture", "image", "final task", "first part", "knowledge matters" ]
https://openreview.net/pdf?id=SSnY462CYz1Cu
https://openreview.net/forum?id=SSnY462CYz1Cu
lLgil9MwiZ3Vu
review
1,363,246,680,000
SSnY462CYz1Cu
[ "everyone" ]
[ "Çağlar Gülçehre" ]
ICLR.cc/2013/conference
2013
review: We have uploaded the revision of the paper to arXiv. The revision will be announced by arXiv soon.
SSnY462CYz1Cu
Knowledge Matters: Importance of Prior Information for Optimization
[ "Çağlar Gülçehre", "Yoshua Bengio" ]
We explore the effect of introducing prior information into the intermediate level of neural networks for a learning task that all of the state-of-the-art machine learning algorithms we tested failed to learn. We motivate our work by the hypothesis that humans learn such intermediate concepts from other individuals via a form of supervision or guidance using a curriculum. The experiments we have conducted provide positive evidence in favor of this hypothesis. In our experiments, a two-tiered MLP architecture is trained on a dataset of 64x64 binary input images, each image containing three sprites. The final task is to decide whether all the sprites are the same or one of them is different. Sprites are pentomino tetris shapes, placed at different locations in the image with scaling and rotation transformations. The first part of the two-tiered MLP is pre-trained with intermediate-level targets being the presence of sprites at each location, while the second part takes the output of the first part as input and predicts the final task's binary target. The two-tiered MLP architecture, with a few tens of thousands of examples, was able to learn the task perfectly, whereas all other algorithms (including unsupervised pre-training, but also traditional algorithms like SVMs, decision trees and boosting) perform no better than chance. We hypothesize that the optimization difficulty involved when the intermediate pre-training is not performed is due to the \emph{composition} of two highly non-linear tasks. Our findings are also consistent with hypotheses on cultural learning inspired by observations of optimization difficulties with deep learning, presumably because of effective local minima.
[ "sprites", "prior information", "importance", "hypothesis", "experiments", "mlp architecture", "image", "final task", "first part", "knowledge matters" ]
https://openreview.net/pdf?id=SSnY462CYz1Cu
https://openreview.net/forum?id=SSnY462CYz1Cu
OblAf-quHwf1V
comment
1,363,246,380,000
nMIynqm1yCndY
[ "everyone" ]
[ "Çağlar Gülçehre" ]
ICLR.cc/2013/conference
2013
reply: > However, I think it would have been helpful to provide more empirical or theoretical analysis into *why* these algorithms fail, and what makes the task difficult. In particular, at what point does the complexity come in? Is the difficulty of the task qualitative or quantitative? The task would be qualitatively the same with just three patches and three categories of objects, or perhaps even just three multinomial units as input. I would be curious to see at least an empirical analysis into this question, by varying the complexity of the task, not just the types of algorithms and their parameters.

We have done more experiments to explore the effect of the difficulty of the task. In particular, we considered three settings aimed at making the task gradually easier: (1) map each possible patch-level input vector into an integer (one out of 81 = 1 for no-object + 10 x 4 x 2) and a corresponding one-hot 80-bit input vector (and feed the concatenation of these 64 vectors as input to a classifier), (2) map each possible patch-level input vector into a disentangled representation with 3 one-hot vectors (10 bits + 4 bits + 2 bits) in which the class can be read directly (and which one could imagine as the best possible outcome of unsupervised pre-training), and (3) only retain the actual object categories (with only the first 10 bits per patch, for the 10 classes). We found that (2) and (3) can be learned perfectly, while (1) can be partially learned (down to about 30% error with 80k training examples). So it looks like part of the problem (as we had surmised) is to separate class information from the other factors, while somehow the image-like encoding is actually harder to learn from (probably an ill-conditioning problem) than the one-hot encoding per patch.

> As for answering the question of what makes the task difficult, the crux appears to be that the task implicitly requires invariant object recognition: to solve the second stage task (are all objects of the same category?), the algorithm essentially has to solve the problem of invariant object recognition first (what makes a category?). As the authors have shown, given the knowledge about object categories, the second stage task becomes easy to solve. It is interesting that the weak supervision signal provided in stage two alone is not enough to guide the algorithm to discover the object categories first, but I'm not sure that it is that surprising.

Visually much more complex tasks are being rather successfully handled with deep convolutional nets, as in the recent work by Krizhevsky et al. at NIPS 2012. It is therefore surprising that such a simplified task would make most learning algorithms fail. We believe it boils down to an optimization issue (the difficulty of training the lower layers well, in spite of correct supervised learning gradients being computed through the upper layers) and our experiments are consistent with that hypothesis. The experiments described above with disentangled inputs suggest that if unsupervised learning was doing an optimal job, it should be possible to solve the problem.

> In this light, it is not clear to me how the work relates specifically to 'cultural learning'. The authors do not model knowledge exchange between agents as such, and it is not clear why the task at hand would be one where cultural learning is particularly relevant. The general issue of what knowledge or inductive biases are needed to learn useful representations, in particular for invariant object recognition, is indeed very interesting, and I think seldom addressed in deep learning beyond building in translation invariance. For the example of invariant object recognition, learning from temporal sequences and building in biases about 'temporal coherence' or 'slowness' (Földiák 91, Wiskott & Sejnowski 02) have been suggested as solutions. This has indeed been explored in deep learning at least in one case (Mobahi et al, 09), and might be more appropriate to address the task at hand (with sequential images). I think that if the authors believe that cultural learning is an important ingredient to deep learning or an interesting issue on its own, they perhaps need to find a more relevant task and then show that it can be solved with a model that really utilizes cultural learning specifically, not just general supervision.

The main difficulty of this task stems from the composition of two distinct tasks: the first is invariant object recognition, and the second is learning the logical relation between the objects in the image. Each task can be solved fairly easily on its own, otherwise the IKGNN couldn't learn this task. But we claim that the combination of these two tasks raises an optimization difficulty that the machine learning algorithms we have tried failed to overcome. We are aware that slow features might be useful for solving this task and we plan to investigate that as well. We also believe that, as such, temporal coherence would be a much more plausible explanation as to how humans learn such visual tasks, since humans learn to see quite well with little or no verbal cues from parents or teachers (and of course, all the other animals that have very good vision do not have a culture, or one nearly as developed as that of humans). On the other hand, we believe that this kind of two-level abstraction learning problem illustrates a more general training difficulty that humans may face when trying to learn higher-level abstractions (precisely of the kind that we need teachers for). Unfortunately there is not yet much work combining cultural learning and deep learning. This paper is meant to lay the motivational grounds for such work, by showing simple examples where we might need cultural learning and where ordinary supervised learning (without intermediate concept guidance) or even unsupervised pre-training faces a very difficult training challenge. The other connection is that these experiments are consistent with aspects of the cultural learning hypotheses laid down in Bengio 2012: if learning more abstract concepts (which requires a deeper architecture that captures distinct abstractions, as in our task) is a serious optimization challenge, this challenge could also be an issue for brains, making it all the more important to explain how humans manage to deal with such problems (presumably thanks to the guidance of other humans, e.g., by providing hints about intermediate abstractions). We wanted to show that there are problems that are inherently hard for current machine learning algorithms and to motivate cultural learning: distributed and parallelized learning of such higher-level concepts might be more efficient for solving this kind of task.

> Lastly, an issue I am confused by: if the second stage task (given the correct intermediate results from the first stage) corresponds to an 'XOR-like' problem, how come a single perceptron in the second stage can solve it?

In the second stage there are indeed hidden units: it is not a simple perceptron but a simple MLP. We used a ReLU MLP with 2048 hidden units and a sigmoid output trained with a cross-entropy objective.
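To make settings (2) and (3) from this reply concrete, here is a hypothetical sketch (not the authors' code) of how such inputs could be built; the 81-way one-hot code of setting (1) is analogous (see the earlier sketch), and treating an empty patch as an all-zeros code is our assumption.

```python
# Hypothetical construction of the disentangled (10+4+2) and class-only (10) patch codes.
import numpy as np

def one_hot(i, n):
    v = np.zeros(n)
    v[i] = 1.0
    return v

def disentangled_code(attr):
    """attr: (category, rotation, scale) or None for an empty patch."""
    if attr is None:
        return np.zeros(10 + 4 + 2)
    c, r, s = attr
    return np.concatenate([one_hot(c, 10), one_hot(r, 4), one_hot(s, 2)])

def class_only_code(attr):
    """Keep only the 10 class bits (setting 3)."""
    return np.zeros(10) if attr is None else one_hot(attr[0], 10)

attrs = [None] * 64
attrs[5], attrs[20], attrs[40] = (3, 0, 1), (3, 2, 0), (7, 1, 1)
x2 = np.concatenate([disentangled_code(a) for a in attrs])   # setting (2)
x3 = np.concatenate([class_only_code(a) for a in attrs])     # setting (3)
print(x2.shape, x3.shape)   # (1024,) (640,)
```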
SSnY462CYz1Cu
Knowledge Matters: Importance of Prior Information for Optimization
[ "Çağlar Gülçehre", "Yoshua Bengio" ]
We explore the effect of introducing prior information into the intermediate level of neural networks for a learning task that all of the state-of-the-art machine learning algorithms we tested failed to learn. We motivate our work by the hypothesis that humans learn such intermediate concepts from other individuals via a form of supervision or guidance using a curriculum. The experiments we have conducted provide positive evidence in favor of this hypothesis. In our experiments, a two-tiered MLP architecture is trained on a dataset of 64x64 binary input images, each image containing three sprites. The final task is to decide whether all the sprites are the same or one of them is different. Sprites are pentomino tetris shapes, placed at different locations in the image with scaling and rotation transformations. The first part of the two-tiered MLP is pre-trained with intermediate-level targets being the presence of sprites at each location, while the second part takes the output of the first part as input and predicts the final task's binary target. The two-tiered MLP architecture, with a few tens of thousands of examples, was able to learn the task perfectly, whereas all other algorithms (including unsupervised pre-training, but also traditional algorithms like SVMs, decision trees and boosting) perform no better than chance. We hypothesize that the optimization difficulty involved when the intermediate pre-training is not performed is due to the \emph{composition} of two highly non-linear tasks. Our findings are also consistent with hypotheses on cultural learning inspired by observations of optimization difficulties with deep learning, presumably because of effective local minima.
[ "sprites", "prior information", "importance", "hypothesis", "experiments", "mlp architecture", "image", "final task", "first part", "knowledge matters" ]
https://openreview.net/pdf?id=SSnY462CYz1Cu
https://openreview.net/forum?id=SSnY462CYz1Cu
6s7Ys8Q5JbfHZ
review
1,362,262,800,000
SSnY462CYz1Cu
[ "everyone" ]
[ "anonymous reviewer dfef" ]
ICLR.cc/2013/conference
2013
title: review of Knowledge Matters: Importance of Prior Information for Optimization review: In this paper, the authors provide an exposition of curriculum learning and cultural evolution as solutions to the effective local minimum problem. The authors provide a detailed set of simulations that support a curriculum theory of learning, which rely on a supervisory training signal of intermediate task variables that are relevant for the task. Pros: This work is important to probe the limitations of current algorithms, especially as the deep learning field continues to have success. A great thing about this paper is that it got me thinking about new classes of algorithms that might effectively solve the mid-level optimization problem, and about more effective strategies for training deep networks for practical tasks. The simulations are well described and compelling. Cons: The exposition on curriculum learning could be condensed. (minor) The demonstrative problem (sprite counting) is a visual perception problem and therefore carries with it the biases of our own perception and inferred strategies. Maybe the overall argument might be bolstered by the addition of a more abstract example? Here are some questions: why so many image regions? Why use an 8x8 grid? Won't 3 regions suffice to make the point? Or is this related to the complexity of the problem. A related question: how are the results affected by the number of these regions? Maybe some reduced tests at the extremes would be interesting, i.e. with only 3 regions, and 32 (you have 64 already)? In the networks that solve the task, are the weights that are learned symmetric over the image regions? i.e. are these weights identical (maybe up to some scaling and sign flip). Is there anything you have determined about the structure of the learned second layer of the IKGNN? Furthermore, what about including a 'weight sharing' constraint in the general MLP model (the one that does not solve the problem, but has the same structure as the one that does)? Would including this constraint change the solution? (the constraint is already in the P1NN, but what about adding it into the P2NN?) Another way to ask this is: Is enforcing translation invariance in the network sufficient to achieve good performance, or do we need to specifically train for the sprite discrimination? A technical point about the assumption of human performance on this task: Do we know if humans can solve this problem 'in a glance?': flashing the image for a small amount of time ~100-200msecs. Either with or without a mask? It seems that the networks you have derived are solving such a problem 'in a glance.' A more meta comment: Is there an argument to be made that the sequential nature of language allows humans to solve this task? Even the way you formulate the problem suggests this sequential process: 'are all of the sprites in the image the same?': in other words 'find the sprites, then decide if they are the same' When I imagine solving this problem myself, I imagine performing a more sequential process: look at one sprite, then the next, (is it the same?, if it is): look at the next sprite (is it the same?). I know that we can consider this problem to be a concrete example of a more abstract learning problem, but it's not clear if humans can solve such problems without sequential processing. Anyway, this is not a criticism, per se, just food for thought.
SSnY462CYz1Cu
Knowledge Matters: Importance of Prior Information for Optimization
[ "Çağlar Gülçehre", "Yoshua Bengio" ]
We explore the effect of introducing prior information into the intermediate level of neural networks for a learning task that all of the state-of-the-art machine learning algorithms we tested failed to learn. We motivate our work by the hypothesis that humans learn such intermediate concepts from other individuals via a form of supervision or guidance using a curriculum. The experiments we have conducted provide positive evidence in favor of this hypothesis. In our experiments, a two-tiered MLP architecture is trained on a dataset of 64x64 binary input images, each image containing three sprites. The final task is to decide whether all the sprites are the same or one of them is different. Sprites are pentomino tetris shapes, placed at different locations in the image with scaling and rotation transformations. The first part of the two-tiered MLP is pre-trained with intermediate-level targets being the presence of sprites at each location, while the second part takes the output of the first part as input and predicts the final task's binary target. The two-tiered MLP architecture, with a few tens of thousands of examples, was able to learn the task perfectly, whereas all other algorithms (including unsupervised pre-training, but also traditional algorithms like SVMs, decision trees and boosting) perform no better than chance. We hypothesize that the optimization difficulty involved when the intermediate pre-training is not performed is due to the \emph{composition} of two highly non-linear tasks. Our findings are also consistent with hypotheses on cultural learning inspired by observations of optimization difficulties with deep learning, presumably because of effective local minima.
[ "sprites", "prior information", "importance", "hypothesis", "experiments", "mlp architecture", "image", "final task", "first part", "knowledge matters" ]
https://openreview.net/pdf?id=SSnY462CYz1Cu
https://openreview.net/forum?id=SSnY462CYz1Cu
MF7RMafDRkF_A
comment
1,363,246,320,000
TiDHTEGclh1ro
[ "everyone" ]
[ "Çağlar Gülçehre" ]
ICLR.cc/2013/conference
2013
reply: > The work by Fleuret et al (Comparing machines and humans on a visual categorization test. PNAS 2011) needs to be discussed. This paper focuses on a single task which appears to be a special case of the longer list of 'reasoning' tasks proposed by Fleuret et al.

Yes, we agree that some of the tasks in the Fleuret et al. paper are similar to our task. We cited this paper in the new revision. Thanks for pointing it out. The biggest difference between the Fleuret et al. paper and our approach is that we purposely did not use any preprocessing, in order to make the task *difficult* and show the limitations of a vast range of learning algorithms. This highlights differences between the goals of those papers, of course.

> In addition, the proposed study reports a null result, which is of course always a little problematic (the fact that the authors did not manage to train a classical NN to solve the problem does not mean it is impossible). At the same time, the authors have explored reasonably well the space of hyper-parameters and seem to have done their best in getting the NN to succeed.

We agree with that statement. Nonetheless, negative results (especially when they are confirmed by other labs) can have a powerful impact on research, by highlighting the limitations of current algorithms and thus directing research fruitfully towards addressing important challenges. It is unfortunately more difficult to publish negative results in our community, in part because computer scientists do not have, as much as other scientists (such as biologists), the culture of replicating experiments and publishing these validations.

> Minor points: The structure of the paper is relatively confusing. Sections 1.1 and 2 provide a review of some published work by the authors and do not appear to be needed for understanding the paper. In my view the paper could be shortened or at least most of the opinions/speculations in the introduction should be moved to the discussion section.

We disagree. The main motivation for these experiments was to empirically validate some aspects of the hypotheses discussed in Bengio 2012 on local minima and cultural evolution. If learning more abstract concepts (that require a deeper architecture) is a serious optimization challenge, this challenge could also be an issue for brains, making it all the more important to explain how humans manage to deal with such problems (presumably thanks to the guidance of other humans, e.g., by providing hints about intermediate abstractions).
BBIbj9w8Lvj8F
Efficient Learning of Domain-invariant Image Representations
[ "Judy Hoffman", "Erik Rodner", "Jeff Donahue", "Kate Saenko", "Trevor Darrell" ]
We present an algorithm that learns representations which explicitly compensate for domain mismatch and which can be efficiently realized as linear classifiers. Specifically, we form a linear transformation that maps features from the target (test) domain to the source (training) domain as part of training the classifier. We optimize both the transformation and classifier parameters jointly, and introduce an efficient cost function based on misclassification loss. Our method combines several features previously unavailable in a single algorithm: multi-class adaptation through representation learning, ability to map across heterogeneous feature spaces, and scalability to large datasets. We present experiments on several image datasets that demonstrate improved accuracy and computational advantages compared to previous approaches.
[ "image representations", "domain", "efficient learning", "algorithm", "representations", "domain mismatch", "linear classifiers", "linear transformation", "features", "target" ]
https://openreview.net/pdf?id=BBIbj9w8Lvj8F
https://openreview.net/forum?id=BBIbj9w8Lvj8F
t-wFtMYSdpR8v
comment
1,362,994,020,000
y-XNy_0Refysb
[ "everyone" ]
[ "Judy Hoffman, Erik Rodner, Jeff Donahue, Trevor Darrell, Kate Saenko" ]
ICLR.cc/2013/conference
2013
reply: Please see the comment below (from March 3rd). We have updated the paper to incorporate your comments.
BBIbj9w8Lvj8F
Efficient Learning of Domain-invariant Image Representations
[ "Judy Hoffman", "Erik Rodner", "Jeff Donahue", "Kate Saenko", "Trevor Darrell" ]
We present an algorithm that learns representations which explicitly compensate for domain mismatch and which can be efficiently realized as linear classifiers. Specifically, we form a linear transformation that maps features from the target (test) domain to the source (training) domain as part of training the classifier. We optimize both the transformation and classifier parameters jointly, and introduce an efficient cost function based on misclassification loss. Our method combines several features previously unavailable in a single algorithm: multi-class adaptation through representation learning, ability to map across heterogeneous feature spaces, and scalability to large datasets. We present experiments on several image datasets that demonstrate improved accuracy and computational advantages compared to previous approaches.
[ "image representations", "domain", "efficient learning", "algorithm", "representations", "domain mismatch", "linear classifiers", "linear transformation", "features", "target" ]
https://openreview.net/pdf?id=BBIbj9w8Lvj8F
https://openreview.net/forum?id=BBIbj9w8Lvj8F
y-XNy_0Refysb
review
1,362,214,260,000
BBIbj9w8Lvj8F
[ "everyone" ]
[ "anonymous reviewer 9aa4" ]
ICLR.cc/2013/conference
2013
title: review of Efficient Learning of Domain-invariant Image Representations review: This paper focuses on multi-task learning across domains, where both the data generating distribution and the output labels can change between source and target domains. It presents an SVM-based model which jointly learns 1) affine hyperplanes that separate the classes in a common domain consisting of the source and the target projected to the source; and 2) a linear transformation mapping points from the target domain into the source domain. Positive points 1) The method is dead simple and seems technically sound. To the best of my knowledge it's novel, but I'm not as familiar with the SVM literature - I am hoping that another reviewer comes from the SVM community and can better assess its novelty. 2) The paper is well written and understandable 3) The experiments seem thorough: several datasets and tasks are considered, and the model is compared to various baselines. The model is shown to outperform contemporary domain adaptation methods, generalize to novel categories at test time (which many other methods cannot do) and scale to large datasets. Negative points I have one major criticism: the paper doesn't seem really focused on representation learning - it's more a paper about a method for multi-task learning across domains which learns a (shallow, linear) mapping from source to target. I agree - it's a representation, but there's no real analysis or focus on the representation itself - e.g. what is being captured by the representation. The method is totally valid, but I just get the sense that it's a paper that could fit well with CVPR or ICCV (i.e. a good vision paper) where the title says 'representation learning', and a few sentences highlight the 'representation' that's being learned; however, neither the method nor the paper's focus is really on learning interesting representations. On one hand I question its suitability for ICLR and its appeal to the community (compared to CVPR/ICCV, etc.), but on the other hand, I think it's great to encourage diversity in the papers/authors at the conference and having a more 'visiony'-feeling paper is not a bad thing. Comments -------- Can you state up front what is meant by the asymmetry of the transform (e.g. when it's first mentioned)? Later on in the paper it becomes clear that it has to do with the source and target having different feature dimensions, but it wasn't obvious to me at the beginning of the paper. Just before Eq (4) and (5) it says that 'we begin by rewriting Eq 1-3 with soft constraints (slack)'. But where are the slack variables in Eq 4?
BBIbj9w8Lvj8F
Efficient Learning of Domain-invariant Image Representations
[ "Judy Hoffman", "Erik Rodner", "Jeff Donahue", "Kate Saenko", "Trevor Darrell" ]
We present an algorithm that learns representations which explicitly compensate for domain mismatch and which can be efficiently realized as linear classifiers. Specifically, we form a linear transformation that maps features from the target (test) domain to the source (training) domain as part of training the classifier. We optimize both the transformation and classifier parameters jointly, and introduce an efficient cost function based on misclassification loss. Our method combines several features previously unavailable in a single algorithm: multi-class adaptation through representation learning, ability to map across heterogeneous feature spaces, and scalability to large datasets. We present experiments on several image datasets that demonstrate improved accuracy and computational advantages compared to previous approaches.
[ "image representations", "domain", "efficient learning", "algorithm", "representations", "domain mismatch", "linear classifiers", "linear transformation", "features", "target" ]
https://openreview.net/pdf?id=BBIbj9w8Lvj8F
https://openreview.net/forum?id=BBIbj9w8Lvj8F
u3MkubcB_YIB0
review
1,362,393,540,000
BBIbj9w8Lvj8F
[ "everyone" ]
[ "anonymous reviewer feb2" ]
ICLR.cc/2013/conference
2013
title: review of Efficient Learning of Domain-invariant Image Representations review: This paper proposes to make domain adaptation and multi-task learning easier by jointly learning the task-specific max-margin classifiers and a linear mapping from a new target space to the source space; the loss function encourages the mapped features to lie on the correct side of the hyperplanes of the max-margin classifiers learned for each task. Experiments show that the mapping performs as well as or better than existing domain adaptation methods, but can scale to larger problems while many earlier approaches are too costly. Overall the paper is clear, well-crafted, and the context and previous work are well presented. The idea is appealing in its simplicity, and works well. Pros: the idea is intuitive and well justified; it is appealing that the method is flexible and can tackle cases where labels are missing for some categories. The paper is clear and well-written. Experimental results are convincing enough; while the results do not outperform the state of the art (they are within the standard error of previously published performance), the authors' argument that their method is better suited to cases where domains are more different seems reasonable and backed by their experimental results. Cons: this method would work only in cases where a simple general linear transformation of the features would do a good job placing them in a favorable space. The method also gives a privileged role to the source space, while methods that map features to a common latent space have more symmetry; the authors argue that it is hard to guess the optimal dimension of the latent space -- but their method simply constrains it to the size of the source space, so there is no guarantee that this would be any more optimal.
BBIbj9w8Lvj8F
Efficient Learning of Domain-invariant Image Representations
[ "Judy Hoffman", "Erik Rodner", "Jeff Donahue", "Kate Saenko", "Trevor Darrell" ]
We present an algorithm that learns representations which explicitly compensate for domain mismatch and which can be efficiently realized as linear classifiers. Specifically, we form a linear transformation that maps features from the target (test) domain to the source (training) domain as part of training the classifier. We optimize both the transformation and classifier parameters jointly, and introduce an efficient cost function based on misclassification loss. Our method combines several features previously unavailable in a single algorithm: multi-class adaptation through representation learning, ability to map across heterogeneous feature spaces, and scalability to large datasets. We present experiments on several image datasets that demonstrate improved accuracy and computational advantages compared to previous approaches.
[ "image representations", "domain", "efficient learning", "algorithm", "representations", "domain mismatch", "linear classifiers", "linear transformation", "features", "target" ]
https://openreview.net/pdf?id=BBIbj9w8Lvj8F
https://openreview.net/forum?id=BBIbj9w8Lvj8F
tWNADGgy0XWy2
comment
1,362,971,700,000
u3MkubcB_YIB0
[ "everyone" ]
[ "Judy Hoffman, Erik Rodner, Jeff Donahue, Trevor Darrell, Kate Saenko" ]
ICLR.cc/2013/conference
2013
reply: Thank you for your review. In this paper we present a method that learns an asymmetric linear mapping between the source and target feature spaces. In general, the feature transformation learning can be kernelized (the optimization framework can be formulated as a standard QP). However, for this work we focus on the linear case because of its scalability to a large number of data points. We show that using the linear framework we perform as well as or better than other methods which learn a non-linear mapping. We learn a transformation between the target and source points which can be expressed by the matrix W in our paper. In this paper, we use this matrix to compute the dot product in the source domain between theta_k and the transformed target points (Wx^t_i). However, if we think of W (an asymmetric matrix) as being decomposed as W = A'B, then the dot product function can be interpreted as theta_k'A'Bx^t_i. In other words, it could be interpreted as the dot product in some common latent space between the source-domain hyperplanes transformed by A and the target points transformed by B. We propose learning the W matrix rather than A and B directly so that we do not have to specify the dimension of the latent space.
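The equivalence described in this reply (scoring with an asymmetric W versus a dot product in an implicit latent space via W = A'B) can be checked numerically in a few lines. The SVD below is just one possible factorization chosen for the illustration; the dimensions and random matrices are stand-ins, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
ds, dt = 20, 15                             # source and target dimensions (assumed)
theta_k = rng.normal(size=ds)               # a source-domain hyperplane
x_t = rng.normal(size=dt)                   # a target point
W = rng.normal(size=(ds, dt))               # learned asymmetric map (random stand-in)

# Score used in the paper: dot product in the source domain with the mapped point.
score_direct = theta_k @ (W @ x_t)

# Any factorization W = A'B gives the same score, now read as a dot product in a
# latent space: (A theta_k) . (B x_t).  One such factorization via the SVD:
U, s, Vt = np.linalg.svd(W, full_matrices=False)
A = (U * s).T                               # (r, ds): maps the source side into the latent space
B = Vt                                      # (r, dt): maps the target side into the latent space
score_latent = (A @ theta_k) @ (B @ x_t)

assert np.allclose(score_direct, score_latent)
```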
BBIbj9w8Lvj8F
Efficient Learning of Domain-invariant Image Representations
[ "Judy Hoffman", "Erik Rodner", "Jeff Donahue", "Kate Saenko", "Trevor Darrell" ]
We present an algorithm that learns representations which explicitly compensate for domain mismatch and which can be efficiently realized as linear classifiers. Specifically, we form a linear transformation that maps features from the target (test) domain to the source (training) domain as part of training the classifier. We optimize both the transformation and classifier parameters jointly, and introduce an efficient cost function based on misclassification loss. Our method combines several features previously unavailable in a single algorithm: multi-class adaptation through representation learning, ability to map across heterogeneous feature spaces, and scalability to large datasets. We present experiments on several image datasets that demonstrate improved accuracy and computational advantages compared to previous approaches.
[ "image representations", "domain", "efficient learning", "algorithm", "representations", "domain mismatch", "linear classifiers", "linear transformation", "features", "target" ]
https://openreview.net/pdf?id=BBIbj9w8Lvj8F
https://openreview.net/forum?id=BBIbj9w8Lvj8F
JnShnsXduOpVA
review
1,362,367,200,000
BBIbj9w8Lvj8F
[ "everyone" ]
[ "Judy Hoffman, Erik Rodner, Jeff Donahue, Trevor Darrell, Kate Saenko" ]
ICLR.cc/2013/conference
2013
review: Thank you for your feedback. We argue that the task of adapting representations across domains is one that is common to all representation learning challenges, including those based on deep architectures, metric learning methods, and max-margin transform learning. Our insight into this problem is to use the source classifier to inform the representation learned for the target data. Specifically, we jointly learn a source domain classifier and a representation for the target domain, such that the target points can be well classified in the source domain. We present a specific algorithm using an SVM classifier and test on visual domains; however, the principles of our method are applicable both to a range of methods for learning and classification (beyond SVM) and to a range of applications (beyond vision). In addition, thank you for the detailed comments. We will clarify what is meant by an asymmetric transform and modify the wording around equations (4-5) to reflect the math shown, which has soft constraints and no slack variables.
BBIbj9w8Lvj8F
Efficient Learning of Domain-invariant Image Representations
[ "Judy Hoffman", "Erik Rodner", "Jeff Donahue", "Kate Saenko", "Trevor Darrell" ]
We present an algorithm that learns representations which explicitly compensate for domain mismatch and which can be efficiently realized as linear classifiers. Specifically, we form a linear transformation that maps features from the target (test) domain to the source (training) domain as part of training the classifier. We optimize both the transformation and classifier parameters jointly, and introduce an efficient cost function based on misclassification loss. Our method combines several features previously unavailable in a single algorithm: multi-class adaptation through representation learning, ability to map across heterogeneous feature spaces, and scalability to large datasets. We present experiments on several image datasets that demonstrate improved accuracy and computational advantages compared to previous approaches.
[ "image representations", "domain", "efficient learning", "algorithm", "representations", "domain mismatch", "linear classifiers", "linear transformation", "features", "target" ]
https://openreview.net/pdf?id=BBIbj9w8Lvj8F
https://openreview.net/forum?id=BBIbj9w8Lvj8F
Ua0HJI2r-Waro
review
1,362,783,240,000
BBIbj9w8Lvj8F
[ "everyone" ]
[ "anonymous reviewer 36a3" ]
ICLR.cc/2013/conference
2013
title: review of Efficient Learning of Domain-invariant Image Representations review: The paper presents a new method for learning domain-invariant image representations. The proposed approach simultaneously learns a linear mapping of the target features into the source domain and the parameters of a multi-class linear SVM classifier. Experimental evaluations show that the proposed approach performs similarly to or better than previous art. The new algorithm presents computational advantages with respect to previous approaches. The paper is well written and clearly presented. It addresses an interesting problem that has received attention in recent years. The proposed method is considerably simpler than competing approaches with similar (or better) performance (in the setting of the reported experiments). The method is not very novel but manages to improve on some drawbacks of previous approaches. Pros: - the proposed framework is fairly simple and the provided implementation details make it easy to reproduce - experimental evaluation is presented, comparing the proposed method with several competing approaches. The amount of empirical evidence seems sufficient to back up the claims. Cons: - Since the method is general, I think that it would have been very good to include an example with more distinct source and target feature spaces (e.g. text categorization), or, even better, different modalities. Comments: In the work [15], the authors propose a metric that measures the adaptability between a pair of source and target domains. In this setting, if several possible source domains are available, it selects the best one. How could this be considered in your setting? In the first experimental setting (the standard domain adaptation problem), I understand that the idea of the experiment is to show how the labeled data in the source domain can help to better classify the data in the target domain. It is not clear to me how the SVM trained with training data of the target domain, SVM_t, is obtained. Is this done only with the limited set of labeled data in the target domain? What is the case for SVM_s? Looking at the last experimental setting, I suppose that SVM_s (trained using source training data) also includes the transformed data from the target domain. Otherwise, I don't understand how the performance can increase as the number of labeled target examples increases.
BBIbj9w8Lvj8F
Efficient Learning of Domain-invariant Image Representations
[ "Judy Hoffman", "Erik Rodner", "Jeff Donahue", "Kate Saenko", "Trevor Darrell" ]
We present an algorithm that learns representations which explicitly compensate for domain mismatch and which can be efficiently realized as linear classifiers. Specifically, we form a linear transformation that maps features from the target (test) domain to the source (training) domain as part of training the classifier. We optimize both the transformation and classifier parameters jointly, and introduce an efficient cost function based on misclassification loss. Our method combines several features previously unavailable in a single algorithm: multi-class adaptation through representation learning, ability to map across heterogeneous feature spaces, and scalability to large datasets. We present experiments on several image datasets that demonstrate improved accuracy and computational advantages compared to previous approaches.
[ "image representations", "domain", "efficient learning", "algorithm", "representations", "domain mismatch", "linear classifiers", "linear transformation", "features", "target" ]
https://openreview.net/pdf?id=BBIbj9w8Lvj8F
https://openreview.net/forum?id=BBIbj9w8Lvj8F
FPpzPM-IHKPkZ
comment
1,362,971,820,000
Ua0HJI2r-Waro
[ "everyone" ]
[ "Judy Hoffman, Erik Rodner, Jeff Donahue, Trevor Darrell, Kate Saenko" ]
ICLR.cc/2013/conference
2013
reply: Thank you for your feedback. We would like to start by clarifying a few points from your comments section. First, in our first experiment (the standard domain adaptation setting), SVM_t is the classifier trained with only the limited data available from the target domain. So, for example, when we're looking at the shift from amazon to webcam (a->w), we have a lot of training data from amazon and a very small amount from the webcam dataset. SVM_t for this example would be an SVM trained on just the small amount of data from webcam. Note that in the new category experiment setting it is not possible to train SVM_t because there are some categories that have no labeled examples in the target. Second, for our last experiment, SVM_s does not (and should not) change as the number of points in the target is increased. SVM_s is an SVM classifier trained using only source data. In the figure it is represented by the dotted cyan line, which remains constant (at around 42%) as the number of labeled target examples grows. As a third point, if we did have a metric to determine the adaptability of a (source,target) domain pair, then we could simply choose to use the source data which is most adaptable to our target data. However, [15] provides a metric to determine a 'distance' between the source and target subspaces, not necessarily an adaptability metric. The two might be correlated depending on the adaptation algorithm you use. Namely, if a (source,target) pair are 'close' you might assume they are easily adaptable. But, with our method we learn a transformation between the two spaces, so it's possible for a (source,target) pair to initially be very different according to the metric from [15], yet be very adaptable. For example: in [15] the metric said that Caltech was most similar to Amazon, followed by Webcam, followed by Dslr. However, if you look at Table 1 you see that we obtained higher accuracy when adapting from dslr->caltech than from webcam->caltech. So even though webcam was initially more similar to caltech than dslr was, we find that dslr is more 'adaptable' to caltech. Finally, the idea of using more distinct domains or even different modalities is very interesting to us and is something we are considering for future work. We do feel that the experiments we present justify our claims that our algorithm performs comparably to or better than state-of-the-art techniques and is simultaneously applicable to a larger variety of possible adaptation scenarios.
OVyHViMbHRm8c
Visual Objects Classification with Sliding Spatial Pyramid Matching
[ "Hao Wooi Lim", "Yong Haur Tay" ]
We present a method for visual object classification using only a single feature, transformed color SIFT, with a variant of Spatial Pyramid Matching (SPM) that we call Sliding Spatial Pyramid Matching (SSPM), trained with an ensemble of linear regression models (provided by LINEAR) to obtain a state-of-the-art result on Caltech-101 of 83.46%. SSPM is a special version of SPM where, instead of dividing an image into K regions, a subwindow of fixed size is slid around the image with a fixed step size. For each subwindow, a histogram of visual words is generated. To obtain the visual vocabulary, instead of performing K-means clustering, we randomly pick N exemplars from the training set and encode them with a soft non-linear mapping method. We then train 15 models, each with a different visual word size, with linear regression. All 15 models are then averaged together to form a single strong model.
[ "visual objects classification", "spatial pyramid", "spatial pyramid matching", "spm", "sspm", "linear regression", "image", "subwindow", "models", "visual object classification" ]
https://openreview.net/pdf?id=OVyHViMbHRm8c
https://openreview.net/forum?id=OVyHViMbHRm8c
-MIjMM8a1GMYx
review
1,362,430,380,000
OVyHViMbHRm8c
[ "everyone" ]
[ "anonymous reviewer 9ba5" ]
ICLR.cc/2013/conference
2013
title: review of Visual Objects Classification with Sliding Spatial Pyramid Matching review: Summary of contributions: The paper presents a method to achieve state-of-the-art accuracy on the object recognition benchmark Caltech101. The method uses two major ingredients: 1. a sliding window of histograms (called sliding spatial pyramid matching), 2. randomized vocabularies used to generate different models that are then combined. The authors claim that, using only one image feature (transformed color SIFT), the method achieves really good results on Caltech101. Assessment of novelty and quality: Though the accuracy looks impressive, the paper offers limited research value to the machine learning community. The success is largely engineering, lacking insights that are informative to readers. The sliding window representation does not explore multiple scales. Therefore I don't understand why it is still called a 'pyramid'. I hope the authors will try the method on large-scale datasets like ImageNet. If good results are obtained, then the work will be of great value to applications.
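For readers who want to see the pooling step of SSPM (described in the abstract above) spelled out, here is a rough sketch: local descriptors on a grid are softly assigned to randomly chosen exemplars, and a fixed-size subwindow is slid over the grid to produce one visual-word histogram per position. The window size, step size, and soft-assignment temperature are illustrative assumptions; the SIFT extraction, the 15-model ensemble, and the linear classifier are omitted.

```python
import numpy as np

def soft_assign(descriptors, exemplars, beta=1.0):
    """Soft non-linear assignment of local descriptors to randomly picked exemplars
    (a stand-in for the paper's encoding; beta is an assumed temperature)."""
    d2 = ((descriptors[:, None, :] - exemplars[None, :, :]) ** 2).sum(-1)
    w = np.exp(-beta * d2)
    return w / w.sum(axis=1, keepdims=True)        # one soft "visual word" vector per descriptor

def sliding_spatial_pooling(codes, grid_h, grid_w, win=4, step=2):
    """Slide a fixed-size subwindow over the grid of encoded descriptors and
    build one histogram of (soft) visual words per subwindow position."""
    K = codes.shape[1]
    codes = codes.reshape(grid_h, grid_w, K)
    hists = []
    for i in range(0, grid_h - win + 1, step):
        for j in range(0, grid_w - win + 1, step):
            hists.append(codes[i:i + win, j:j + win].sum(axis=(0, 1)))
    return np.concatenate(hists)                   # final image representation

# Toy usage: an 8x8 grid of 128-d descriptors, 32 random exemplars as vocabulary.
rng = np.random.default_rng(0)
desc = rng.normal(size=(64, 128))
vocab = desc[rng.choice(64, size=32, replace=False)]
feature = sliding_spatial_pooling(soft_assign(desc, vocab), 8, 8)
print(feature.shape)
```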
OVyHViMbHRm8c
Visual Objects Classification with Sliding Spatial Pyramid Matching
[ "Hao Wooi Lim", "Yong Haur Tay" ]
We present a method for visual object classification using only a single feature, transformed color SIFT, with a variant of Spatial Pyramid Matching (SPM) that we call Sliding Spatial Pyramid Matching (SSPM), trained with an ensemble of linear regression models (provided by LINEAR) to obtain a state-of-the-art result on Caltech-101 of 83.46%. SSPM is a special version of SPM where, instead of dividing an image into K regions, a subwindow of fixed size is slid around the image with a fixed step size. For each subwindow, a histogram of visual words is generated. To obtain the visual vocabulary, instead of performing K-means clustering, we randomly pick N exemplars from the training set and encode them with a soft non-linear mapping method. We then train 15 models, each with a different visual word size, with linear regression. All 15 models are then averaged together to form a single strong model.
[ "visual objects classification", "spatial pyramid", "spatial pyramid matching", "spm", "sspm", "linear regression", "image", "subwindow", "models", "visual object classification" ]
https://openreview.net/pdf?id=OVyHViMbHRm8c
https://openreview.net/forum?id=OVyHViMbHRm8c
mqGM7L9xJ-7Oz
review
1,362,272,400,000
OVyHViMbHRm8c
[ "everyone" ]
[ "anonymous reviewer 9dc6" ]
ICLR.cc/2013/conference
2013
title: review of Visual Objects Classification with Sliding Spatial Pyramid Matching review: This paper replaces the pyramidal pooling of spatial pyramid matching with a sliding-window style pooling. By using this method and color SIFT descriptors, state-of-the-art results are obtained on the Caltech-101 dataset (83.5% accuracy). The contribution in this paper would be rather slight as is, but this is all the more true since it seems the idea of using sliding-window pooling has already appeared in an older paper, with good results (they call the sliding windows 'components'): Chunjie Zhang, Jing Liu, Qi Tian, Yanjun Han, Hanqing Lu, Songde Ma, 'A Boosting Sparsity Constrained Bi-Linear Model for Object Recognition', IEEE Multimedia, 2012. Simply using it with color SIFT descriptors does not constitute enough novelty for accepting this paper.
GgtWGz7e5_MeB
Joint Space Neural Probabilistic Language Model for Statistical Machine Translation
[ "Tsuyoshi Okita" ]
A neural probabilistic language model (NPLM) offers a way to achieve better perplexity than an n-gram language model and its smoothed variants. This paper investigates an application area in bilingual NLP, specifically Statistical Machine Translation (SMT). We focus on the perspective that NPLMs have the potential to complement `resource-constrained' bilingual resources with potentially `huge' monolingual resources. We introduce an ngram-HMM language model as an NPLM using a non-parametric Bayesian construction. In order to facilitate the application to various tasks, we propose a joint space model of the ngram-HMM language model. We show an experiment on system combination in the area of SMT. One finding was that our treatment of noise improved the results by 0.20 BLEU points when the NPLM is trained on a relatively small corpus, in our case 500,000 sentence pairs, which is often the case due to the long training time of NPLM.
[ "nplm", "language model", "statistical machine translation", "smt", "idea", "better perplexity", "smoothed language models" ]
https://openreview.net/pdf?id=GgtWGz7e5_MeB
https://openreview.net/forum?id=GgtWGz7e5_MeB
MUE4IYdQ_XMbN
review
1,360,788,360,000
GgtWGz7e5_MeB
[ "everyone" ]
[ "anonymous reviewer 5328" ]
ICLR.cc/2013/conference
2013
title: review of Joint Space Neural Probabilistic Language Model for Statistical Machine Translation review: The paper describes a Bayesian nonparametric HMM augmented with a hierarchical Pitman-Yor language model and slightly extends it by introducing conditioning on auxiliary inputs, possibly at each timestep. The observations are used for incorporating information from a separately trained model, such as LDA. In spite of the title and the abstract the paper has nothing to do with neural language models and very little with representation learning, as the author bizarrely uses the term NPLM to refer to the above n-gram HMM model. The model is evaluated as a part of a machine translation pipeline. This is a very poorly written paper. The quality of writing makes it at times very difficult to understand what exactly has been done. The paper makes no significant contributions from the machine learning standpoint, as what the author calls the 'n-gram HMM' is not novel, having been introduced by Blunsom & Cohn in 2011. The only material related to representation learning is not new either as it involves running LDA on documents. The rest of the paper is about tweaking a translation pipeline and is far too specialized for ICLR. Reference: Blunsom, Phil, and Trevor Cohn. 'A hierarchical Pitman-Yor process HMM for unsupervised part of speech induction.' Proceedings of the 49th Annual Meeting of the ACL, 2011
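For readers unfamiliar with the model under review, the 'n-gram HMM' idea (a Markov chain over hidden states with word distributions conditioned on both the hidden state and the previous word) can be evaluated with a standard scaled forward pass, as in the toy sketch below. The probability tables here are random stand-ins; in the paper they would come from a hierarchical Pitman-Yor construction, which is not implemented, and the joint space extension is omitted.

```python
import numpy as np

def ngram_hmm_logprob(word_ids, init, trans, emis):
    """Log-probability of a sentence under a toy 'ngram-HMM' language model:
    hidden state z_t follows a Markov chain (init, trans), and each word is drawn
    from a bigram distribution conditioned on the previous word and on z_t,
    i.e. emis[z, w_prev, w] = p(w | w_prev, z).  word_ids[0] is a begin-of-sentence id."""
    alpha = init * emis[:, word_ids[0], word_ids[1]]
    logp = np.log(alpha.sum())
    alpha /= alpha.sum()                       # rescale to avoid underflow
    for t in range(2, len(word_ids)):
        alpha = (alpha @ trans) * emis[:, word_ids[t - 1], word_ids[t]]
        logp += np.log(alpha.sum())
        alpha /= alpha.sum()
    return logp

# Toy usage with random row-normalized tables: 3 hidden states, vocabulary of 5.
rng = np.random.default_rng(0)
S, V = 3, 5
init = np.full(S, 1.0 / S)
trans = rng.random((S, S)); trans /= trans.sum(1, keepdims=True)
emis = rng.random((S, V, V)); emis /= emis.sum(2, keepdims=True)
print(ngram_hmm_logprob([0, 2, 4, 1, 3], init, trans, emis))
```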
GgtWGz7e5_MeB
Joint Space Neural Probabilistic Language Model for Statistical Machine Translation
[ "Tsuyoshi Okita" ]
A neural probabilistic language model (NPLM) offers a way to achieve better perplexity than an n-gram language model and its smoothed variants. This paper investigates an application area in bilingual NLP, specifically Statistical Machine Translation (SMT). We focus on the perspective that NPLMs have the potential to complement `resource-constrained' bilingual resources with potentially `huge' monolingual resources. We introduce an ngram-HMM language model as an NPLM using a non-parametric Bayesian construction. In order to facilitate the application to various tasks, we propose a joint space model of the ngram-HMM language model. We show an experiment on system combination in the area of SMT. One finding was that our treatment of noise improved the results by 0.20 BLEU points when the NPLM is trained on a relatively small corpus, in our case 500,000 sentence pairs, which is often the case due to the long training time of NPLM.
[ "nplm", "language model", "statistical machine translation", "smt", "idea", "better perplexity", "smoothed language models" ]
https://openreview.net/pdf?id=GgtWGz7e5_MeB
https://openreview.net/forum?id=GgtWGz7e5_MeB
A6lxA54Jzv1yo
review
1,362,021,540,000
GgtWGz7e5_MeB
[ "everyone" ]
[ "anonymous reviewer a273" ]
ICLR.cc/2013/conference
2013
title: review of Joint Space Neural Probabilistic Language Model for Statistical Machine Translation review: The author proposes an 'n-gram HMM language model', which is inconsistent with the title of the paper. Also, the introduction is confusing and misleading. Overall the paper presents weak results. For example, in Section 4 the perplexity results are only insignificantly better than those of n-gram models, and most importantly are not reproducible: it is not even mentioned what the amount of training data is, what the order of the n-gram models is, etc. The author uses IRSTLM, although SRILM is cited too (giving it credit for n-gram language modeling, for some unknown reason); overall, many citations are unjustified and unrelated to the paper itself (probably the only reason is to make everyone happy). A 0.2 BLEU improvement is generally considered insignificant. I don't see any useful information in the paper that can help others improve their work (rather the opposite). Unless the author can obtain better results (which I honestly believe is not possible with the explored approach), I don't see a reason why this work should be published.
GgtWGz7e5_MeB
Joint Space Neural Probabilistic Language Model for Statistical Machine Translation
[ "Tsuyoshi Okita" ]
A neural probabilistic language model (NPLM) offers a way to achieve better perplexity than an n-gram language model and its smoothed variants. This paper investigates an application area in bilingual NLP, specifically Statistical Machine Translation (SMT). We focus on the perspective that NPLMs have the potential to complement `resource-constrained' bilingual resources with potentially `huge' monolingual resources. We introduce an ngram-HMM language model as an NPLM using a non-parametric Bayesian construction. In order to facilitate the application to various tasks, we propose a joint space model of the ngram-HMM language model. We show an experiment on system combination in the area of SMT. One finding was that our treatment of noise improved the results by 0.20 BLEU points when the NPLM is trained on a relatively small corpus, in our case 500,000 sentence pairs, which is often the case due to the long training time of NPLM.
[ "nplm", "language model", "statistical machine translation", "smt", "idea", "better perplexity", "smoothed language models" ]
https://openreview.net/pdf?id=GgtWGz7e5_MeB
https://openreview.net/forum?id=GgtWGz7e5_MeB
Ezy1znNS-ZwLb
review
1,361,986,980,000
GgtWGz7e5_MeB
[ "everyone" ]
[ "anonymous reviewer 5a64" ]
ICLR.cc/2013/conference
2013
title: review of Joint Space Neural Probabilistic Language Model for Statistical Machine Translation review: To quote the author, this paper introduces an n-gram-HMM language model as a neural probabilistic language model (NPLM) using a non-parametric Bayesian construction. This article is really confusing and describes a messy mix of different approaches. In the end, it is very hard to understand what the author wanted to do and what he has done. This paper can be improved in many ways before it could be published: the author must clarify the motivations, and the interaction between the neural and HMM models could be described more precisely. In reading order: In the introduction, the author mistakes the MERT process for the translation process. While MERT is used to tune the weights of a log-linear combination of models in order to optimize BLEU, for instance, the NPLM is used as an additional model to re-rank n-best lists. Moreover, the correct citation for MERT is the ACL 2003 paper of F. Och. In Section 2, the author introduces an HMM language model. A lot of questions remain: What are the hidden states intended to capture? What is the motivation? What is the joint distribution associated with the graphical model of Figure 1? How are the word n-gram distributions related to the hidden states? In Section 3, the author enhances the HMM LM with an additional row of hidden states (joint space HMM). At this point the overall goal of the paper is, for me, totally unclear. For the following experimental sections, a lot of information on the setup is missing. The experiments cannot be reproduced based on the content of the paper. For example, the intrinsic evaluation introduces the ngram-HMM with one or two features. A very confusing explanation of these features is provided further in the article. The author does not describe the dataset (there are a lot of Europarl versions), nor the order of the LMs under consideration. In Section 5, the following sentence puzzled me: 'Note that although this experiment was done using the ngram-HMM language model, any NPLM may be sufficient for this purpose. In this sense, we use the term NPLM instead of ngram-HMM language model.' Moreover, the first feature is derived from an NPLM, but how is this NPLM learnt, on which dataset, with what parameters and model order, and how is the feature derived? I could not find the answers in this article. The rest of the paper is more and more unclear. In the end, the author shows a BLEU improvement of 0.2 on a system combination task. While I don't understand the models used, the gain is really small and I wonder if it is significant. For comparison's sake, MBR decoding usually provides a BLEU improvement of at least 0.2.
6ZY7ZnIK7kZKy
An Efficient Sufficient Dimension Reduction Method for Identifying Genetic Variants of Clinical Significance
[ "Momiao Xiong", "Long Ma" ]
Fast and cheaper next-generation sequencing technologies will generate unprecedentedly massive and high-dimensional genomic and epigenomic variation data. In the near future, a routine part of medical records will include sequenced genomes. A fundamental question is how to efficiently extract the genomic and epigenomic variants of clinical utility which will provide information for optimal wellness and intervention strategies. The traditional paradigm for identifying variants of clinical validity is to test association of the variants. However, significantly associated genetic variants may or may not be useful for the diagnosis and prognosis of diseases. An alternative to association studies for finding genetic variants of predictive utility is to systematically search for variants that contain sufficient information for phenotype prediction. To achieve this, we introduce the concepts of sufficient dimension reduction (SDR) and the coordinate hypothesis, which project the original high-dimensional data to a very low-dimensional space while preserving all information on the response phenotypes. We then formulate the clinically significant genetic variant discovery problem as a sparse SDR problem and develop algorithms that can select significant genetic variants from up to ten million predictors, with the aid of dividing the SDR for the whole genome into a number of sub-SDR problems defined for genomic regions. The sparse SDR is in turn formulated as a sparse optimal scoring problem, but with a penalty which can remove row vectors from the basis matrix. To speed up computation, we develop a modified alternating direction method of multipliers to solve the sparse optimal scoring problem, which can easily be implemented in parallel. To illustrate its application, the proposed method is applied to simulation data and the NHLBI's Exome Sequencing Project dataset.
[ "genetic variants", "variants", "genomic", "information", "clinical significance", "clinical significance fast", "cheaper next generation", "technologies", "massive" ]
https://openreview.net/pdf?id=6ZY7ZnIK7kZKy
https://openreview.net/forum?id=6ZY7ZnIK7kZKy
x8ctSDlKbu8KB
review
1,362,277,800,000
6ZY7ZnIK7kZKy
[ "everyone" ]
[ "anonymous reviewer 1ff5" ]
ICLR.cc/2013/conference
2013
title: review of An Efficient Sufficient Dimension Reduction Method for Identifying Genetic Variants of Clinical Significance review: Summary of the paper: This paper proposes a sparse extension of sufficient dimension reduction (the problem of finding a linear subspace so that the output and the input are conditionally independent given the projection of the input onto that subspace). The sparse extension is formulated through the eigenvalue formulation of sliced inverse regression. The method is finally applied to identifying genetic variants of clinical significance. Comments: - Other sparse formulations of SIR have been proposed and the new method should be compared to them (see the two references below): Lexin Li, Christopher J. Nachtsheim, Sparse Sliced Inverse Regression, Technometrics, Volume 48, Issue 4, 2006. Lexin Li, Sparse sufficient dimension reduction, Biometrika (2007) 94(3): 603-613. - In the experiments, it would have been nice to see another method run on these data. - The paper appears out of scope for the conference.
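Since the review summarizes sufficient dimension reduction in a single sentence, a sketch of the classical (non-sparse) sliced inverse regression estimator it mentions may help: standardize the inputs, average them within slices of the sorted response, and take the top eigenvectors of the covariance of those slice means. The number of slices and components below are illustrative choices, a nonsingular input covariance is assumed, and no sparsity penalty is included.

```python
import numpy as np

def sliced_inverse_regression(X, y, n_slices=10, n_components=2):
    """Classical SIR: whiten X, average it within slices of the sorted response,
    and take the top eigenvectors of the covariance of the slice means as the
    estimated dimension-reduction directions (mapped back to the original scale)."""
    n, p = X.shape
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    evals, evecs = np.linalg.eigh(cov + 1e-8 * np.eye(p))   # small ridge for numerical safety
    cov_inv_sqrt = evecs @ np.diag(1.0 / np.sqrt(evals)) @ evecs.T
    Z = (X - mu) @ cov_inv_sqrt                              # whitened inputs

    # Slice on the sorted response and collect weighted slice means of Z.
    order = np.argsort(y)
    M = np.zeros((p, p))
    for idx in np.array_split(order, n_slices):
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)

    # Top eigenvectors of M give the e.d.r. directions in the whitened scale.
    w, V = np.linalg.eigh(M)
    return cov_inv_sqrt @ V[:, ::-1][:, :n_components]       # columns span the subspace

# Toy usage: y depends on X only through one direction.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = np.sin(X[:, 0] + X[:, 1]) + 0.1 * rng.normal(size=500)
print(sliced_inverse_regression(X, y).shape)                 # (10, 2)
```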
6ZY7ZnIK7kZKy
An Efficient Sufficient Dimension Reduction Method for Identifying Genetic Variants of Clinical Significance
[ "Momiao Xiong", "Long Ma" ]
Fast and cheaper next-generation sequencing technologies will generate unprecedentedly massive and high-dimensional genomic and epigenomic variation data. In the near future, a routine part of medical records will include sequenced genomes. A fundamental question is how to efficiently extract the genomic and epigenomic variants of clinical utility which will provide information for optimal wellness and intervention strategies. The traditional paradigm for identifying variants of clinical validity is to test association of the variants. However, significantly associated genetic variants may or may not be useful for the diagnosis and prognosis of diseases. An alternative to association studies for finding genetic variants of predictive utility is to systematically search for variants that contain sufficient information for phenotype prediction. To achieve this, we introduce the concepts of sufficient dimension reduction (SDR) and the coordinate hypothesis, which project the original high-dimensional data to a very low-dimensional space while preserving all information on the response phenotypes. We then formulate the clinically significant genetic variant discovery problem as a sparse SDR problem and develop algorithms that can select significant genetic variants from up to ten million predictors, with the aid of dividing the SDR for the whole genome into a number of sub-SDR problems defined for genomic regions. The sparse SDR is in turn formulated as a sparse optimal scoring problem, but with a penalty which can remove row vectors from the basis matrix. To speed up computation, we develop a modified alternating direction method of multipliers to solve the sparse optimal scoring problem, which can easily be implemented in parallel. To illustrate its application, the proposed method is applied to simulation data and the NHLBI's Exome Sequencing Project dataset.
[ "genetic variants", "variants", "genomic", "information", "clinical significance", "clinical significance fast", "cheaper next generation", "technologies", "massive" ]
https://openreview.net/pdf?id=6ZY7ZnIK7kZKy
https://openreview.net/forum?id=6ZY7ZnIK7kZKy
rLP-LGuBmzyRt
review
1,362,195,240,000
6ZY7ZnIK7kZKy
[ "everyone" ]
[ "anonymous reviewer 34e0" ]
ICLR.cc/2013/conference
2013
title: review of An Efficient Sufficient Dimension Reduction Method for Identifying Genetic Variants of Clinical Significance review: The paper describes the application of a supervised projection method (Sufficient Dimension Reduction - SDR) to a regression problem in bioinformatics. SDR attempts to find a linear projection space such that the response variable depends on the inputs only through their linear projection. The authors give a brief presentation of SDR and formulate it as an optimal scoring problem. It takes the form of a constrained optimization problem which can be solved using an alternating minimization procedure. This method is then applied to prediction problems in bioinformatics. The form and organization of the paper are not adequate. The projection method is only briefly outlined. The notation is not correct, e.g., the same notation is used for random variables and data matrices, and some of the notations or abbreviations are not introduced. The description of the applications remains extremely unclear. The abstract and the contribution do not correspond. The format of the paper is not the NIPS format. The proposed method is an adaptation of existing work. The formulation of SDR as a constrained problem is not new. The contribution here might be a variant of the alternating minimization technique used for this problem. The application is only briefly sketched and cannot really be appreciated from this description. Pros: describes an application of SDR, better known in the statistical community, which is an alternative to other matrix factorization techniques used in machine learning. Cons: the form and organization of the paper; weak technical contribution, both algorithmic and applicative.
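One concrete building block of the sparse optimal scoring formulation this review refers to is the 'penalty which can remove row vectors from the basis matrix' from the abstract, which reads like a group-lasso penalty on the rows. The sketch below shows only the corresponding proximal (row-wise soft-thresholding) step that an alternating or ADMM scheme would apply repeatedly; the assumed penalty and the stand-alone step are illustrations, not the authors' full algorithm.

```python
import numpy as np

def row_soft_threshold(B, tau):
    """Proximal operator of tau * sum_j ||B[j, :]||_2: shrinks each row of the basis
    matrix toward zero and sets whole rows exactly to zero, which is what removes
    a predictor (e.g., a genetic variant) from the model."""
    norms = np.linalg.norm(B, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12))
    return B * scale

# Toy usage: rows with small norm are zeroed out entirely.
rng = np.random.default_rng(0)
B = rng.normal(size=(6, 3)) * np.array([1, 1, 0.05, 1, 0.05, 1])[:, None]
print(np.linalg.norm(row_soft_threshold(B, 0.5), axis=1))
```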
N_c1XDpyus_yP
A Nested HDP for Hierarchical Topic Models
[ "John Paisley", "Chong Wang", "David Blei", "Michael I. Jordan" ]
We develop a nested hierarchical Dirichlet process (nHDP) for hierarchical topic modeling. The nHDP is a generalization of the nested Chinese restaurant process (nCRP) that allows each word to follow its own path to a topic node according to a document-specific distribution on a shared tree. This alleviates the rigid, single-path formulation of the nCRP, allowing a document to more easily express thematic borrowings as a random effect. We demonstrate our algorithm on 1.8 million documents from The New York Times.
[ "nested hdp", "hierarchical topic models", "nhdp", "ncrp", "hierarchical topic modeling", "generalization", "word", "path" ]
https://openreview.net/pdf?id=N_c1XDpyus_yP
https://openreview.net/forum?id=N_c1XDpyus_yP
BAmMaGEF72a0w
review
1,362,389,460,000
N_c1XDpyus_yP
[ "everyone" ]
[ "anonymous reviewer 95fc" ]
ICLR.cc/2013/conference
2013
title: review of A Nested HDP for Hierarchical Topic Models review: This paper presents a novel variant of the nCRP process that overcomes the latter's main limitation, namely, that a document necessarily has to use topics from a single path in the tree. This is accomplished by combining ideas from the HDP with the nCRP: the entire nCRP tree is replicated for each document, and the draw from the DP at each node of the original tree is used as a shared base distribution for the corresponding DP in each document's own tree. The idea is novel and is an important contribution in the area of unsupervised large-scale text modeling. Although the paper is strong on novelty, it seems to be incomplete in terms of presenting any evidence that the model actually works and is better than the original nCRP model. Does it learn better topics than the nCRP? Is the new model a better predictor of text? Does it produce a better hierarchy of topics than the original model? Does the better representation of documents translate into better performance on any extrinsic task? Without any preliminary answers to these questions, in my mind, the work is incomplete at best.
N_c1XDpyus_yP
A Nested HDP for Hierarchical Topic Models
[ "John Paisley", "Chong Wang", "David Blei", "Michael I. Jordan" ]
We develop a nested hierarchical Dirichlet process (nHDP) for hierarchical topic modeling. The nHDP is a generalization of the nested Chinese restaurant process (nCRP) that allows each word to follow its own path to a topic node according to a document-specific distribution on a shared tree. This alleviates the rigid, single-path formulation of the nCRP, allowing a document to more easily express thematic borrowings as a random effect. We demonstrate our algorithm on 1.8 million documents from The New York Times.
[ "nested hdp", "hierarchical topic models", "nhdp", "ncrp", "hierarchical topic modeling", "generalization", "word", "path" ]
https://openreview.net/pdf?id=N_c1XDpyus_yP
https://openreview.net/forum?id=N_c1XDpyus_yP
WZaI2aHNOvDz7
review
1,362,389,640,000
N_c1XDpyus_yP
[ "everyone" ]
[ "anonymous reviewer 95fc" ]
ICLR.cc/2013/conference
2013
review: no additional comments.
N_c1XDpyus_yP
A Nested HDP for Hierarchical Topic Models
[ "John Paisley", "Chong Wang", "David Blei", "Michael I. Jordan" ]
We develop a nested hierarchical Dirichlet process (nHDP) for hierarchical topic modeling. The nHDP is a generalization of the nested Chinese restaurant process (nCRP) that allows each word to follow its own path to a topic node according to a document-specific distribution on a shared tree. This alleviates the rigid, single-path formulation of the nCRP, allowing a document to more easily express thematic borrowings as a random effect. We demonstrate our algorithm on 1.8 million documents from The New York Times.
[ "nested hdp", "hierarchical topic models", "nhdp", "ncrp", "hierarchical topic modeling", "generalization", "word", "path" ]
https://openreview.net/pdf?id=N_c1XDpyus_yP
https://openreview.net/forum?id=N_c1XDpyus_yP
cBZ06aJhuH6Nw
review
1,362,389,580,000
N_c1XDpyus_yP
[ "everyone" ]
[ "anonymous reviewer 95fc" ]
ICLR.cc/2013/conference
2013
review: no additional comments.
N_c1XDpyus_yP
A Nested HDP for Hierarchical Topic Models
[ "John Paisley", "Chong Wang", "David Blei", "Michael I. Jordan" ]
We develop a nested hierarchical Dirichlet process (nHDP) for hierarchical topic modeling. The nHDP is a generalization of the nested Chinese restaurant process (nCRP) that allows each word to follow its own path to a topic node according to a document-specific distribution on a shared tree. This alleviates the rigid, single-path formulation of the nCRP, allowing a document to more easily express thematic borrowings as a random effect. We demonstrate our algorithm on 1.8 million documents from The New York Times.
[ "nested hdp", "hierarchical topic models", "nhdp", "ncrp", "hierarchical topic modeling", "generalization", "word", "path" ]
https://openreview.net/pdf?id=N_c1XDpyus_yP
https://openreview.net/forum?id=N_c1XDpyus_yP
s3Zn3ZANM4Twv
review
1,362,170,460,000
N_c1XDpyus_yP
[ "everyone" ]
[ "anonymous reviewer 7555" ]
ICLR.cc/2013/conference
2013
title: review of A Nested HDP for Hierarchical Topic Models review: The paper introduces a natural extension of the nested Chinese Restaurant process, whose main limitation was that a single path in the tree (from the root to a leaf) is chosen for each individual document. In this work, a document-specific tree is drawn (with associated switching probabilities) which is then used to generate the words in the document. Consequently, the words can represent very different topics not necessarily associated with the same path in the tree. Though the work is clearly interesting and important for the topic modeling community, the workshop paper could potentially be improved. The main problem is clearly the length of the submission, which does not provide any kind of detail (less than 2 pages of content). Though additional information can be found in the cited arXiv paper, I think it would make sense to include in the workshop paper at least the comparison in terms of perplexity (showing that it substantially outperforms the nCRP) and maybe some details on the efficiency of inference. Conversely, the page-long Figure 2 could be reduced or removed to fit the content. Overall, the work is quite interesting and seems to be a perfect fit for the conference. Given that an extended version is publicly available, I do not think that the above comments are really important. Pros: -- a natural extension of the previous model which achieves respectable results on standard benchmarks (though results are not included in the submission) Cons: -- a little more information about the model and its performance could be included even in a 3-page workshop paper.
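To make the per-document tree described in this review concrete, here is a heavily simplified, truncated generative sketch: each document perturbs the shared child weights at every node and each word follows its own path by repeatedly deciding whether to stop or descend. The truncation depth and branching, the Dirichlet perturbation used in place of a proper DP draw, and all hyperparameter values are simplifying assumptions made for illustration, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)
branch, depth, vocab = 3, 3, 20            # truncated tree: 3 children per node, 3 levels

def nodes_at(level):
    # enumerate node ids as tuples, e.g. () is the root and (0, 2) a grandchild
    if level == 0:
        return [()]
    return [n + (c,) for n in nodes_at(level - 1) for c in range(branch)]

all_nodes = [n for l in range(depth) for n in nodes_at(l)]
topics = {n: rng.dirichlet(np.full(vocab, 0.1)) for n in all_nodes}             # topic at each node
global_children = {n: rng.dirichlet(np.full(branch, 1.0)) for n in all_nodes}   # shared tree weights

def generate_document(n_words=50, concentration=10.0):
    # Document-specific tree: child weights at each node are a random perturbation of the
    # shared weights (standing in for a DP draw with the global weights as base measure),
    # plus a per-node probability of stopping at that node.
    doc_children = {n: rng.dirichlet(concentration * global_children[n]) for n in all_nodes}
    stop_prob = {n: rng.beta(1.0, 1.0) for n in all_nodes}
    words = []
    for _ in range(n_words):
        node = ()
        while len(node) < depth - 1 and rng.random() > stop_prob[node]:
            node = node + (int(rng.choice(branch, p=doc_children[node])),)  # each word follows its own path
        words.append(int(rng.choice(vocab, p=topics[node])))
    return words

print(generate_document()[:10])
```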
kk_XkMO0-dP8W
Feature Learning in Deep Neural Networks - A Study on Speech Recognition Tasks
[ "Dong Yu", "Mike Seltzer", "Jinyu Li", "Jui-Ting Huang", "Frank Seide" ]
Recent studies have shown that deep neural networks (DNNs) perform significantly better than shallow networks and Gaussian mixture models (GMMs) on large vocabulary speech recognition tasks. In this paper we argue that the difficulty in speech recognition is primarily caused by the high variability in speech signals. DNNs, which can be considered a joint model of a nonlinear feature transform and a log-linear classifier, achieve improved recognition accuracy by extracting discriminative internal representations that are less sensitive to small perturbations in the input features. However, if test samples are very dissimilar to training samples, DNNs perform poorly. We demonstrate these properties empirically using a series of recognition experiments on mixed narrowband and wideband speech and speech distorted by environmental noise.
[ "deep neural networks", "study", "dnns", "feature", "speech recognition tasks", "perform", "shallow networks", "gaussian mixture models", "gmms" ]
https://openreview.net/pdf?id=kk_XkMO0-dP8W
https://openreview.net/forum?id=kk_XkMO0-dP8W
NFxrNAiI-clI8
review
1,361,169,180,000
kk_XkMO0-dP8W
[ "everyone" ]
[ "anonymous reviewer 1860" ]
ICLR.cc/2013/conference
2013
title: review of Feature Learning in Deep Neural Networks - A Study on Speech Recognition Tasks review: This paper is by the group that did the first large-scale speech recognition experiments on deep neural nets and popularized the technique. It contains various analyses and experiments relating to this setup. Ultimately I was not really sure what the main point of the paper was. There is some analysis of whether the network amplifies or reduces differences in inputs as we go through the layers; there are some experiments relating to feature normalization techniques (such as VTLN) and how they interact with neural nets; and there are some experiments showing that the neural network does not do very well on narrowband data unless it has been trained on narrowband data in addition to wideband data, and also showing (by looking at the intermediate activations) that the network learns to be invariant to wideband/narrowband differences if it is trained on both kinds of input. Although the paper itself is kind of scattered, and I'm not really sure that it makes any major contributions, I would suggest that the conference organizers strongly consider accepting it, because unlike (I imagine) many of the other papers, it comes from a group that is applying these techniques to real-world problems and is having considerable success. I think their perspective would be valuable, and accepting it would send the message that this conference values serious, real-world applications, which I think would be a good thing. -- Below are some suggestions for minor fixes to the paper. eq. 4, prime ( ') missing after sigma on top right. sec. 3.2, you do not explain the difference between average norm and maximum norm. What type of matrix norm do you mean, and what are the average and maximum taken over? after 'narrowband input feature pairs', one of your subscripts needs to be changed.
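The analysis the reviewer refers to (whether differences in the input grow or shrink as they propagate through the layers) can be reproduced in a few lines: feed a clean and a perturbed input through the network and record the norm of the difference of the hidden activations at each layer. The sigmoid network, layer sizes, and perturbation below are random stand-ins for illustration, and averaging over many inputs and perturbations is left out.

```python
import numpy as np

def layer_differences(x, delta, weights, biases):
    """Return ||h_l(x + delta) - h_l(x)|| for every layer l of a sigmoid network,
    to see whether a small input perturbation is amplified or attenuated with depth."""
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    h_clean, h_noisy, diffs = x, x + delta, []
    for W, b in zip(weights, biases):
        h_clean = sig(W @ h_clean + b)
        h_noisy = sig(W @ h_noisy + b)
        diffs.append(np.linalg.norm(h_noisy - h_clean))
    return diffs

rng = np.random.default_rng(0)
dims = [40, 128, 128, 128, 128]
Ws = [rng.normal(scale=0.5, size=(dims[i + 1], dims[i])) for i in range(len(dims) - 1)]
bs = [np.zeros(d) for d in dims[1:]]
x = rng.normal(size=40)
print(layer_differences(x, 0.01 * rng.normal(size=40), Ws, bs))
```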
kk_XkMO0-dP8W
Feature Learning in Deep Neural Networks - A Study on Speech Recognition Tasks
[ "Dong Yu", "Mike Seltzer", "Jinyu Li", "Jui-Ting Huang", "Frank Seide" ]
Recent studies have shown that deep neural networks (DNNs) perform significantly better than shallow networks and Gaussian mixture models (GMMs) on large vocabulary speech recognition tasks. In this paper we argue that the difficulty in speech recognition is primarily caused by the high variability in speech signals. DNNs, which can be considered a joint model of a nonlinear feature transform and a log-linear classifier, achieve improved recognition accuracy by extracting discriminative internal representations that are less sensitive to small perturbations in the input features. However, if test samples are very dissimilar to training samples, DNNs perform poorly. We demonstrate these properties empirically using a series of recognition experiments on mixed narrowband and wideband speech and speech distorted by environmental noise.
[ "deep neural networks", "study", "dnns", "feature", "speech recognition tasks", "perform", "shallow networks", "gaussian mixture models", "gmms" ]
https://openreview.net/pdf?id=kk_XkMO0-dP8W
https://openreview.net/forum?id=kk_XkMO0-dP8W
ySpzfXa4-ryCM
review
1,362,161,880,000
kk_XkMO0-dP8W
[ "everyone" ]
[ "anonymous reviewer 778f" ]
ICLR.cc/2013/conference
2013
title: review of Feature Learning in Deep Neural Networks - A Study on Speech Recognition Tasks review: * Comments ** Summary The paper uses examples from speech recognition to make the following points about feature learning in deep neural networks: 1. Speech recognition performance improves with deeper networks, but the gain per layer diminishes. 2. The internal representations in a trained deep network become increasingly insensitive to small perturbations in the input with depth. 3. Deep networks are unable to extrapolate to test samples that are substantially different from the training samples. The paper then shows that deep neural networks are able to learn representations that are comparatively invariant to two important sources of variability in speech: speaker variability and environmental distortions. ** Pluses - The work here is an important contribution because it comes from the application of deep learning to real-world problems in speech recognition, and it compares deep learning to classical state-of-the-art approaches including discriminatively trained GMM-HMM models, vocal tract length normalization, feature-space maximum likelihood linear regression, noise-adaptive training, and vector Taylor series compensation. - In the machine learning community, the deep learning literature has been dominated by computer vision applications. It is good to show applications in other domains that have different characteristics. For example, speech recognition is inherently a structured classification problem, while many vision applications are simple classification problems. ** Minuses - There is not a lot of new material here. Most of the results have been published elsewhere. ** Recommendation I'd like to see this paper accepted because 1. it makes important points about both the advantages and limitations of current approaches to deep learning, illustrating them with practical examples from speech recognition and comparing deep learning against solid baselines; and 2. it brings speech recognition into the broader conversation on deep learning. * Minor Issues - The first (unnumbered) equation is correct; however, I don't think that viewing the internal layers as computing posterior probabilities over hidden binary vectors provides any useful insights. - There is an error in the right hand side of the unnumbered equation preceding Equation 4: it should be sigma prime (the derivative), not sigma. - 'Senones' is jargon that is very specific to speech recognition and may not be understood by a broader machine learning audience. - The VTS acronym for vector Taylor series compensation is never defined in the paper. 
* Proofreading the performance of the ASR systems -> the performance of ASR systems By using the context-dependent deep neural network -> By using context-dependent deep neural network the feature learning interpretations of DNNs -> the feature learning interpretation of DNNs a DNN can interpreted as -> a DNN can be interpreted as whose senone alignment label was generated -> whose HMM state alignment labels were generated the deep models consistently outperforms the shallow -> the deep models consistently outperform the shallow This is reflected in right column -> This is reflected in the right column 3.2 DNN learns more invariant features -> 3.2 DNNs learn more invariant features is that DNN learns more invariant -> is that DNNs learn more invariant since the differences needs to be -> since the differences need to be that the small perturbations in the input -> that small perturbations in the input with the central frequency of the first higher filter bank at 4 kHz -> with the center frequency of the first filter in the higher filter bank at 4 kHz between p_y|x(s_j|x_wb) and p_y|x(s_j|x_nb -> between p_y|x(s_j|x_wb) and p_y|x(s_j|x_nb) Note that the transform is applied before augmenting neighbor frames. -> Note that the transform is applied to individual frames, prior to concatentation. demonstrated through a speech recognition experiments -> demonstrated through speech recognition experiments
kk_XkMO0-dP8W
Feature Learning in Deep Neural Networks - A Study on Speech Recognition Tasks
[ "Dong Yu", "Mike Seltzer", "Jinyu Li", "Jui-Ting Huang", "Frank Seide" ]
Recent studies have shown that deep neural networks (DNNs) perform significantly better than shallow networks and Gaussian mixture models (GMMs) on large vocabulary speech recognition tasks. In this paper we argue that the difficulty in speech recognition is primarily caused by the high variability in speech signals. DNNs, which can be considered a joint model of a nonlinear feature transform and a log-linear classifier, achieve improved recognition accuracy by extracting discriminative internal representations that are less sensitive to small perturbations in the input features. However, if test samples are very dissimilar to training samples, DNNs perform poorly. We demonstrate these properties empirically using a series of recognition experiments on mixed narrowband and wideband speech and speech distorted by environmental noise.
[ "deep neural networks", "study", "dnns", "feature", "speech recognition tasks", "perform", "shallow networks", "gaussian mixture models", "gmms" ]
https://openreview.net/pdf?id=kk_XkMO0-dP8W
https://openreview.net/forum?id=kk_XkMO0-dP8W
eMmX26-PXaMJN
review
1,362,128,940,000
kk_XkMO0-dP8W
[ "everyone" ]
[ "anonymous reviewer cf74" ]
ICLR.cc/2013/conference
2013
title: review of Feature Learning in Deep Neural Networks - A Study on Speech Recognition Tasks review: The paper presents an analysis of the performance of DNN acoustic models in tasks where there is a mismatch between training and test data. Most of the results do not seem to be novel, and were published in several papers already. The paper is well written and mostly easy to follow. Pros: Although there is nothing surprising in the paper, the study may motivate others to investigate DNNs. Cons: The authors could have been bolder in ideas and experiments. Comments: Table 1: it would be more convincing to show L x N for variable L and N, such as N=4096, if one wants to prove that many (9) hidden layers are needed to achieve top performance (I'd expect that accuracy saturation would occur with fewer hidden layers if N were increased); moreover, one can investigate architectures that have the same number of parameters but are more shallow - for example, the first and last hidden layers can have N=2048, and the hidden layer in between can have N=8192 - this would be fairer if one wants to claim that 9 hidden layers are better than 3 (as obviously, adding more parameters helps, and the current comparison with a 1-hidden-layer NN is completely unfair as the input and output layers have different dimensionality, but one can apply other tricks there to reduce complexity - for example hierarchical softmax in the output layer, etc.) 'Note that the magnitude of the majority of the weights is typically very small' - note that this is also related to the sizes of the hidden layers; if the hidden layers were very small, the weights would be larger (the output of a neuron is a non-linear function of a weighted sum of inputs; if there are 2048 inputs that are in the range (0,1), then we can naturally expect the weights to be very small). Section 3 rather shows that neural networks are good at representing smooth functions, which is the opposite of what deep architectures were proposed for. Another reason to believe that 9 hidden layers are not needed. The results where DNN models perform poorly on data that were not seen during training are not really striking or novel; it would actually be good if the authors tried to overcome this problem in a novel way. For example, one can try to make DNNs more robust by allowing some kind of simple, cheap adaptation at test time. When it comes to capturing VTLN / speaker characteristics, it would be interesting to use longer-context information, either through recurrence, or by using features derived from long contexts (such as the previous 2-10 seconds). Table 4 compares relative reductions of WER; however, note that 0% is not reachable on Switchboard. If we assume that human performance is around 5-10% WER, then the difference in relative improvements would be significantly smaller. Also, it is very common that the better the baseline is, the harder it is to gain improvements (as many different techniques actually address the same problems). Also, it is possible that DNNs can learn some weak VTLN, as they typically see longer-context information; it would be interesting to see an experiment where a DNN is trained with limited context information (I would expect WER to increase, but also the relative gain from VTLN should increase).
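The reviewer's point about matching parameter counts across shallow and deep architectures is easy to check with a quick calculation; the layer sizes below include the 2048-8192-2048 alternative suggested in the review, while the input and output dimensionalities are assumed values chosen only for illustration.

```python
def n_params(layer_sizes):
    """Total number of weights and biases in a fully connected network."""
    return sum((a + 1) * b for a, b in zip(layer_sizes[:-1], layer_sizes[1:]))

d_in, d_out = 429, 9304          # assumed input and senone output sizes, for illustration only
deep = [d_in] + [2048] * 9 + [d_out]                  # 9 hidden layers of 2048 units
shallow_wide = [d_in, 2048, 8192, 2048, d_out]        # the reviewer's suggested alternative
one_layer = [d_in, 2048, d_out]

for name, arch in [("9x2048", deep), ("2048-8192-2048", shallow_wide), ("1x2048", one_layer)]:
    print(f"{name:>15}: {n_params(arch) / 1e6:.1f}M parameters")
```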
kk_XkMO0-dP8W
Feature Learning in Deep Neural Networks - A Study on Speech Recognition Tasks
[ "Dong Yu", "Mike Seltzer", "Jinyu Li", "Jui-Ting Huang", "Frank Seide" ]
Recent studies have shown that deep neural networks (DNNs) perform significantly better than shallow networks and Gaussian mixture models (GMMs) on large vocabulary speech recognition tasks. In this paper we argue that the difficulty in speech recognition is primarily caused by the high variability in speech signals. DNNs, which can be considered a joint model of a nonlinear feature transform and a log-linear classifier, achieve improved recognition accuracy by extracting discriminative internal representations that are less sensitive to small perturbations in the input features. However, if test samples are very dissimilar to training samples, DNNs perform poorly. We demonstrate these properties empirically using a series of recognition experiments on mixed narrowband and wideband speech and speech distorted by environmental noise.
[ "deep neural networks", "study", "dnns", "feature", "speech recognition tasks", "perform", "shallow networks", "gaussian mixture models", "gmms" ]
https://openreview.net/pdf?id=kk_XkMO0-dP8W
https://openreview.net/forum?id=kk_XkMO0-dP8W
WWycbHg8XRWuv
review
1,362,989,220,000
kk_XkMO0-dP8W
[ "everyone" ]
[ "Mike Seltzer" ]
ICLR.cc/2013/conference
2013
review: We’d like to thank the reviewers for their comments. We have uploaded a revised version of the paper which we believe addresses the reviewers’ concerns as well as the grammatical issues and typos. We have revised the abstract and introduction to better establish the purpose of the paper. Our goal is to demonstrate that deep neural networks can learn internal representations that are robust to variability in the input, and that this robustness is maintained when large amounts of training data are used. Much work on DNNs has been done on smaller data sets, and historically, in speech recognition, large improvements observed on small systems usually do not translate when applied to large-scale state-of-the-art systems. In addition, the paper contrasts DNN-based systems and their “built in” invariance to a wide variety of variability with GMM-based systems, where algorithms have been designed to combat unwanted variability in a source-specific manner, i.e. they are designed to address a particular mismatch, such as the speaker or the environment. We also believe there is a practical implication of these results: algorithms for addressing this acoustic mismatch in speaker, environment, or other factors, which are standard and essential for GMM-based recognizers, become far less critical and potentially unnecessary for DNN-based recognizers. We think this is important both for setting future research directions and for deploying large-scale systems. Finally, while some of the results have been published previously, we believe the inherent robustness of DNNs to such diverse sources of variability is quite interesting, and is a point that might elude readers unless these results are combined and presented together. We also want to point out that the analysis of sensitivity to input perturbations and all of the results in Section 6 on environmental robustness are new and previously unpublished. We hope that by putting together all these analyses and results in one paper we can provide some insights on the strengths and weaknesses of using a DNN for speech recognition when trained with real-world data.
rtGYtZ-ZKSMzk
Tree structured sparse coding on cubes
[ "Arthur Szlam" ]
A brief description of tree structured sparse coding on the binary cube.
[ "cubes", "sparse coding", "sparse", "brief description", "tree", "binary cube" ]
https://openreview.net/pdf?id=rtGYtZ-ZKSMzk
https://openreview.net/forum?id=rtGYtZ-ZKSMzk
7ESq7YWfqMhHk
review
1,362,001,920,000
rtGYtZ-ZKSMzk
[ "everyone" ]
[ "anonymous reviewer 2f02" ]
ICLR.cc/2013/conference
2013
title: review of Tree structured sparse coding on cubes review: summary: This is a 3-page abstract only. It proposes a low-dimensional representation of data in order to impose a tree structure. It relates to other mixed-norm approaches previously proposed in the literature. Experiments on a binarized MNIST show how the representation becomes robust to added noise. review: I must say I found the abstract very hard to read and would have preferred a longer version to better understand how the model differs from prior work. It is not clear, for instance, how the proposed approach compares to other denoising methods. Nor is it clear what the relation is between the tree-based decomposition and noise in MNIST. Finally, I did not understand why the model was restricted to binary representations. All this simply says that I failed to capture the essence of the proposed approach.
rtGYtZ-ZKSMzk
Tree structured sparse coding on cubes
[ "Arthur Szlam" ]
A brief description of tree structured sparse coding on the binary cube.
[ "cubes", "sparse coding", "sparse", "brief description", "tree", "binary cube" ]
https://openreview.net/pdf?id=rtGYtZ-ZKSMzk
https://openreview.net/forum?id=rtGYtZ-ZKSMzk
axSGN5lBGINJm
review
1,362,831,180,000
rtGYtZ-ZKSMzk
[ "everyone" ]
[ "anonymous reviewer fd41" ]
ICLR.cc/2013/conference
2013
title: review of Tree structured sparse coding on cubes review: The paper extends the widely known idea of tree-structured sparse coding to the Hamming space. Instead of each node being represented by the best linear fit of the corresponding sub-space, it is represented by the best sub-cube. The idea is valid, if not extremely original. I’m not sure it has too many applications, though. I think it is more frequent to encounter raw data residing in some Euclidean space, while using the Hamming space for the representation (e.g., as in various similarity-preserving hashing techniques). Hence, I believe a more interesting setting would be to have W in R^d, while keeping Z in H^K, i.e., the dictionary atoms are real vectors producing the best linear fit of the corresponding clusters with binary activation coefficients. This would lead to the construction of a hash function. The out-of-sample extension would happen naturally through representation pursuit (which would now be performed over the cube). Pros: 1. A very simple and easy-to-implement idea extending tree dictionaries to binary data. 2. For binary data, it seems to outperform other algorithms in the presented recovery experiment. Cons: 1. The paper reads more like a preliminary writeup than a real paper. The length might be proportional to its contribution, but fixing typos and adding a conclusion section wouldn’t harm. 2. The experimental result is convincing, but rather anecdotal. I might be missing something, but the author should argue convincingly that representing binary data with a sparse tree-structured dictionary is interesting at all, showing a few real applications. The presented experiment on binarized MNIST digits is very artificial.
OznsOsb6sDFeV
Unsupervised Feature Learning for low-level Local Image Descriptors
[ "Christian Osendorfer", "Justin Bayer", "Patrick van der Smagt" ]
Unsupervised feature learning has shown impressive results for a wide range of input modalities, in particular for object classification tasks in computer vision. Using a large amount of unlabeled data, unsupervised feature learning methods are utilized to construct high-level representations that are discriminative enough for subsequently trained supervised classification algorithms. However, it has never been quantitatively investigated yet how well unsupervised learning methods can find low-level representations for image patches without any supervised refinement. In this paper we examine the performance of pure unsupervised methods on a low-level correspondence task, a problem that is central to many Computer Vision applications. We find that a special type of Restricted Boltzmann Machines performs comparably to hand-crafted descriptors. Additionally, a simple binarization scheme produces compact representations that perform better than several state-of-the-art descriptors.
[ "methods", "unsupervised feature", "representations", "descriptors", "impressive results", "wide range", "input modalities", "particular" ]
https://openreview.net/pdf?id=OznsOsb6sDFeV
https://openreview.net/forum?id=OznsOsb6sDFeV
HH0nm6IT6SHZc
comment
1,363,042,920,000
3wmH3H7ucKwu0
[ "everyone" ]
[ "Christian Osendorfer" ]
ICLR.cc/2013/conference
2013
reply: Dear 3338, Thank you for your feedback. In order to give a comprehensive answer, we quote sentences from your feedback and try to respond appropriately. >>> It is not clear what the purpose of the paper is. We suggest that the way unsupervised feature learning methods are evaluated should be extended: a more direct evaluation of the learnt representations without subsequent supervised algorithms, and not tied to the task of high-level object classification. >>> The ground truth correspondences of the dataset were found by >>> clustering the image patches to find correspondences. This is not how the description of [R1] with respect to the Ground Truth Data (section II in [R1]) reads. >>> In this paper, simple clustering methods were not >>> compared to such as kmeans ... We added a K-Means experiment to the new version of the paper. We ran K-Means (with a soft threshold function) [R2] on the dataset; it performs worse than spGRBM. (This is mentioned in the new version 3 of the paper.) >>> Additionally, training in a supervised way makes much more sense >>> for finding correspondences. This is not the question that we are asking. We deliberately avoid any supervised training because we want to investigate purely unsupervised methods. We are not trying to achieve any state-of-the-art results. >>> It is not clear from the paper alone what is considered a match >>> between descriptors We have added some text that describes how a false positive rate for a fixed true positive rate is computed. >>> The preprocessing of the image patches seems different for each >>> method. This could lead to wildly different scales of the input >>> pixels and thus the corresponding representations of the various >>> methods. Could you elaborate on why this is something to consider in our setting? >>> In section 3.3 it is mentioned that it is surprising that L1 >>> normalization works better because sparsity hurts classification >>> typically. We don't say that 'sparsity hurts classification typically'. We say the exact opposite (that sparse representations are beneficial for classification) and give a reference to [R3], a paper that you also reference. We say that it is surprising that a sparse representation ('sparse' as produced by spGRBM, not by a normalization scheme) performs better in a distance calculation, because the general understanding is (to our knowledge) that sparse representations suffer more from the curse of dimensionality when considering distances. >>> However, the sparsity in the paper is directly before the distance >>> calculation, and not before being fed as input to a classifier which >>> is a different setup and would thus be expected to behave differently >>> with sparsity. This is the typical setup in which sparsity is found to >>> hurt classification performance because information is being thrown >>> away before the classifier is used. We don't understand what is meant here. Wasn't the gist of [R3] that a sparse encoding is key for good classification results? However, we think that the main point we wanted to convey in the referred part of the paper was poorly presented. We have tried to make the presentation of the analysis part better in the new version (arxiv version 3) of the paper.
>>> ...does not appear to apply to a wide audience as other papers have >>> done a comparison of unsupervised methods in the past' Those comparisons are, as explained in the paper, always done in combination with a subsequent supervised classification algorithm on a high-level object classification task. We want to avoid exactly this setting. We think that the paper is relevant for researchers working on unsupervised (feature) learning methods and for researchers working in Computer Vision. A new version (arxiv version 3) of the paper was uploaded on March 11. [R1] M. Brown, G. Hua, and S. Winder. Discriminative learning of local image descriptors. [R2] A. Coates, H. Lee, and A. Ng. An analysis of single-layer networks in unsupervised feature learning. [R3] A. Coates and A. Ng. The importance of encoding versus training with sparse coding and vector quantization.
OznsOsb6sDFeV
Unsupervised Feature Learning for low-level Local Image Descriptors
[ "Christian Osendorfer", "Justin Bayer", "Patrick van der Smagt" ]
Unsupervised feature learning has shown impressive results for a wide range of input modalities, in particular for object classification tasks in computer vision. Using a large amount of unlabeled data, unsupervised feature learning methods are utilized to construct high-level representations that are discriminative enough for subsequently trained supervised classification algorithms. However, it has never been quantitatively investigated yet how well unsupervised learning methods can find low-level representations for image patches without any supervised refinement. In this paper we examine the performance of pure unsupervised methods on a low-level correspondence task, a problem that is central to many Computer Vision applications. We find that a special type of Restricted Boltzmann Machines performs comparably to hand-crafted descriptors. Additionally, a simple binarization scheme produces compact representations that perform better than several state-of-the-art descriptors.
[ "methods", "unsupervised feature", "representations", "descriptors", "impressive results", "wide range", "input modalities", "particular" ]
https://openreview.net/pdf?id=OznsOsb6sDFeV
https://openreview.net/forum?id=OznsOsb6sDFeV
llHR9RITMyCTz
review
1,362,057,120,000
OznsOsb6sDFeV
[ "everyone" ]
[ "anonymous reviewer e954" ]
ICLR.cc/2013/conference
2013
title: review of Unsupervised Feature Learning for low-level Local Image Descriptors review: This paper proposes to evaluate feature learning algorithms by using a low-level vision task, namely image patch matching. The authors compare three feature learning algorithms, GRBM, spGRBM and mcRBM, against engineered features like SIFT and others. The empirical results unfortunately show that the learned features are not very competitive for this task. Overall, the paper does not propose any new algorithm and does not improve performance on any task. It does raise an interesting question though, which is how to assess feature learning algorithms. This is a core problem in the field and its solution could help a) assess which feature learning methods are better and b) design algorithms that produce better features (because we would have better loss functions to train them). Unfortunately, this work is too preliminary to advance our understanding towards the solution of this problem (see below for more detailed comments). Overall quality is fairly poor: there are missing references, there are incorrect claims, and the empirical validation is insufficient. Pros -- The motivation is very good. We need to improve the way we compare feature learning methods. -- The filters visualization is nice. Cons -- It is debatable whether the chosen task is any better for assessing the quality of feature learning methods. The paper almost suggested a better solution in the introduction: we should compare across several tasks (from low-level vision like matching to high-level vision like object classification). If a representation is better across several tasks, then it must capture many relevant properties of the input. In other words, it is always possible to tweak a learning algorithm to give good results on one dataset, but it is much more interesting to see it working well across several different tasks after training on generic natural images, for instance. -- The choice of the feature learning methods is questionable: why are only generative models considered here? The authors do mention that other methods were tried and worked worse, however it is hard to believe that more discriminative approaches work worse on the chosen task. In particular, given the matching task, it seems that a method that trains using a ranking loss (learning nearby features for similar patches and far-away features for distant inputs) should work better. See: H. Mobahi, R. Collobert, J. Weston. Deep Learning from Temporal Coherence in Video. ICML 2009. -- The overall results are pretty disappointing. Feature learning methods do not outperform the best engineered features. They do not outperform even when the comparison is unfair: for instance, the authors use 128-dimensional SIFT but a much larger dimensionality for the learned features. Besides, the authors do not take into account time, neither the training time nor the time to extract these features. This should also be considered in the evaluation. More detailed comments: -- Missing references. It is not true that feature learning methods have never been assessed quantitatively without supervised fine-tuning. On a low-level vision task, I would refer to: Gary Huang, Marwan Mattar, Honglak Lee, Erik Learned-Miller. Learning to Align from Scratch. In Advances in Neural Information Processing Systems (NIPS) 25, 2012. Another missing reference is Memisevic, R. Gradient-based learning of higher-order image features. International Conference on Computer Vision (ICCV 2011),
and other similar papers where Memisevic trains features that relate pairs of image patches. -- ROC curves should be reported at least in the appendix, if not in the main text. -- I do not understand why the SIFT results in Table 1a differ from those in Table 1b.
OznsOsb6sDFeV
Unsupervised Feature Learning for low-level Local Image Descriptors
[ "Christian Osendorfer", "Justin Bayer", "Patrick van der Smagt" ]
Unsupervised feature learning has shown impressive results for a wide range of input modalities, in particular for object classification tasks in computer vision. Using a large amount of unlabeled data, unsupervised feature learning methods are utilized to construct high-level representations that are discriminative enough for subsequently trained supervised classification algorithms. However, it has never been quantitatively investigated yet how well unsupervised learning methods can find low-level representations for image patches without any supervised refinement. In this paper we examine the performance of pure unsupervised methods on a low-level correspondence task, a problem that is central to many Computer Vision applications. We find that a special type of Restricted Boltzmann Machines performs comparably to hand-crafted descriptors. Additionally, a simple binarization scheme produces compact representations that perform better than several state-of-the-art descriptors.
[ "methods", "unsupervised feature", "representations", "descriptors", "impressive results", "wide range", "input modalities", "particular" ]
https://openreview.net/pdf?id=OznsOsb6sDFeV
https://openreview.net/forum?id=OznsOsb6sDFeV
Hu7OueWCO4ur9
review
1,361,947,080,000
OznsOsb6sDFeV
[ "everyone" ]
[ "anonymous reviewer f716" ]
ICLR.cc/2013/conference
2013
title: review of Unsupervised Feature Learning for low-level Local Image Descriptors review: This paper proposes a dataset to benchmark the correspondence problem in computer vision. The dataset consists of image patches that have ground-truth matching pairs (obtained using separate algorithms). Extensive experiments show that RBMs perform well compared to hand-crafted features. I like the idea of using intermediate evaluation metrics to measure the progress of unsupervised feature learning and deep learning. That said, comparing the methods on noisy ground truth (the results of other algorithms) may introduce some bias. The experiments could be made stronger if algorithms such as autoencoders or K-means (Coates et al, 2011, An Analysis of Single-Layer Networks in Unsupervised Feature Learning) were considered. If we can consider the ground truth as clean, would supervised training of a deep (convolutional) network on the ground truth produce better results?
OznsOsb6sDFeV
Unsupervised Feature Learning for low-level Local Image Descriptors
[ "Christian Osendorfer", "Justin Bayer", "Patrick van der Smagt" ]
Unsupervised feature learning has shown impressive results for a wide range of input modalities, in particular for object classification tasks in computer vision. Using a large amount of unlabeled data, unsupervised feature learning methods are utilized to construct high-level representations that are discriminative enough for subsequently trained supervised classification algorithms. However, it has never been quantitatively investigated yet how well unsupervised learning methods can find low-level representations for image patches without any supervised refinement. In this paper we examine the performance of pure unsupervised methods on a low-level correspondence task, a problem that is central to many Computer Vision applications. We find that a special type of Restricted Boltzmann Machines performs comparably to hand-crafted descriptors. Additionally, a simple binarization scheme produces compact representations that perform better than several state-of-the-art descriptors.
[ "methods", "unsupervised feature", "representations", "descriptors", "impressive results", "wide range", "input modalities", "particular" ]
https://openreview.net/pdf?id=OznsOsb6sDFeV
https://openreview.net/forum?id=OznsOsb6sDFeV
J5RZOWF9WLSi0
comment
1,363,043,100,000
llHR9RITMyCTz
[ "everyone" ]
[ "Christian Osendorfer" ]
ICLR.cc/2013/conference
2013
reply: Dear e954, thank you for your detailed feedback. We don't argue that the chosen task should replace existing benchmarks. Instead, we think that it supplements them, because it covers aspects of unsupervised feature learning that have been ignored so far. Note that by avoiding any subsequent supervision we mean not only no supervised fine-tuning of the learnt architecture but also no supervised learning on the representations at all (as is, e.g., still done in [R2]). This is hopefully clearer in version 3 of the paper; we removed words like 'refinement' and 'fine tuning'. Thank you for pointing out missing references [R1, R2, R3]. We added [R2, R4, R5] to the paper in order to avoid the impression that we are not aware of these approaches (we think that R4 fits better than R1 and R5 better than R3). We were aware of them, but did not mention these approaches because they (i) rely on a supervised signal and/or (ii) are concerned with high-level correspondences (we consider faces as high-level entities). Current work investigates some of these methods, because utilizing the available pairing information should be beneficial for good overall performance. We are not arguing that discriminative methods work worse on this dataset. However, in this paper we are not striving to achieve state-of-the-art results: we investigate a new benchmark for unsupervised learning and test how well existing unsupervised methods do. We have tried to make the analysis part in version 3 of the paper clearer. We don't think that our claims are incorrect: we manage to perform comparably to SIFT when the size of the representation is unconstrained. It is not clear whether, for standard distance computations, a bigger representation (in particular a sparse one) is actually an advantage. We also manage to perform better than several well-known compact descriptors when we binarize the learnt representations. We also don't think that the evaluation is insufficient. The time to extract the features will clearly be dominated by the SIFT keypoint detector, because computing a new representation given a patch is a sequence of matrix operations. Training times have been added to the new version of the paper. ROC curves will be in a larger technical report that describes in more detail the performance of a bigger number of feature learning algorithms (both supervised and unsupervised) on this dataset. Thank you for pointing out a missing experiment: training on general natural image patches (not extracted around keypoints) and then evaluating on the dataset. We are trying to incorporate results for this experiment in the final version of the paper. It should also be very interesting to experiment with the idea of unsupervised alignment [R2], especially as every patch already has some implicit general alignment information from its keypoint. In Table 1b, SIFT is not normalized and is used as a 128-byte descriptor (in Table 1a, a 128-double descriptor with normalized entries is used). A new version (arxiv version 3) of the paper was uploaded on March 11. [R1] H. Mobahi, R. Collobert, J. Weston. Deep Learning from Temporal Coherence in Video. [R2] Gary Huang, Marwan Mattar, Honglak Lee, Erik Learned-Miller. Learning to Align from Scratch. [R3] Memisevic, R. Gradient-based learning of higher-order image features. [R4] S. Chopra, R. Hadsell, and Y. LeCun. Learning a similarity metric discriminatively, with application to face verification. [R5] J. Susskind, R. Memisevic, G. Hinton, and M. Pollefeys.
Modeling the joint density of two images under a variety of transformations.
OznsOsb6sDFeV
Unsupervised Feature Learning for low-level Local Image Descriptors
[ "Christian Osendorfer", "Justin Bayer", "Patrick van der Smagt" ]
Unsupervised feature learning has shown impressive results for a wide range of input modalities, in particular for object classification tasks in computer vision. Using a large amount of unlabeled data, unsupervised feature learning methods are utilized to construct high-level representations that are discriminative enough for subsequently trained supervised classification algorithms. However, it has never been quantitatively investigated yet how well unsupervised learning methods can find low-level representations for image patches without any supervised refinement. In this paper we examine the performance of pure unsupervised methods on a low-level correspondence task, a problem that is central to many Computer Vision applications. We find that a special type of Restricted Boltzmann Machines performs comparably to hand-crafted descriptors. Additionally, a simple binarization scheme produces compact representations that perform better than several state-of-the-art descriptors.
[ "methods", "unsupervised feature", "representations", "descriptors", "impressive results", "wide range", "input modalities", "particular" ]
https://openreview.net/pdf?id=OznsOsb6sDFeV
https://openreview.net/forum?id=OznsOsb6sDFeV
rH1Wu2q8W0ujI
comment
1,363,042,680,000
Hu7OueWCO4ur9
[ "everyone" ]
[ "Christian Osendorfer" ]
ICLR.cc/2013/conference
2013
reply: Dear f716, thank you for your feedback. We evaluated more models than those shown in Table 1, but they do not perform as well as spGRBM, so we decided to leave them out of the table in order to avoid clutter. The models are mentioned in section 3.5 of the paper (arxiv version 2 of the paper). We are currently running experiments with deep convolutional networks to determine how much improvement supervision signals can achieve. We uploaded a new version (on March 11) that changes some bits of the presentation. We also evaluated K-Means on the dataset (it is mentioned under 'Other models', because its performance is below that of spGRBM).
OznsOsb6sDFeV
Unsupervised Feature Learning for low-level Local Image Descriptors
[ "Christian Osendorfer", "Justin Bayer", "Patrick van der Smagt" ]
Unsupervised feature learning has shown impressive results for a wide range of input modalities, in particular for object classification tasks in computer vision. Using a large amount of unlabeled data, unsupervised feature learning methods are utilized to construct high-level representations that are discriminative enough for subsequently trained supervised classification algorithms. However, it has never been quantitatively investigated yet how well unsupervised learning methods can find low-level representations for image patches without any supervised refinement. In this paper we examine the performance of pure unsupervised methods on a low-level correspondence task, a problem that is central to many Computer Vision applications. We find that a special type of Restricted Boltzmann Machines performs comparably to hand-crafted descriptors. Additionally, a simple binarization scheme produces compact representations that perform better than several state-of-the-art descriptors.
[ "methods", "unsupervised feature", "representations", "descriptors", "impressive results", "wide range", "input modalities", "particular" ]
https://openreview.net/pdf?id=OznsOsb6sDFeV
https://openreview.net/forum?id=OznsOsb6sDFeV
3wmH3H7ucKwu0
review
1,361,968,260,000
OznsOsb6sDFeV
[ "everyone" ]
[ "anonymous reviewer 3338" ]
ICLR.cc/2013/conference
2013
title: review of Unsupervised Feature Learning for low-level Local Image Descriptors review: This paper is a survey of unsupervised learning techniques applied to the unsupervised task of descriptor matching. Various methods such as Gaussian RBMs, sparse RBMs, and mcRBMs were applied to image patches and the resulting feature vectors were used in a matching task. These methods were compared to standard hand-crafted descriptors such as SIFT, SURF, etc. Pros Provides a survey of descriptors for matching pairs of image patches. Cons It is not clear what the purpose of the paper is. The paper compares several learning algorithms on the task of what essentially seems like clustering image patches to find their correspondences. The ground truth correspondences of the dataset were found by clustering the image patches to find correspondences... In this paper, simple clustering methods such as k-means or sparse coding, which are less complicated models than RBMs and are meant for finding correspondences, were not compared against. Additionally, training in a supervised way makes much more sense for finding correspondences. It is not clear from the paper alone what is considered a match between descriptors. Is it the distance being below a threshold, the pair of descriptors being closer than any other pair of descriptors, etc.? The preprocessing of the image patches seems different for each method. This could lead to wildly different scales of the input pixels and thus of the corresponding representations of the various methods. In section 3.3 it is mentioned that it is surprising that L1 normalization works better because sparsity hurts classification typically. However, the sparsity in the paper is applied directly before the distance calculation, and not before being fed as input to a classifier, which is a different setup and would thus be expected to behave differently with sparsity. The latter is the typical setup in which sparsity is found to hurt classification performance, because information is being thrown away before the classifier is used. Novelty and Quality: This paper is not novel in that it is a survey of prior work applied to matching descriptors. It is well written but does not appear to apply to a wide audience, as other papers have done a comparison of unsupervised methods in the past, for example: - A. Coates, H. Lee, and A. Ng. An analysis of single-layer networks in unsupervised feature learning. In Proc. AISTATS, 2011. - A. Coates and A. Ng. The importance of encoding versus training with sparse coding and vector quantization. In Proc. ICML, 2011.
-4IA4WgNAy4Wx
What Regularized Auto-Encoders Learn from the Data Generating Distribution
[ "Guillaume Alain", "Yoshua Bengio" ]
What do auto-encoders learn about the underlying data generating distribution? Recent work suggests that some auto-encoder variants do a good job of capturing the local manifold structure of data. This paper clarifies some of these previous intuitive observations by showing that minimizing a particular form of regularized reconstruction error yields a reconstruction function that locally characterizes the shape of the data generating density. We show that the auto-encoder captures the score (derivative of the log-density with respect to the input), along with the second derivative of the density and the local mean associated with the unknown data-generating density. This is the second result linking denoising auto-encoders and score matching, but in way that is different from previous work, and can be applied to the case when the auto-encoder reconstruction function does not necessarily correspond to the derivative of an energy function. The theorems provided here are completely generic and do not depend on the parametrization of the auto-encoder: they show what the auto-encoder would tend to if given enough capacity and examples. These results are for a contractive training criterion we show to be similar to the denoising auto-encoder training criterion with small corruption noise, but with contraction applied on the whole reconstruction function rather than just encoder. Similarly to score matching, one can consider the proposed training criterion as a convenient alternative to maximum likelihood, i.e., one not involving a partition function.
[ "data", "distribution", "density", "learn", "reconstruction function", "derivative", "training criterion", "recent work", "variants", "good job" ]
https://openreview.net/pdf?id=-4IA4WgNAy4Wx
https://openreview.net/forum?id=-4IA4WgNAy4Wx
mmLgxpNpu1xGP
comment
1,363,216,980,000
kGNDPAwn1jGUc
[ "everyone" ]
[ "Guillaume Alain, Yoshua Bengio" ]
ICLR.cc/2013/conference
2013
reply: > It's interesting that in the classical CAE, there is an implicit contractive effect on g() via the side effect of tying the weights whereas in the form of the DAE presented, g() is explicitly made contractive via r(). Have you investigated the effective difference? Not really, no. The results that we have for general autoencoders r do not even assume that r is decomposable into two meaningful steps (encode, decode). However, in our experiments we found better results (due to optimization issues) with untied weights (and a contractive or denoising penalty on the whole of r(.)=decoder(encoder(.)) rather than just the encoder). We have also added (in new sec. 3.2.3) a brief discussion of how these new results (on r(x)-x estimating the score) contradict previous interpretations of the reconstruction error of auto-encoders (Ranzato & Hinton NIPS 2007) as being akin to an energy function. Indeed, whereas both interpretations agree on having a low reconstruction error at training examples, the score interpretation suggests (and we see it experimentally) other (median) regions that are local minima of the density (maxima of the energy), where the reconstruction error is also low. > Although in the caption, you mention the difference between upper/lower and left/right subplots in Fig 4, I would prefer those (model 1/model 2) to be labeled directly on the subplots, it would just make for easier parsing. The section with Figure 4 has been edited and we are now showing only two plots. We have made all the suggested changes regarding typos and form. Please also have a look at a new short section (now identified as 3.2.5) that we just added in.
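A minimal worked-equation sketch of the point made in the reply above, using the small-Gaussian-corruption asymptotics discussed in this thread (the proportionality constant sigma^2 is assumed from that limit and is stated only for illustration):

% Score interpretation of the reconstruction vector in the small-noise limit:
r(x) - x \;\approx\; \sigma^2 \, \frac{\partial \log p(x)}{\partial x}
\qquad\Longrightarrow\qquad
\lVert r(x) - x \rVert^2 \;\approx\; \sigma^4 \left\lVert \frac{\partial \log p(x)}{\partial x} \right\rVert^2 .

The right-hand side vanishes wherever the score is zero, i.e. at the density maxima near training examples but also at the low-density stationary points between modes, which is why the reconstruction error cannot be read directly as an energy.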
-4IA4WgNAy4Wx
What Regularized Auto-Encoders Learn from the Data Generating Distribution
[ "Guillaume Alain", "Yoshua Bengio" ]
What do auto-encoders learn about the underlying data generating distribution? Recent work suggests that some auto-encoder variants do a good job of capturing the local manifold structure of data. This paper clarifies some of these previous intuitive observations by showing that minimizing a particular form of regularized reconstruction error yields a reconstruction function that locally characterizes the shape of the data generating density. We show that the auto-encoder captures the score (derivative of the log-density with respect to the input), along with the second derivative of the density and the local mean associated with the unknown data-generating density. This is the second result linking denoising auto-encoders and score matching, but in way that is different from previous work, and can be applied to the case when the auto-encoder reconstruction function does not necessarily correspond to the derivative of an energy function. The theorems provided here are completely generic and do not depend on the parametrization of the auto-encoder: they show what the auto-encoder would tend to if given enough capacity and examples. These results are for a contractive training criterion we show to be similar to the denoising auto-encoder training criterion with small corruption noise, but with contraction applied on the whole reconstruction function rather than just encoder. Similarly to score matching, one can consider the proposed training criterion as a convenient alternative to maximum likelihood, i.e., one not involving a partition function.
[ "data", "distribution", "density", "learn", "reconstruction function", "derivative", "training criterion", "recent work", "variants", "good job" ]
https://openreview.net/pdf?id=-4IA4WgNAy4Wx
https://openreview.net/forum?id=-4IA4WgNAy4Wx
kGNDPAwn1jGUc
review
1,362,214,560,000
-4IA4WgNAy4Wx
[ "everyone" ]
[ "anonymous reviewer f62a" ]
ICLR.cc/2013/conference
2013
title: review of What Regularized Auto-Encoders Learn from the Data Generating Distribution review: Many unsupervised representation-learning algorithms are based on minimizing reconstruction error. This paper aims to address the important question of what these training criteria actually learn about the input density. The paper makes two main contributions: it first makes a link between denoising autoencoders (DAE) and contractive autoencoders (CAE), showing that the DAE with very small Gaussian corruption and squared error is actually a particular kind of CAE (Theorem 1). Then, in the context of the contractive training criteria, it answers the question 'what does an auto-encoder learn about the data-generating distribution': it estimates both the first and second derivatives of the log data-generating density (Theorem 2) as well as various other local properties of this log-density. An important aspect of this work is that, compared to previous work that linked DAEs to score matching, the results in this paper do not require the reconstruction function of the AE to correspond to the score function of a density, making these results more general. Positive aspects of the paper: * A pretty theoretical paper (for representation learning) but well presented, in that most of the heavy math is in the appendix and the main text nicely presents the key results * Following the theorems, I like the way in which the various assumptions (perfect world scenario) are gradually pulled away to show what can still be learned about the data-generating distribution; in particular, the simple numerical example (which could easily be re-implemented) is a nice way to connect the abstractness of the result to something concrete Negative aspects of the paper: * Since the results heavily rely on derivatives with respect to the data, they only apply to continuous data (extensions to discrete data are mentioned as future work) Comments, Questions -------- It's interesting that in the classical CAE, there is an implicit contractive effect on g() via the side effect of tying the weights whereas in the form of the DAE presented, g() is explicitly made contractive via r(). Have you investigated the effective difference? Minor comments, typos, etc -------------------------- Fig 2 - green is not really green, it's more like turquoise - 'high-capcity' -> 'high-capacity' - the figure makes reference to lambda, but at this point in the paper lambda is yet to be defined. Objective function for L_DAE (top of p4) - the last term o() coming from the Taylor expansion is explicitly discussed in the appendix (and perhaps obvious here) but is not explicitly defined in the main text. Right before 3.2.4: 'high dimensional <data> (such as images)'. Although in the caption you mention the difference between upper/lower and left/right subplots in Fig 4, I would prefer those (model 1/model 2) to be labeled directly on the subplots; it would just make for easier parsing.
-4IA4WgNAy4Wx
What Regularized Auto-Encoders Learn from the Data Generating Distribution
[ "Guillaume Alain", "Yoshua Bengio" ]
What do auto-encoders learn about the underlying data generating distribution? Recent work suggests that some auto-encoder variants do a good job of capturing the local manifold structure of data. This paper clarifies some of these previous intuitive observations by showing that minimizing a particular form of regularized reconstruction error yields a reconstruction function that locally characterizes the shape of the data generating density. We show that the auto-encoder captures the score (derivative of the log-density with respect to the input), along with the second derivative of the density and the local mean associated with the unknown data-generating density. This is the second result linking denoising auto-encoders and score matching, but in way that is different from previous work, and can be applied to the case when the auto-encoder reconstruction function does not necessarily correspond to the derivative of an energy function. The theorems provided here are completely generic and do not depend on the parametrization of the auto-encoder: they show what the auto-encoder would tend to if given enough capacity and examples. These results are for a contractive training criterion we show to be similar to the denoising auto-encoder training criterion with small corruption noise, but with contraction applied on the whole reconstruction function rather than just encoder. Similarly to score matching, one can consider the proposed training criterion as a convenient alternative to maximum likelihood, i.e., one not involving a partition function.
[ "data", "distribution", "density", "learn", "reconstruction function", "derivative", "training criterion", "recent work", "variants", "good job" ]
https://openreview.net/pdf?id=-4IA4WgNAy4Wx
https://openreview.net/forum?id=-4IA4WgNAy4Wx
EEBiEfDQjdwft
comment
1,363,217,640,000
1WIBWMxZeG4UP
[ "everyone" ]
[ "Guillaume Alain, Yoshua Bengio" ]
ICLR.cc/2013/conference
2013
reply: > I think this is quite an important result: even though limited to this specific type of model As argued in a previous response (to reviewer 4222), we believe that at least at a qualitative level the same is true in general of regularized auto-encoders. We copy here the response: 'We have worked on the denoising/contractive auto-encoders with squared error because we were able to prove our results with them, but we believe that other regularized auto-encoders (even those with discrete inputs) also estimate something related to the score, i.e., the direction in input space in which probability increases the most. The intuition behind that statement can be obtained by studying figure 2: the estimation of this direction arises out of the conflict between reconstructing training examples well and making the auto-encoder as constant (regularized) as possible.' We have added a brief discussion in the conclusion about how we believe these results could be extended to models with discrete inputs, following the tracks of ratio matching (Hyvarinen 2007). We have also added (in new sec. 3.2.3) a brief discussion of how these new results (on r(x)-x estimating the score) contradict previous interpretations of the reconstruction error of auto-encoders (Ranzato & Hinton NIPS 2007) as being akin to an energy function. Indeed, whereas both interpretations agree on having a low reconstruction error at training examples, the score interpretation suggests (and we see it experimentally) other (median) regions that are local minima of the density (maxima of the energy), where the reconstruction error is also low. > I find the experiment shown in Figure 4 somewhat confusing. We have addressed this concern, which many of the reviewers had. The whole section 3.2.3 has been edited and we decided to remove two of the plots, which may have introduced confusion. Reviewers seemed to focus on the difference between the two models and wanted to know why the outcomes were different. They were only different because of the non-convexity of the problem and the dependence on initial conditions (along with the random noise used for training). At the end of the day, the point is that the vector field points in the direction of the score (the negative energy gradient), and that is illustrated nicely by the two remaining plots (far and close distance). > Section 3.2.4. I am not clear what is the importance of this section. It seems to state the relationship between the score and reconstruction derivative. Are you referring to section 3.3? If you are indeed referring to section 3.2.4, the idea there is that it is possible to start the investigation from a trained DAE where the noise level used for training is unknown to us (but it is known by the person who trained the DAE). In that case, we would be in a situation where the best that could be done is to recover the energy function gradient up to a scaling constant. > Is it possible to link these results and theory to other forms of auto-encoders, such as sparse auto-encoders or with different type of non-linear activation functions? It would be very useful to have similar analysis for more general types of auto-encoders too. See our first response above. Please also have a look at a new short section (now identified as 3.2.5) that we just added in.
-4IA4WgNAy4Wx
What Regularized Auto-Encoders Learn from the Data Generating Distribution
[ "Guillaume Alain", "Yoshua Bengio" ]
What do auto-encoders learn about the underlying data generating distribution? Recent work suggests that some auto-encoder variants do a good job of capturing the local manifold structure of data. This paper clarifies some of these previous intuitive observations by showing that minimizing a particular form of regularized reconstruction error yields a reconstruction function that locally characterizes the shape of the data generating density. We show that the auto-encoder captures the score (derivative of the log-density with respect to the input), along with the second derivative of the density and the local mean associated with the unknown data-generating density. This is the second result linking denoising auto-encoders and score matching, but in way that is different from previous work, and can be applied to the case when the auto-encoder reconstruction function does not necessarily correspond to the derivative of an energy function. The theorems provided here are completely generic and do not depend on the parametrization of the auto-encoder: they show what the auto-encoder would tend to if given enough capacity and examples. These results are for a contractive training criterion we show to be similar to the denoising auto-encoder training criterion with small corruption noise, but with contraction applied on the whole reconstruction function rather than just encoder. Similarly to score matching, one can consider the proposed training criterion as a convenient alternative to maximum likelihood, i.e., one not involving a partition function.
[ "data", "distribution", "density", "learn", "reconstruction function", "derivative", "training criterion", "recent work", "variants", "good job" ]
https://openreview.net/pdf?id=-4IA4WgNAy4Wx
https://openreview.net/forum?id=-4IA4WgNAy4Wx
1WIBWMxZeG4UP
review
1,362,368,160,000
-4IA4WgNAy4Wx
[ "everyone" ]
[ "anonymous reviewer 7ffb" ]
ICLR.cc/2013/conference
2013
title: review of What Regularized Auto-Encoders Learn from the Data Generating Distribution review: The paper presents a method to analyse how and what auto-encoder models that use reconstruction error together with a regularisation cost are learning with respect to the underlying data distribution. The paper focuses on contractive auto-encoder models and also reformulates the denoising auto-encoder as a form of contractive auto-encoder where the contraction is achieved through regularisation of the derivative of the reconstruction function with respect to the input data. The rest of the paper presents a theoretical analysis of this form of auto-encoders and also provides a couple of toy examples showing empirical support. The paper is easy to read and the theoretical analysis is nicely split between the main paper and the appendices. The details in the main paper are sufficient for the reader to understand the concept that is presented in the paper. The theory and empirical data show that one can recover the true data distribution when using contractive auto-encoders of the given type. I think this is quite an important result: even though it is limited to this specific type of model, quantitative analyses of the generative capabilities of auto-encoders have been limited. I find the experiment shown in Figure 4 somewhat confusing. The text suggests that the only difference between the two models is their initial conditions and optimisation hyperparameters. Is the main reason due to initial conditions or hyperparameters? Which hyperparameters? Is the difference in initial conditions just a different random seed or a different type of initialisation of the network? I think this requires a more in-depth explanation. Is it normal to expect such different solutions depending on initial conditions? Section 3.2.4. I am not clear what is the importance of this section. It seems to state the relationship between the score and reconstruction derivative. Is it possible to link these results and theory to other forms of auto-encoders, such as sparse auto-encoders or with different type of non-linear activation functions? It would be very useful to have similar analysis for more general types of auto-encoders too.
-4IA4WgNAy4Wx
What Regularized Auto-Encoders Learn from the Data Generating Distribution
[ "Guillaume Alain", "Yoshua Bengio" ]
What do auto-encoders learn about the underlying data generating distribution? Recent work suggests that some auto-encoder variants do a good job of capturing the local manifold structure of data. This paper clarifies some of these previous intuitive observations by showing that minimizing a particular form of regularized reconstruction error yields a reconstruction function that locally characterizes the shape of the data generating density. We show that the auto-encoder captures the score (derivative of the log-density with respect to the input), along with the second derivative of the density and the local mean associated with the unknown data-generating density. This is the second result linking denoising auto-encoders and score matching, but in way that is different from previous work, and can be applied to the case when the auto-encoder reconstruction function does not necessarily correspond to the derivative of an energy function. The theorems provided here are completely generic and do not depend on the parametrization of the auto-encoder: they show what the auto-encoder would tend to if given enough capacity and examples. These results are for a contractive training criterion we show to be similar to the denoising auto-encoder training criterion with small corruption noise, but with contraction applied on the whole reconstruction function rather than just encoder. Similarly to score matching, one can consider the proposed training criterion as a convenient alternative to maximum likelihood, i.e., one not involving a partition function.
[ "data", "distribution", "density", "learn", "reconstruction function", "derivative", "training criterion", "recent work", "variants", "good job" ]
https://openreview.net/pdf?id=-4IA4WgNAy4Wx
https://openreview.net/forum?id=-4IA4WgNAy4Wx
fftnhM9InbLMv
comment
1,363,217,640,000
CC5h3a1ESBCav
[ "everyone" ]
[ "Guillaume Alain, Yoshua Bengio" ]
ICLR.cc/2013/conference
2013
reply: > It would be good to compare these plots with other regularizers and show that getting log(p) for the contractive one is somehow advantageous. We have worked on the denoising/contractive auto-encoders with squared error because we were able to prove our results with them, but we believe that other regularized auto-encoders (even those with discrete inputs) also estimate something related to the score, i.e., the direction in input space in which probability increases the most. The intuition behind that statement can be obtained by studying figure 2: the estimation of this direction arises out of the conflict between reconstructing training examples well and making the auto-encoder as constant (regularized) as possible. Other regularizers (e.g. cross-entropy) as well as the challenging case of discrete data are in the back of our minds and we would very much like to extend the mathematical results to these settings as well. We have added a brief discussion in the conclusion about how we believe these results could be extended to models with discrete inputs, following the tracks of ratio matching (Hyvarinen 2007). We have also added (in new sec. 3.2.3) a brief discussion of how these new results (on r(x)-x estimating the score) contradict previous interpretations of the reconstruction error of auto-encoders (Ranzato & Hinton NIPS 2007) as being akin to an energy function. Indeed, whereas both interpretations agree on having a low reconstruction error at training examples, the score interpretation suggests (and we see it experimentally) other (median) regions that are local minima of the density (maxima of the energy), where the reconstruction error is also low. > it would be good to know something not in the limit of penalty going to zero We agree. We did a few artificial-data experiments. In fact, we ran the experiment shown in section 3.2.2 using values of lambda ranging from 10^-6 to 10^2 to observe the behavior of the optimal solutions when the penalty factor varies smoothly. The optimal solution degrades progressively into something comparable to what is shown in Figure 2. It becomes a series of increasing plateaus matching the density peaks. Regions of lesser density are used to 'catch up' with the fact that the reconstruction function r(x) should be relatively close to x. > Figure 4. - 'Top plots are for one model and bottom plots for another' - what are the two models? It would be good to specify this in the figure, e.g. denoising autoencoders with different initial conditions and parameter settings. We have addressed this concern, which many of the reviewers had. The whole section 3.2.3 has been edited and we decided to remove two of the plots, which may have introduced confusion. Reviewers seemed to focus on the difference between the two models and wanted to know why the outcomes were different. They were only different because of the non-convexity of the problem and the dependence on initial conditions (along with the random noise used for training). At the end of the day, the point is that the vector field points in the direction of the score (the negative energy gradient), and that is illustrated nicely by the two remaining plots (far and close distance). > Section 3.2.5 is important and should be written a little more clearly. We have reworked that section (now identified as 3.2.6) to emphasize the main point: whereas Vincent 2011 showed that denoising auto-encoders with a particular form estimated the score, our results extend this to a very large family of estimators (including the non-parametric case).
The section also shows how to interpret Vincent's results so as to show that any auto-encoder whose reconstruction function is the derivative of an energy function estimates a score. Instead, the rest of our paper shows that we obtain an estimator of the score even without that strong constraint on the form of the auto-encoder. > I would suggest deriving (13) in the appendix directly from (11) without having the reader recall or read about Euler-Lagrange equations We must admit to not having understood the hints that you have given us. If there were indeed such a way to, as you say, spare the reader the headaches of Euler-Lagrange, we agree that it would be an interesting approach. > You don't actually derive formulas for the second moments in the appendix like you do for the first moment; do you mean they can be similarly derived? Yes, an asymptotic expansion can be derived in a similar way for the second moment. That derivation is 2 to 3 times longer and is not very useful in the context of this paper. Please also have a look at a new short section (now identified as 3.2.5) that we just added in.
-4IA4WgNAy4Wx
What Regularized Auto-Encoders Learn from the Data Generating Distribution
[ "Guillaume Alain", "Yoshua Bengio" ]
What do auto-encoders learn about the underlying data generating distribution? Recent work suggests that some auto-encoder variants do a good job of capturing the local manifold structure of data. This paper clarifies some of these previous intuitive observations by showing that minimizing a particular form of regularized reconstruction error yields a reconstruction function that locally characterizes the shape of the data generating density. We show that the auto-encoder captures the score (derivative of the log-density with respect to the input), along with the second derivative of the density and the local mean associated with the unknown data-generating density. This is the second result linking denoising auto-encoders and score matching, but in way that is different from previous work, and can be applied to the case when the auto-encoder reconstruction function does not necessarily correspond to the derivative of an energy function. The theorems provided here are completely generic and do not depend on the parametrization of the auto-encoder: they show what the auto-encoder would tend to if given enough capacity and examples. These results are for a contractive training criterion we show to be similar to the denoising auto-encoder training criterion with small corruption noise, but with contraction applied on the whole reconstruction function rather than just encoder. Similarly to score matching, one can consider the proposed training criterion as a convenient alternative to maximum likelihood, i.e., one not involving a partition function.
[ "data", "distribution", "density", "learn", "reconstruction function", "derivative", "training criterion", "recent work", "variants", "good job" ]
https://openreview.net/pdf?id=-4IA4WgNAy4Wx
https://openreview.net/forum?id=-4IA4WgNAy4Wx
CC5h3a1ESBCav
review
1,362,321,540,000
-4IA4WgNAy4Wx
[ "everyone" ]
[ "anonymous reviewer 4222" ]
ICLR.cc/2013/conference
2013
title: review of What Regularized Auto-Encoders Learn from the Data Generating Distribution review: This paper shows that we can relate the solution of a specific autoencoder to the data generating distribution. Specifically, solving for a general reconstruction function with a regularizer that is the L2 penalty on the contraction of the reconstruction relates the reconstruction function to the derivative of the data log-probability. This is in the limit of small regularization. The paper also shows that in the limit of small penalty this autoencoder is equivalent to a denoising autoencoder with small noise. Section 3.2.3: You get similar attractive behavior using almost any autoencoder with limited capacity. The point of your work is that with the specific form of regularization - the squared norm of the contraction of r - the quantity r(x)-x relates to the derivative of the log-probability (the proof seems to require it - it would be interesting to know what can be said about other regularizers). It would be good to compare these plots with other regularizers and show that getting log(p) for the contractive one is somehow advantageous. Otherwise this section doesn't support this paper in any way. As the authors point out, it would be good to know something not in the limit of penalty going to zero. At least have some numerical experiments, for example in 1d or 2d. Figure 4. - 'Top plots are for one model and bottom plots for another' - what are the two models? It would be good to specify this in the figure, e.g. denoising autoencoders with different initial conditions and parameter settings. Section 3.2.5 is important and should be written a little more clearly. I would suggest deriving (13) in the appendix directly from (11) without having the reader recall or read about Euler-Lagrange equations, and it might actually turn out to be simpler. Differentiating the first term with respect to r(x) gives r(x)-x. For the second term, one moves the derivative to the other side using integration by parts (and dropping the boundary term) and then just applies it to the product p(x)dr/dx, resulting in (13). Minor - twice you say in the appendix that the proof is in the appendix (e.g. after the statement of Theorem 1). The second-to-last sentence in the abstract is uncomfortable to read. This is probably not important, but can we assume that r given by (11) actually has a Taylor expansion in lambda? (probably, but in the spirit of proving things). You don't actually derive formulas for the second moments in the appendix like you do for the first moment; do you mean they can be similarly derived?
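For readers following this exchange, a minimal one-dimensional sketch of the integration-by-parts route the reviewer suggests; the functional below is the regularized criterion as described in this thread, and the equation numbers (11) and (13) refer to the paper, whose exact notation may differ:

% Regularized reconstruction criterion (1-D, density p, penalty weight lambda):
J[r] \;=\; \int \Bigl[ \bigl(r(x)-x\bigr)^2 \;+\; \lambda \, r'(x)^2 \Bigr] \, p(x)\, dx .
% Setting the first variation to zero and integrating the second term by parts
% (dropping the boundary term) gives, for every test function \delta r:
\int \Bigl[ \bigl(r(x)-x\bigr)\, p(x) \;-\; \lambda \, \tfrac{d}{dx}\bigl( p(x)\, r'(x) \bigr) \Bigr] \, \delta r(x) \, dx \;=\; 0 ,
% hence the stationarity condition
\bigl(r(x)-x\bigr)\, p(x) \;=\; \lambda \bigl( p'(x)\, r'(x) + p(x)\, r''(x) \bigr) .

In the limit of small lambda, r(x) is close to x (so r' \approx 1 and r'' \approx 0), and the condition reduces to r(x) - x \approx \lambda \, p'(x)/p(x) = \lambda \, \partial \log p(x) / \partial x, which is the score relationship discussed above.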
yGgjGkkbeFSbt
Saturating Auto-Encoder
[ "Ross Goroshin", "Yann LeCun" ]
We introduce a simple new regularizer for auto-encoders whose hidden-unit activation functions contain at least one zero-gradient (saturated) region. This regularizer explicitly encourages activations in the saturated region(s) of the corresponding activation function. We call these Saturating Auto-Encoders (SATAE). We show that the saturation regularizer explicitly limits the SATAE's ability to reconstruct inputs which are not near the data manifold. Furthermore, we show that a wide variety of features can be learned when different activation functions are used. Finally, connections are established with the Contractive and Sparse Auto-Encoders.
[ "satae", "simple new regularizer", "activation functions", "least", "region", "regularizer", "activations", "saturated region", "corresponding activation function", "saturation regularizer" ]
https://openreview.net/pdf?id=yGgjGkkbeFSbt
https://openreview.net/forum?id=yGgjGkkbeFSbt
x9pbTj7Nbg9Qs
review
1,361,902,200,000
yGgjGkkbeFSbt
[ "everyone" ]
[ "Yoshua Bengio" ]
ICLR.cc/2013/conference
2013
review: This is a cool investigation in a direction that I find fascinating, and I only have two remarks about minor points made in the paper. * Regarding the energy-based interpretation (that reconstruction error can be thought of as an energy function associated with an estimated probability function), there was a recent result which surprised me and challenges that view. In http://arxiv.org/abs/1211.4246 (What Regularized Auto-Encoders Learn from the Data Generating Distribution), Guillaume Alain and I found that denoising and contractive auto-encoders (where we penalize the Jacobian of the encoder-decoder function r(x)=decode(encode(x))) estimate the *score* of the data generating function in the vector r(x)-x (I should also mention Vincent 2011 Neural Comp. with a similar earlier result for a particular form of denoising auto-encoder where there is a well-defined energy function). So according to these results, the reconstruction error ||r(x)-x||^2 would be the magnitude of the score (derivative of energy wrt input). This is quite different from the energy itself, and it would suggest that the reconstruction error would be near zero both at a *minimum* of the energy (near training examples) AND at a *maximum* of the energy (e.g. near peaks that separate valleys of the energy). We have actually observed that empirically in toy problems where one can visualize the score in 2D. * Regarding the comparison in section 5.1 with the contractive auto-encoder, I believe that there is a correct but somewhat misleading statement. It says that the contractive penalty costs O(d * d_h) to compute whereas the saturating penalty only costs O(d_h) to compute. This is true, but since computing h in the first place also costs O(d * d_h) the overhead of the contractive penalty is small (it basically doubles the computational cost, which is much less problematic than multiplying it by d as the remark could lead a naive reader to believe).
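A toy numerical illustration of that last point (our own sketch, not taken from either paper): for a symmetric two-mode density in 1D, the score vanishes both at the density peaks and at the low-density dip between them, so a reconstruction field r(x)-x that estimates the score gives a near-zero reconstruction error at maxima and minima of the energy alike.

    # Toy 1D check: the score d/dx log p(x) of a two-mode mixture crosses zero at
    # both modes and at the dip between them, so ||r(x)-x||^2 ~ ||score||^2 is
    # small at maxima AND minima of the energy -log p(x).
    import numpy as np

    def log_p(x):
        # unnormalized two-mode mixture; the missing constant does not change the score
        return np.log(np.exp(-(x - 2.0) ** 2) + np.exp(-(x + 2.0) ** 2))

    x = np.linspace(-5.0, 5.0, 2000)
    score = np.gradient(log_p(x), x)                  # numerical d/dx log p(x)
    crossings = np.where(np.diff(np.sign(score)) != 0)[0]
    print("score crosses zero near x =", np.round(x[crossings], 2))
    # prints values close to [-2, 0, 2]: the two modes and the dip in between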
yGgjGkkbeFSbt
Saturating Auto-Encoder
[ "Ross Goroshin", "Yann LeCun" ]
We introduce a simple new regularizer for auto-encoders whose hidden-unit activation functions contain at least one zero-gradient (saturated) region. This regularizer explicitly encourages activations in the saturated region(s) of the corresponding activation function. We call these Saturating Auto-Encoders (SATAE). We show that the saturation regularizer explicitly limits the SATAE's ability to reconstruct inputs which are not near the data manifold. Furthermore, we show that a wide variety of features can be learned when different activation functions are used. Finally, connections are established with the Contractive and Sparse Auto-Encoders.
[ "satae", "simple new regularizer", "activation functions", "least", "region", "regularizer", "activations", "saturated region", "corresponding activation function", "saturation regularizer" ]
https://openreview.net/pdf?id=yGgjGkkbeFSbt
https://openreview.net/forum?id=yGgjGkkbeFSbt
__krPw9SreVyO
review
1,363,749,480,000
yGgjGkkbeFSbt
[ "everyone" ]
[ "Ross Goroshin" ]
ICLR.cc/2013/conference
2013
review: We thank the reviewers for their constructive comments. A revised version of the paper has been submitted to arXiv and should be available shortly. In addition to minor corrections and additions throughout the paper, we have added three new subsections: (1) a potential extension of the SATAE framework to include differentiable functions without a zero-gradient region, (2) experiments on the CIFAR-10 dataset, and (3) future work. We have also expanded the introduction to better motivate our approach.
yGgjGkkbeFSbt
Saturating Auto-Encoder
[ "Ross Goroshin", "Yann LeCun" ]
We introduce a simple new regularizer for auto-encoders whose hidden-unit activation functions contain at least one zero-gradient (saturated) region. This regularizer explicitly encourages activations in the saturated region(s) of the corresponding activation function. We call these Saturating Auto-Encoders (SATAE). We show that the saturation regularizer explicitly limits the SATAE's ability to reconstruct inputs which are not near the data manifold. Furthermore, we show that a wide variety of features can be learned when different activation functions are used. Finally, connections are established with the Contractive and Sparse Auto-Encoders.
[ "satae", "simple new regularizer", "activation functions", "least", "region", "regularizer", "activations", "saturated region", "corresponding activation function", "saturation regularizer" ]
https://openreview.net/pdf?id=yGgjGkkbeFSbt
https://openreview.net/forum?id=yGgjGkkbeFSbt
UNlcNgK7BCN9v
review
1,363,840,020,000
yGgjGkkbeFSbt
[ "everyone" ]
[ "Ross Goroshin" ]
ICLR.cc/2013/conference
2013
review: The revised paper is now available on arXiv.
yGgjGkkbeFSbt
Saturating Auto-Encoder
[ "Ross Goroshin", "Yann LeCun" ]
We introduce a simple new regularizer for auto-encoders whose hidden-unit activation functions contain at least one zero-gradient (saturated) region. This regularizer explicitly encourages activations in the saturated region(s) of the corresponding activation function. We call these Saturating Auto-Encoders (SATAE). We show that the saturation regularizer explicitly limits the SATAE's ability to reconstruct inputs which are not near the data manifold. Furthermore, we show that a wide variety of features can be learned when different activation functions are used. Finally, connections are established with the Contractive and Sparse Auto-Encoders.
[ "satae", "simple new regularizer", "activation functions", "least", "region", "regularizer", "activations", "saturated region", "corresponding activation function", "saturation regularizer" ]
https://openreview.net/pdf?id=yGgjGkkbeFSbt
https://openreview.net/forum?id=yGgjGkkbeFSbt
zOUdY11jd_zJr
review
1,362,593,760,000
yGgjGkkbeFSbt
[ "everyone" ]
[ "anonymous reviewer 5bc2" ]
ICLR.cc/2013/conference
2013
title: review of Saturating Auto-Encoder review: Although this paper proposes an original (yet trivial) approach to regularizing auto-encoders, it does not bring sufficient insight as to why saturating the hidden units should yield a better representation. The authors do not elaborate on whether the SATAE is a more general principle than previously proposed regularized auto-encoders (which imply saturation as a collateral effect) or just another auto-encoder in an already well-crowded space of models (i.e., auto-encoders and their variants). In recent years, many different types of auto-encoders have been proposed, most of them with little or no theory to justify the need for their existence, and despite all the efforts engaged by some to create a viable theoretical framework (geometric or probabilistic), it seems that the effectiveness of auto-encoders in building representations has more to do with a lucky parametrisation or yet another regularization trick. I feel the authors should motivate their approach with some intuition about why I should saturate my auto-encoders when I can denoise my input, sparsify my latent variables or contract the space. It's worrisome that much of the research done on auto-encoders has focused on coming up with the right regularization/parametrisation that would yield the best 'filters'. Following this path will ultimately make the majority of people reluctant to use auto-encoders because of their wide variety and the little knowledge about when to use what. The auto-encoder community should backtrack and clear the intuitive/theoretical noise left behind, rather than racing for the next new model.
yGgjGkkbeFSbt
Saturating Auto-Encoder
[ "Ross Goroshin", "Yann LeCun" ]
We introduce a simple new regularizer for auto-encoders whose hidden-unit activation functions contain at least one zero-gradient (saturated) region. This regularizer explicitly encourages activations in the saturated region(s) of the corresponding activation function. We call these Saturating Auto-Encoders (SATAE). We show that the saturation regularizer explicitly limits the SATAE's ability to reconstruct inputs which are not near the data manifold. Furthermore, we show that a wide variety of features can be learned when different activation functions are used. Finally, connections are established with the Contractive and Sparse Auto-Encoders.
[ "satae", "simple new regularizer", "activation functions", "least", "region", "regularizer", "activations", "saturated region", "corresponding activation function", "saturation regularizer" ]
https://openreview.net/pdf?id=yGgjGkkbeFSbt
https://openreview.net/forum?id=yGgjGkkbeFSbt
NNd3mgfs39NaH
review
1,362,361,200,000
yGgjGkkbeFSbt
[ "everyone" ]
[ "anonymous reviewer 5955" ]
ICLR.cc/2013/conference
2013
title: review of Saturating Auto-Encoder review: This paper proposes a novel kind of penalty for regularizing autoencoder training that encourages activations to move towards flat (saturated) regions of the unit's activation function. It is related to sparse autoencoders and contractive autoencoders, which also happen to encourage saturation. But the proposed approach does so more directly and explicitly, through a 'complementary nonlinearity' that depends on the specific activation function chosen. Pros: + a novel and original regularization principle for autoencoders that relates to earlier approaches, but is, from a certain perspective, more general (at least for a specific subclass of activation functions). + the paper yields significant insight into the mechanism at work in such regularized autoencoders, also clearly relating it to sparsity and contractive penalties. + provides a credible path of explanation for the dramatic effect that the choice of different saturating activation functions has on the learned filters, and qualitatively shows it. Cons: - The proposed regularization principle, as currently defined, only seems to make sense for activation functions that are piecewise linear and have some perfectly flat regions (e.g. a sigmoid activation would yield no penalty!). This should be discussed. - There is no quantitative measure of the usefulness of the representation learned with this principle. The usual comparison of classification or denoising performance based on the learned features with that obtained with other autoencoder regularization principles would be a most welcome addition.
yGgjGkkbeFSbt
Saturating Auto-Encoder
[ "Ross Goroshin", "Yann LeCun" ]
We introduce a simple new regularizer for auto-encoders whose hidden-unit activation functions contain at least one zero-gradient (saturated) region. This regularizer explicitly encourages activations in the saturated region(s) of the corresponding activation function. We call these Saturating Auto-Encoders (SATAE). We show that the saturation regularizer explicitly limits the SATAE's ability to reconstruct inputs which are not near the data manifold. Furthermore, we show that a wide variety of features can be learned when different activation functions are used. Finally, connections are established with the Contractive and Sparse Auto-Encoders.
[ "satae", "simple new regularizer", "activation functions", "least", "region", "regularizer", "activations", "saturated region", "corresponding activation function", "saturation regularizer" ]
https://openreview.net/pdf?id=yGgjGkkbeFSbt
https://openreview.net/forum?id=yGgjGkkbeFSbt
MAHULigTUZMSF
comment
1,363,043,520,000
pn6HDOWYfCDYA
[ "everyone" ]
[ "Sixin Zhang" ]
ICLR.cc/2013/conference
2013
reply: The 'complementary nonlinearity' is very interesting; it makes me think of wavelets and the transforming autoencoder. One question I was asking is how to make use of the information that's 'thrown' away (say after applying the nonlinearity, or the low-pass filter), or maybe that information is just noise? In the saturating AE, the complementary nonlinearity is the residue of the projection (formula 1). What's that projective space? Why is the projection defined elementwise (cf. softmax -> simplex)? How far can the non-linearity be generalized for general signal representation (say Scattering Convolution Networks) and classification? I am just curious ~
yGgjGkkbeFSbt
Saturating Auto-Encoder
[ "Ross Goroshin", "Yann LeCun" ]
We introduce a simple new regularizer for auto-encoders whose hidden-unit activation functions contain at least one zero-gradient (saturated) region. This regularizer explicitly encourages activations in the saturated region(s) of the corresponding activation function. We call these Saturating Auto-Encoders (SATAE). We show that the saturation regularizer explicitly limits the SATAE's ability to reconstruct inputs which are not near the data manifold. Furthermore, we show that a wide variety of features can be learned when different activation functions are used. Finally, connections are established with the Contractive and Sparse Auto-Encoders.
[ "satae", "simple new regularizer", "activation functions", "least", "region", "regularizer", "activations", "saturated region", "corresponding activation function", "saturation regularizer" ]
https://openreview.net/pdf?id=yGgjGkkbeFSbt
https://openreview.net/forum?id=yGgjGkkbeFSbt
pn6HDOWYfCDYA
review
1,362,779,100,000
yGgjGkkbeFSbt
[ "everyone" ]
[ "Rostislav Goroshin, Yann LeCun" ]
ICLR.cc/2013/conference
2013
review: In response to 5bc2: the principle behind SATAE is a unification of the principles behind sparse autoencoders (and sparse coding in general) and contracting autoencoders. Basically, the main question with unsupervised learning is how to learn a contrast function (energy function in the energy-based framework, negative log likelihood in the probabilistic framework) that takes low values on the data manifold (or near it) and higher values everywhere else. It's easy to make the energy low near data points. The hard part is making it higher everywhere else. There are basically 5 major classes of methods to do so: 1. bound the volume of stuff that can have low energy (e.g. normalized probabilistic models, K-means, PCA); 2. use a regularizer so that the volume of stuff that has low energy is as small as possible (sparse coding, contracting AE, saturating AE); 3. explicitly push up on the energy of selected points, preferably outside the data manifold, often nearby (MC and MCMC methods, contrastive divergence); 4. build local minima of the energy around data points by making the gradient small and the hessian large (score matching); 5. learn the vector field of the gradient of the energy (instead of the energy itself) so that it points away from the data manifold (denoising autoencoder). SATAE, just like contracting AE and sparse modeling, falls in category 2. Basically, if your auto-encoding function is G(X,W), X being the input and W the trainable parameters, and your unregularized energy function is E(X,W) = ||X - G(X,W)||^2, then if G is constant when X varies along a particular direction, the energy will grow quadratically along that direction (technically, G doesn't need to be constant, but merely to have a gradient smaller than one). The more directions along which G(X,W) has a low gradient, the smaller the volume of stuff with low energy. One advantage of SATAE is its extreme simplicity. You could see it as a version of the Contracting AE cut down to its bare bones. We can always obfuscate this simple principle with complicated math, but how would that help? At some point it will become necessary to make more precise theoretical statements, but for now we are merely searching for basic principles.
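For concreteness, here is the one-line version of the category-2 argument, under the stated assumption that the auto-encoding function is exactly constant along a direction v (a sketch of the reply above, not a quote from the paper):

    E(X+tv, W) = \| X + tv - G(X+tv, W) \|^2 = \| (X - G(X,W)) + tv \|^2 \ge \big( t\,\|v\| - \|X - G(X,W)\| \big)^2 ,

so the energy grows quadratically in t along any direction that G ignores; if G merely has a directional gradient smaller than one along v, the same conclusion holds with a smaller quadratic coefficient.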
yGgjGkkbeFSbt
Saturating Auto-Encoder
[ "Ross Goroshin", "Yann LeCun" ]
We introduce a simple new regularizer for auto-encoders whose hidden-unit activation functions contain at least one zero-gradient (saturated) region. This regularizer explicitly encourages activations in the saturated region(s) of the corresponding activation function. We call these Saturating Auto-Encoders (SATAE). We show that the saturation regularizer explicitly limits the SATAE's ability to reconstruct inputs which are not near the data manifold. Furthermore, we show that a wide variety of features can be learned when different activation functions are used. Finally, connections are established with the Contractive and Sparse Auto-Encoders.
[ "satae", "simple new regularizer", "activation functions", "least", "region", "regularizer", "activations", "saturated region", "corresponding activation function", "saturation regularizer" ]
https://openreview.net/pdf?id=yGgjGkkbeFSbt
https://openreview.net/forum?id=yGgjGkkbeFSbt
BSYbBsx9_5Suw
review
1,361,946,900,000
yGgjGkkbeFSbt
[ "everyone" ]
[ "anonymous reviewer 3942" ]
ICLR.cc/2013/conference
2013
title: review of Saturating Auto-Encoder review: This paper proposes a regularizer for auto-encoders with nonlinearities that have a region with zero gradient. The paper mentions three nonlinearities that fit into that category: shrinkage, saturated linear, rectified linear. The regularizer basically penalizes how much the activation deviates from saturation. The insight is that at saturation, the unit conveys less information compared to when it is in a non-saturated region. While I generally like the paper, I think it could be made a lot stronger by having more experimental results showing the practical benefits of the nonlinearities and their associated regularizers. I am particularly interested in the case of the saturated linear function. It would be interesting to compare the results of the proposed regularizer and the sparsity penalty. More concretely, f(x) = 1 would incur some loss under the conventional sparsity penalty, whereas the new regularizer incurs none. From the energy conservation point of view, it is not appealing to maintain the neuron at high activation, and the new regularizer does not capture that. But it may be the case that, for a network to generalize, we only need to restrict the neurons to be in the saturation regions. Any numerical comparisons on some classification benchmarks would be helpful. It would also be interesting to test the method on a classification dataset to see if it makes a difference to use the new regularizers.
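To make the f(x) = 1 example concrete, here is a small sketch. We assume a saturated-linear activation that is flat below 0 and above 1, and take the saturation penalty to be the distance of the pre-activation to the nearest flat region; the paper's exact definitions may differ.

    # Hypothetical illustration: a unit pinned at its upper saturation value pays no
    # saturation penalty but still pays the conventional L1 sparsity penalty.
    import numpy as np

    def act(z):                        # saturated-linear activation, flat outside [0, 1]
        return np.clip(z, 0.0, 1.0)

    def saturation_penalty(z):         # distance to the nearest zero-gradient region (our assumption)
        return np.where((z <= 0.0) | (z >= 1.0), 0.0, np.minimum(z, 1.0 - z))

    def sparsity_penalty(z):           # conventional L1 penalty on the activation
        return np.abs(act(z))

    for z in [-0.5, 0.25, 0.75, 1.0, 2.0]:
        print(z, float(act(z)), float(saturation_penalty(z)), float(sparsity_penalty(z)))
    # at z = 1.0 or 2.0 the saturation penalty is 0.0 while the sparsity penalty is 1.0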
bSaT4mmQt84Lx
On the number of inference regions of deep feed forward networks with piece-wise linear activations
[ "Razvan Pascanu", "Guido F. Montufar", "Yoshua Bengio" ]
This paper explores the complexity of deep feed forward networks with linear presynaptic couplings and rectified linear activations. This is a contribution to the growing body of work contrasting the representational power of deep and shallow network architectures. In particular, we offer a framework for comparing deep and shallow models that belong to the family of piece-wise linear functions based on computational geometry. We look at a deep (two hidden layers) rectifier multilayer perceptron (MLP) with linear outputs units and compare it with a single layer version of the model. In the asymptotic regime as the number of units goes to infinity, if the shallow model has $2n$ hidden units and $n_0$ inputs, then the number of linear regions is $O(n^{n_0})$. A two layer model with $n$ number of hidden units on each layer has $Omega(n^{n_0})$. We consider this as a first step towards understanding the complexity of these models and argue that better constructions in this framework might provide more accurate comparisons (especially for the interesting case of when the number of hidden layers goes to infinity).
[ "number", "deep feed", "linear activations", "deep", "inference regions", "complexity", "framework", "units", "infinity", "linear presynaptic couplings" ]
https://openreview.net/pdf?id=bSaT4mmQt84Lx
https://openreview.net/forum?id=bSaT4mmQt84Lx
fjB3fGG430jr7
review
1,392,105,960,000
bSaT4mmQt84Lx
[ "everyone" ]
[ "anonymous reviewer 67e9" ]
ICLR.cc/2014/conference
2014
title: review of On the number of inference regions of deep feed forward networks with piece-wise linear activations review: This is a very interesting and relevant paper, attempting to prove that a deep neural net composed of rectified linear units and a linear output layer is potentially significantly more powerful than a single layer net with the same number of units. A strength of the paper is its constructive approach, building up an understanding of the expressiveness of a model in terms of the number of regions it represents. It is also notable that the authors pull in techniques from computational geometry for this construction. But there are several problems with the paper. The writing is unclear, and overall the paper feels like a preliminary draft, not ready for prime time. The introduction can be tightened up. A more significant comment concerns the important attempt to give some intuition, at the top of page 3. The paragraph starting with 'Specifically' doesn't make sense to me. How can all but one hidden unit be 0 for different intervals along the real line? Each hidden unit will be active above (or below) its threshold value, so many positive ones will be active at the same time. We could compose two hidden units to construct one that is only active within an interval in R, but I don't see how to do this in one layer. I must be missing something simple here. I think the basic intuition is that a higher-level unit can act as an OR of several lower level regions, and gain expressivity by repeating its operation on these regions. But the construction is not clear. Also, one would expect that the ability to represent AND operations would also lead to significant expressivity gains. Is this also part of the construction? These basic issues should be clarified. In addition, it would be very helpful to have some concrete example of a function that can be computed by a deep net of the sort analyzed here, which cannot be computed by a shallow net of the same size. As it stands, the characterization of the difference between the deep and equivalent one-layer network is too abstract to be very compelling. I also found the proof of Theorem 8 very hard to understand. This is not a key problem, as the authors do a good job building up to this main theorem in sections 3 and 4. But it does mean that I am not confident that the proof is correct. Finally, I would recommend exploring the relationship between the ideas in this paper and the extensive work in circuit complexity that deals with multi-level circuits, for example the paper by Hajnal et al. on 'Threshold circuits of bounded depth'.
bSaT4mmQt84Lx
On the number of inference regions of deep feed forward networks with piece-wise linear activations
[ "Razvan Pascanu", "Guido F. Montufar", "Yoshua Bengio" ]
This paper explores the complexity of deep feed forward networks with linear presynaptic couplings and rectified linear activations. This is a contribution to the growing body of work contrasting the representational power of deep and shallow network architectures. In particular, we offer a framework for comparing deep and shallow models that belong to the family of piece-wise linear functions based on computational geometry. We look at a deep (two hidden layers) rectifier multilayer perceptron (MLP) with linear outputs units and compare it with a single layer version of the model. In the asymptotic regime as the number of units goes to infinity, if the shallow model has $2n$ hidden units and $n_0$ inputs, then the number of linear regions is $O(n^{n_0})$. A two layer model with $n$ number of hidden units on each layer has $Omega(n^{n_0})$. We consider this as a first step towards understanding the complexity of these models and argue that better constructions in this framework might provide more accurate comparisons (especially for the interesting case of when the number of hidden layers goes to infinity).
[ "number", "deep feed", "linear activations", "deep", "inference regions", "complexity", "framework", "units", "infinity", "linear presynaptic couplings" ]
https://openreview.net/pdf?id=bSaT4mmQt84Lx
https://openreview.net/forum?id=bSaT4mmQt84Lx
8nQMn2_oFgvXP
review
1,390,216,500,000
bSaT4mmQt84Lx
[ "everyone" ]
[ "Razvan Pascanu" ]
ICLR.cc/2014/conference
2014
review: David, thanks a lot for your comments. We will definitely consider them in the next version of the paper. I have to wonder, however, which version of the paper you looked at. Version 2 (which has been online since 6th Jan) does not have an equation 13, while the first draft did. My hope is that version 2, which is online, already answers most of your questions. Note that we have a different construction that changes much of the flow of the paper as well as the final result. Let me quickly summarize some specific answers to your questions: (1) Regarding the scope of the paper. We do not generalize Zaslavsky's Theorem to scenarios that arise in deep models, as you suggest. For a single-layer rectifier MLP it turns out that each hidden unit partitions the space using a hyperplane. This means that the number of regions we have is the number of regions you can get from an arrangement of $n$ hyperplanes. The answer to this question is given by Zaslavsky's Theorem. This offers an upper bound on the maximal number of regions a single-hidden-layer model can have. To make our point (that deep models are more efficient) we now only need to show that there exist deep models that result in more regions. We do so by constructing a specific deep model for which we can compute a lower bound on the number of linear regions it generates. Asymptotically this lower bound is much larger than the maximal number of regions a single-layer model has. (2) 'region'/'region of linearity'/'input space partition region' mean the same thing. It is a connected region of the input space U such that within this region every hidden unit either stays positive or is equal to 0. Or in other words, within this region the MLP is a linear function with respect to its input. (3) Regarding figure 2, we now use a new construction that is depicted in more detail. (4) Propositions 8 and 9 have been replaced completely. Proposition 8 describes a lower bound on how many new regions you can get within a layer, given the number of regions you have generated up to that layer. Proposition 9 uses Proposition 8 to give a lower bound on the total number of regions of a deep model. (5) Regarding the statement about the output activation function. We simply meant to say that it is sufficient to study models with a linear output activation function to get a sense for MLPs with sigmoid or softmax output activations. To see this, do the following. Consider you have some rectifier MLP with sigmoids at the output layer, and some target function $f$. Let $s$ stand for the sigmoid activation function. To get a sense of how well we can approximate $f$, we can look at how well we can approximate $s^{-1}(f)$ with a rectifier MLP that has a linear output activation function.
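As a concrete companion to point (1), here is a small numerical sketch (our own, not from the paper): each hidden rectifier of a single-hidden-layer model is active on one side of a hyperplane, so the linear regions are the cells of a hyperplane arrangement and their number is bounded by the Zaslavsky count sum_{i=0}^{n_0} C(n, i).

    # Count distinct rectifier activation patterns of a random one-hidden-layer MLP
    # on a 2D grid and compare with the Zaslavsky bound for n hyperplanes in R^{n0}.
    import numpy as np
    from math import comb

    rng = np.random.default_rng(0)
    n0, n = 2, 6                               # input dimension, number of hidden units
    W = rng.standard_normal((n, n0))           # generic weights -> hyperplanes in general position
    b = rng.standard_normal(n)

    xs = np.linspace(-4.0, 4.0, 500)
    grid = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, n0)
    patterns = grid @ W.T + b > 0.0            # which units are active at each grid point
    regions_seen = len(set(map(tuple, patterns)))

    bound = sum(comb(n, i) for i in range(n0 + 1))   # 1 + 6 + 15 = 22 for n0 = 2, n = 6
    print(regions_seen, "regions found on the grid; Zaslavsky bound =", bound)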
bSaT4mmQt84Lx
On the number of inference regions of deep feed forward networks with piece-wise linear activations
[ "Razvan Pascanu", "Guido F. Montufar", "Yoshua Bengio" ]
This paper explores the complexity of deep feed forward networks with linear presynaptic couplings and rectified linear activations. This is a contribution to the growing body of work contrasting the representational power of deep and shallow network architectures. In particular, we offer a framework for comparing deep and shallow models that belong to the family of piece-wise linear functions based on computational geometry. We look at a deep (two hidden layers) rectifier multilayer perceptron (MLP) with linear outputs units and compare it with a single layer version of the model. In the asymptotic regime as the number of units goes to infinity, if the shallow model has $2n$ hidden units and $n_0$ inputs, then the number of linear regions is $O(n^{n_0})$. A two layer model with $n$ number of hidden units on each layer has $Omega(n^{n_0})$. We consider this as a first step towards understanding the complexity of these models and argue that better constructions in this framework might provide more accurate comparisons (especially for the interesting case of when the number of hidden layers goes to infinity).
[ "number", "deep feed", "linear activations", "deep", "inference regions", "complexity", "framework", "units", "infinity", "linear presynaptic couplings" ]
https://openreview.net/pdf?id=bSaT4mmQt84Lx
https://openreview.net/forum?id=bSaT4mmQt84Lx
GqWRGvurSDqeX
review
1,390,529,820,000
bSaT4mmQt84Lx
[ "everyone" ]
[ "Razvan Pascanu" ]
ICLR.cc/2014/conference
2014
review: Your point is well taken and we are well aware of it; it motivates current work that would provide a stronger bound. Within that new construction, even for reasonably sized models like 3 layers with only 2 n_0 units on each layer, we still get more regions in the deep model than in the shallow one. In that new theorem we count every region of the deep model. In the proof presented in the paper, we only look at a lower bound on the maximal number of regions for the deep model, a bound that is not necessarily tight. The reason is that we only count certain regions and ignore the rest. Unfortunately, counting all regions for the deep model construction given in the paper is difficult. The second comment we would like to make is that we recover a high ratio of n to n_0 if the data live near a low-dimensional manifold (effectively like reducing the input size n_0). One-layer models can reach the upper bound on the number of regions only by spanning all the dimensions of the input. That is, for any subspace of the input we cannot concentrate most of the regions in that subspace. If, as commonly assumed, data live near a lower dimensional manifold, then we care only about the number of regions we can get in the directions of that manifold. One way of thinking about it is to do a PCA on your data. You will have a lot of directions (say on MNIST) where few variations are seen in the data (and if you see some, they are mostly due to noise that you want to ignore). In such a situation you care about how many regions you have within the directions in which the data do change. In such situations n >> n_0, making the proof in the paper more relevant. Lastly, we would argue that the paper makes an important contribution: although it addresses the asymptotic case, it covers neural network architectures (with rectifiers) that were not previously covered. The paper shows an important representational advantage of deep models versus shallow ones, at least asymptotically. This kind of result is important to motivate the use of deep models. Although there are previous results with the same objective, they did not deal with the kind of commonly used non-linearity (rectified linear units) that we are able to cover here.
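To give a feel for the numbers behind this claim, one can plug a small case into the stated expressions, reading them at face value (they are asymptotic bounds, so constant factors are being ignored; this is only meant to illustrate the scaling). The deep count below uses the 2^{n_0(k-1)} n^{n_0} expression the authors quote for the n = 2 n_0 construction, and the shallow count uses the hyperplane-arrangement bound for the same total number of hidden units:

    n_0 = 2,\; k = 3 \text{ layers of } n = 2 n_0 = 4 \text{ units:}\quad 2^{n_0(k-1)}\, n^{n_0} = 2^{4} \cdot 4^{2} = 256 \text{ regions,}
    \text{shallow net with } k n = 12 \text{ hidden units in } \mathbb{R}^{2}:\quad \sum_{i=0}^{2} \binom{12}{i} = 1 + 12 + 66 = 79 \text{ regions at most.}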
bSaT4mmQt84Lx
On the number of inference regions of deep feed forward networks with piece-wise linear activations
[ "Razvan Pascanu", "Guido F. Montufar", "Yoshua Bengio" ]
This paper explores the complexity of deep feed forward networks with linear presynaptic couplings and rectified linear activations. This is a contribution to the growing body of work contrasting the representational power of deep and shallow network architectures. In particular, we offer a framework for comparing deep and shallow models that belong to the family of piece-wise linear functions based on computational geometry. We look at a deep (two hidden layers) rectifier multilayer perceptron (MLP) with linear outputs units and compare it with a single layer version of the model. In the asymptotic regime as the number of units goes to infinity, if the shallow model has $2n$ hidden units and $n_0$ inputs, then the number of linear regions is $O(n^{n_0})$. A two layer model with $n$ number of hidden units on each layer has $Omega(n^{n_0})$. We consider this as a first step towards understanding the complexity of these models and argue that better constructions in this framework might provide more accurate comparisons (especially for the interesting case of when the number of hidden layers goes to infinity).
[ "number", "deep feed", "linear activations", "deep", "inference regions", "complexity", "framework", "units", "infinity", "linear presynaptic couplings" ]
https://openreview.net/pdf?id=bSaT4mmQt84Lx
https://openreview.net/forum?id=bSaT4mmQt84Lx
c_ej7ww5zf_BP
comment
1,392,071,280,000
n6U96C27ST6iZ
[ "everyone" ]
[ "Razvan Pascanu" ]
ICLR.cc/2014/conference
2014
reply: Thank you for your comments, Reviewer 355b. Regarding the number of parameters, indeed it is not clear what a fair measure of capacity is for these models. We can easily show that our results hold even when we enforce the number of parameters to be the same. We've added a note about this in the discussion. Specifically, to do so, we can look at how fast the ratio between the number of regions and the number of parameters grows for deep versus shallow models. We can see that in this situation we have something of the form $Omega((n/n_0)^{k-1} n^{n_0-2})$ for deep models and $O(k^{n_0} n^{n_0-1})$ for a shallow model. The ratio still grows exponentially faster with $k$ for deep models and polynomially with $n$ when $n_0$ is fixed. Furthermore, as a comment to your question, we argue that the number of linear regions is a property that correlates with the representational power of the model, while the number of units (or parameters) is a measure of capacity. Our paper attempts to show that for the same capacity deep models can be more expressive. Regarding the generalization power of a deep model (versus a shallow one), this can be a tricky question to answer. Issues with the learning process only make the answer even more complicated. Addressing all of these things together, while it is very important for deep learning, is much more than what our paper is trying to do. For now our analysis is limited to the representational power of a deep model, which is different from its generalization power. The question we are trying to answer is the following: 'Consider the family of all possible shallow models of a certain size (capacity) and pick from this family of functions the one that is closest to some function $f$. Do the same for the family of deep models of the same capacity. Which of these two picked models does a better job at approximating $f$?' Of course, if we do not bound the capacity within each family, the answer would be that both approximate $f$ arbitrarily well, and then we cannot distinguish between them. This is true because we know both are universal approximators. Generalization, for piece-wise linear deep models, comes from the fact that while we can have more linear regions, these linear regions are restricted in some sense. Deep models rely on symmetry, repeating the partition done by some higher layer in different regions of the input. That means that the functions they can represent efficiently are highly structured, even though they look arbitrarily complicated. In other words, deep models can approximate much better than shallow models only certain families of functions (those that have a particular structure). If we try to approximate white noise, because it has no structure, deep models will not fare better than shallow ones. Formally analyzing this idea is a very interesting direction that we are considering for future work. And as a final note, we have submitted a new version of the paper to arXiv. It will be available starting Tue, 11 Feb 2014 01:00:00 GMT.
Let us quickly summarize the main changes. We have added section 4.1, which is a construction that only works for $n=2n_0$. In this construction we pair the hidden units at each layer, except the last one, and make each pair behave as the absolute value of some coordinate. This maps each quadrant of the layer's input space onto the first quadrant, where the layer's input space is the image of the input through all the layers below. However we partition the first quadrant with the higher layers, this partition will be repeated in all the other quadrants, resulting in $Omega(2^{n_0(k-1)} n^{n_0})$ linear regions. This bound shows that deep models can be more efficient than shallow ones even for a small number of layers (like 2). We have added a paragraph (paragraph 2 in the conclusion) that talks about forcing the models to have the same number of parameters rather than the same number of units. Additional changes from the last submission (Mon 27 Jan): We have added a paragraph (paragraph 3) talking about how $n_0$ is affected by the observation that, in real tasks, data might live near a manifold of much lower dimension. We have added two paragraphs in the introduction (paragraphs 3 and 4 on page 2) motivating our chosen measure of flexibility, namely the number of linear regions.
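A toy numerical sketch of the pairing trick (our own illustration; the construction in the paper may choose different weights): relu(x) + relu(-x) = |x|, so one pair of rectifiers per coordinate, summed by the next linear map, folds every quadrant of a layer's input space onto the first quadrant, and whatever partition the layers above create there is replicated in every quadrant below.

    # relu(x) + relu(-x) = |x|: a rectifier layer built from such pairs maps all
    # 2^{n0} quadrants of its input space onto the first quadrant.
    import numpy as np

    def relu(z):
        return np.maximum(z, 0.0)

    n0 = 2
    x = np.array([[ 1.5, -0.7],
                  [-1.5,  0.7],
                  [-1.5, -0.7]])                         # points in three different quadrants

    # One pair of rectifiers per coordinate: input weights +e_i and -e_i, summed afterwards
    # (the summation would be absorbed into the next layer's weights in the construction).
    W = np.kron(np.eye(n0), np.array([[1.0], [-1.0]]))   # shape (2*n0, n0)
    P = np.kron(np.eye(n0), np.array([[1.0, 1.0]]))      # pairwise sum, shape (n0, 2*n0)

    folded = relu(x @ W.T) @ P.T
    print(folded)   # every row equals [1.5, 0.7]: all three quadrants map to the same point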
bSaT4mmQt84Lx
On the number of inference regions of deep feed forward networks with piece-wise linear activations
[ "Razvan Pascanu", "Guido F. Montufar", "Yoshua Bengio" ]
This paper explores the complexity of deep feed forward networks with linear presynaptic couplings and rectified linear activations. This is a contribution to the growing body of work contrasting the representational power of deep and shallow network architectures. In particular, we offer a framework for comparing deep and shallow models that belong to the family of piece-wise linear functions based on computational geometry. We look at a deep (two hidden layers) rectifier multilayer perceptron (MLP) with linear outputs units and compare it with a single layer version of the model. In the asymptotic regime as the number of units goes to infinity, if the shallow model has $2n$ hidden units and $n_0$ inputs, then the number of linear regions is $O(n^{n_0})$. A two layer model with $n$ number of hidden units on each layer has $Omega(n^{n_0})$. We consider this as a first step towards understanding the complexity of these models and argue that better constructions in this framework might provide more accurate comparisons (especially for the interesting case of when the number of hidden layers goes to infinity).
[ "number", "deep feed", "linear activations", "deep", "inference regions", "complexity", "framework", "units", "infinity", "linear presynaptic couplings" ]
https://openreview.net/pdf?id=bSaT4mmQt84Lx
https://openreview.net/forum?id=bSaT4mmQt84Lx
dl7nPGGwxql6h
review
1,389,921,060,000
bSaT4mmQt84Lx
[ "everyone" ]
[ "David Krueger" ]
ICLR.cc/2014/conference
2014
review: I like the topic of this paper, but I found it very difficult to read. Although I don't have many specific suggestions along these lines, I feel like it could probably be made much easier to understand if the proofs were introduced with simpler English explanations of how the proofs work. It seems like the main issue is generally showing how Zaslavsky's Theorem can be applied to specific scenarios that arise using deep networks. I think you should define what you mean by a 'region' or 'region of linearity' or 'input space partition region' and use one term consistently (I think these are all the same thing?). In figure 2, I would add plots of what lower levels look like, so we can see how the final plot emerges. I don't follow the proofs of Proposition 8 or 9. In proposition 8, I think the first summation should be up to n_1 and summing n_2 choose k? I would move the definitions of big-O, Omega and Theta notation to the preliminaries. You need to add Zaslavsky to the citations. I think equation (13) has a typo. Should 2n_0 be 2^{n_0}? I don't follow the part about how linearity is not a restriction in the Discussion; in fact, I'm not sure exactly what is meant by that statement. Little edits: proof of lemma 4: 'the regions of linear behavior [...] ARE'; below equation (3): 'the first layer has TWO operational modes'; later in that paragraph: 'gradient equal to vr_i' should just be r_i, I think. Proposition 6: your proof sketch only demonstrates the formula for r(A), not b(A); you should say that. Section 4: you define but never use b(n_0,n_1). You don't specify, but I assume that r and u mentioned below are the r(n_0,n_1), u(n_0,n_1) you define. I would use n = n_1 and d = n_0 in this definition, for clarity. Discussion: say 'e.g.' or 'for example', but not 'for e.g.'
bSaT4mmQt84Lx
On the number of inference regions of deep feed forward networks with piece-wise linear activations
[ "Razvan Pascanu", "Guido F. Montufar", "Yoshua Bengio" ]
This paper explores the complexity of deep feed forward networks with linear presynaptic couplings and rectified linear activations. This is a contribution to the growing body of work contrasting the representational power of deep and shallow network architectures. In particular, we offer a framework for comparing deep and shallow models that belong to the family of piece-wise linear functions based on computational geometry. We look at a deep (two hidden layers) rectifier multilayer perceptron (MLP) with linear outputs units and compare it with a single layer version of the model. In the asymptotic regime as the number of units goes to infinity, if the shallow model has $2n$ hidden units and $n_0$ inputs, then the number of linear regions is $O(n^{n_0})$. A two layer model with $n$ number of hidden units on each layer has $Omega(n^{n_0})$. We consider this as a first step towards understanding the complexity of these models and argue that better constructions in this framework might provide more accurate comparisons (especially for the interesting case of when the number of hidden layers goes to infinity).
[ "number", "deep feed", "linear activations", "deep", "inference regions", "complexity", "framework", "units", "infinity", "linear presynaptic couplings" ]
https://openreview.net/pdf?id=bSaT4mmQt84Lx
https://openreview.net/forum?id=bSaT4mmQt84Lx
n6U96C27ST6iZ
review
1,391,861,220,000
bSaT4mmQt84Lx
[ "everyone" ]
[ "anonymous reviewer 355b" ]
ICLR.cc/2014/conference
2014
title: review of On the number of inference regions of deep feed forward networks with piece-wise linear activations review: This paper studies the representational power of deep nets with piecewise linear activation functions. The main idea is to show that a deep net with the same number of units can represent (in terms of the linear regions it generates) more complex mappings than the shallow model. This is an interesting paper: it leverages known results on arrangements of hyperplanes in space and then cleverly shows how those can be used to characterize the linear regions that can be produced by multiple layers. While the theoretical results seem right, I could not help but wonder whether the comparison is 'fair': using the same number of units does not necessarily imply the deep net is 'restricted' in any way -- in fact, the deep net has more parameters than the shallow model. Is that not enough to argue (at least qualitatively) that the deep net must have more representational power than the shallow model? (Of course, the value of the analysis is to show more precisely how many regions there are.) Additionally, the learning process does not necessarily mean that the deep net indeed constructs that many regions --- thus, purely comparing the number of regions is unlikely to explain whether the model generalizes better than the shallow model or how it is prevented from overfitting. Nonetheless, the paper presents a novel direction to pursue that should instigate further research.
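A rough, weights-only parameter count behind that observation (our own back-of-the-envelope figures, ignoring biases and the output layer): a deep net with k hidden layers of width n and input size n_0, versus a shallow net with the same k n hidden units, has

    \text{deep: } n\, n_0 + (k-1)\, n^{2} \text{ weights,} \qquad \text{shallow: } k\, n\, n_0 \text{ weights,}

so for n >> n_0 the deep net carries roughly n/n_0 times more parameters at equal unit count, which is one reason a comparison at equal parameter count is also informative.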
bSaT4mmQt84Lx
On the number of inference regions of deep feed forward networks with piece-wise linear activations
[ "Razvan Pascanu", "Guido F. Montufar", "Yoshua Bengio" ]
This paper explores the complexity of deep feed forward networks with linear presynaptic couplings and rectified linear activations. This is a contribution to the growing body of work contrasting the representational power of deep and shallow network architectures. In particular, we offer a framework for comparing deep and shallow models that belong to the family of piece-wise linear functions based on computational geometry. We look at a deep (two hidden layers) rectifier multilayer perceptron (MLP) with linear outputs units and compare it with a single layer version of the model. In the asymptotic regime as the number of units goes to infinity, if the shallow model has $2n$ hidden units and $n_0$ inputs, then the number of linear regions is $O(n^{n_0})$. A two layer model with $n$ number of hidden units on each layer has $Omega(n^{n_0})$. We consider this as a first step towards understanding the complexity of these models and argue that better constructions in this framework might provide more accurate comparisons (especially for the interesting case of when the number of hidden layers goes to infinity).
[ "number", "deep feed", "linear activations", "deep", "inference regions", "complexity", "framework", "units", "infinity", "linear presynaptic couplings" ]
https://openreview.net/pdf?id=bSaT4mmQt84Lx
https://openreview.net/forum?id=bSaT4mmQt84Lx
qVWJL1t42vKik
review
1,390,448,460,000
bSaT4mmQt84Lx
[ "everyone" ]
[ "Alexandre Dalyac" ]
ICLR.cc/2014/conference
2014
review: Sorry if I'm wrong, but it seems to me that typically k is never large, and that n_0 is only ever large when n is also large, such that n/n_0 = O(1). In those circumstances, is there a significant difference in the number of linear regions between the two architectures?
bSaT4mmQt84Lx
On the number of inference regions of deep feed forward networks with piece-wise linear activations
[ "Razvan Pascanu", "Guido F. Montufar", "Yoshua Bengio" ]
This paper explores the complexity of deep feed forward networks with linear presynaptic couplings and rectified linear activations. This is a contribution to the growing body of work contrasting the representational power of deep and shallow network architectures. In particular, we offer a framework for comparing deep and shallow models that belong to the family of piece-wise linear functions based on computational geometry. We look at a deep (two hidden layers) rectifier multilayer perceptron (MLP) with linear outputs units and compare it with a single layer version of the model. In the asymptotic regime as the number of units goes to infinity, if the shallow model has $2n$ hidden units and $n_0$ inputs, then the number of linear regions is $O(n^{n_0})$. A two layer model with $n$ number of hidden units on each layer has $Omega(n^{n_0})$. We consider this as a first step towards understanding the complexity of these models and argue that better constructions in this framework might provide more accurate comparisons (especially for the interesting case of when the number of hidden layers goes to infinity).
[ "number", "deep feed", "linear activations", "deep", "inference regions", "complexity", "framework", "units", "infinity", "linear presynaptic couplings" ]
https://openreview.net/pdf?id=bSaT4mmQt84Lx
https://openreview.net/forum?id=bSaT4mmQt84Lx
s5xX4vEkuAnIR
review
1,392,137,460,000
bSaT4mmQt84Lx
[ "everyone" ]
[ "anonymous reviewer 2699" ]
ICLR.cc/2014/conference
2014
title: review of On the number of inference regions of deep feed forward networks with piece-wise linear activations review: The authors of this paper analyse feed-forward networks of linear rectifier units (RELUs) in terms of the number of regions in which they act linearly. They give an upper bound on the number of regions for networks with a single hidden layer based on known results in geometry, and then show how deeper networks can have a much larger number of regions by constructing examples. The constructions form the main novel technical contribution, and they seem non-trivial and interesting. Overall I think this is a good and interesting paper. It is well written with the notable exception of the proof of theorem 8 and the latter half of the introduction. In most spots, the math is precise and accessible (to me anyway), the results nicely broken into lemmas, and the diagrams are very useful for providing intuition. These results can be interpreted as separating networks with a single hidden layer from deep networks in terms of the types of functions they can efficiently compute. However, the number of linear regions is a pretty abstract notion, and it isn't obvious what these results can say about the expressibility by neural nets of functions that we can actually write down. Do you know of any natural examples of functions that require a finite but super-exponential number of regions? Unfortunately, region counting can't say anything about the representability of functions defined on input spaces of the form S^n0 where S is a finite set, since there are only |S|^n0 input values, and |S|^n0 << n^n0 = region upper bound. About Theorem 8: After hours of trying to understand the proof of Theorem 8 I gave up. However, I was able to use Prop 7, and intuition provided by the diagrams, to prove a slightly different version of Thm 8 myself, and so I think the result is correct, and the proof is probably trying to describe basically the same thing I came up with (except my proof went from the top layer down, instead of the bottom layer up). So while I don't doubt the correctness of the statement of Thm 8, the write-up of its proof needs to be completely redone to be understandable and intuitive. I don't think you need to make it 100% formal (Prop 7 isn't completely formal either, but it's fine as is), but you need to make it possible to understand with a reasonable amount of effort. Detailed comments: --- Title: Please pick a different title. These are feed-forward networks so calling the regions 'inference regions' doesn't make sense. Abs: Why is it 'computational geometry' and not just 'geometry'? What is specifically computational about arrangements of hyperplanes? Page 2: Missing from the review of previous results about the power of networks is all of the work done on threshold units (see the papers of Wolfgang Maass for example, or the seminal paper of Hajnal et al. proving lower bounds for shallow threshold networks). Unlike the single paper by Hastad et al. cited, none of these require the weights to be non-negative. Moreover, these results are hardly unrealistic, as neural networks with sigmoids can easily simulate thresholds, and under certain assumptions the reverse simulation can be done approximately and reasonably efficiently too. Also missing from this review is recent work of Montufar et al. and Martens et al. analysing the expressive power of generative models (RBMs). Beginning of page 3: I have a hard time following this high-level discussion. 
I think this doesn't belong in the introduction, as it is too long and convoluted. Instead, I think you should include such discussion as intuition about your formal constructions *as you give them*. The way it is written right now, the discussion tries to be intuitive, precise, and comprehensive, and it doesn't really succeed at being any of these. Page 3: You should formally define what you mean by 'hyperplane' and 'arrangement'. In particular, a hyperplane is the set of points defined by the equation, not the equation itself. And if an arrangement is taken to be a set of hyperplanes (as per the usual definition), then the statement in Prop 6 isn't formal (although its meaning is still obvious). In particular, how does a ball S 'intersect' with a set of hyperplanes? Do you mean that it intersects with the union of the hyperplanes in the arrangement? I know these are nit-picky points, but you should try to be technically precise. Page 5: There is a missing reference here: 'Zaslavsky's theorem (?, Theorem A)' Page 5: You should explain the concept of general position precisely. I don't know what 'generic weights' is supposed to mean; the actual definition has to do with a lack of collinearity. You might want to point out that any choice of hyperplanes can be infinitesimally perturbed so that they end up in general position. Page 6: 'Relative position' is never formally defined, and it's not immediately obvious what it means. Page 6: The explanation after the statement of Prop 6 is much clearer. Perhaps you should just prove this (stronger) statement directly, and not the fairly opaque and abstract statement made in Prop 6. Figure 3: Why are there 2 dotted circles instead of just 1? Page 7: 'arrangements 2-dimensional essentialization'? Page 7: '{0,...,n}, a < b' -> '{0,...,n} s.t. a < b' Page 7: What do you mean by 'independent groups of n0 units within the n units of a layer'? How can a unit be 'within' a unit? Page 7: What do you mean by an 'enumeration' of a 2-dimensional arrangement? Page 9: Below the statement of prop 7, it was suggested that the construction would be top down, where each group in a lower layer 'duplicates' the number of regions constructed by the layer above. But the construction given here seems to proceed bottom up... at least in the structure of the proof. This makes it less intuitive. Page 9: The proof starts out very confusingly. I wasn't able to follow the sentence 'Then we find...'. These 'groups' haven't been formally defined at this stage, only informally alluded to. And I have another question: how is Prop 7 actually used here? Merely to establish the existence of a network where there are n/n0 regions that each only turn on for different groups Ii? Isn't it trivial to construct such a thing? i.e. take the input weights to the units in a given group to be all the same, and make sure the square n0xn0 matrix formed by taking the weight vector for each group is full-rank (e.g. the identity matrix)? I suspect the reason this doesn't work is that the dimensions would collapse to 1 since each unit in a group would behave identically. However, one could then perturb the final expanded weight matrix to get a general position matrix, so that the subspace associated with each group wouldn't collapse to dimension 1. Is there anything wrong with this? Page 9: Don't say 'decompose' here. 'Decompose' implies you already have weights and are decomposing them into factors. Instead, what you are doing is defining some weights to be a product of particular matrices. 
You should emphasize that it is a common V shared by all the i. This is easy to miss. Page 9: What does it mean for a linear map to be 'with Ii-coordinates equal to...'? The 'Ii-coordinates' are the inputs to this map? The outputs? Page 9: What is R_i^(1)? It is never defined. Is it different from the version without the superscript? Is it defined implicitly the first time it is used? If so, what is a 'region of activation values'? What is very confusing is that you use x to define this function, when I had the impression that x was for the original inputs only. This isn't consistent with the h notation you use in figure 1 for example. Page 9: You say that something 'passed through' rho^2 is the 'input' to the second layer. This is the input to the rectifier units without their weights (which I'm guessing are contained in rho^2)? Or is it these units with the 'V' factor part of their weights only, but not the 'U' part? This is all extremely confusing. Without careful definitions of your notation it requires a lot more work on the part of the reader to understand what is going on. Page 9: How can rho_i^(2)(R_i^(1)) be a 'subset' of a subspace that lives in R^(n1)? rho_i^(2)(R_i^(1)) is going to be a set of vectors living in R^(n0)! Page 10: What do you mean when you say that 'this arrangement is repeated once in each region'? What does it mean for an arrangement to be 'repeated in a region'? I feel like the proof becomes mostly a proof-by-diagram at this point. Maybe you should have started off with this kind of diagram and the intuition of 'duplicating regions', explaining how composing piece-wise linear functions can achieve this kind of thing (which is really the critical point that gets glossed over), and then proceeded to show that you could formally construct each 'piece' required to do this. And you should have done the construction starting at the top layer going down. Having reconstructed the proof in a way that I actually understood it, it seemed that one could also prove that one can obtain (prod_{i=1}^{k-1} n_i)/2^{k-1} * sum_{i=0}^{2} (n_k choose i) regions, which in some cases might be a lot larger than the expression you arrived at. Unlike your Thm 8, this version would actually need to use the fact that the constructions in Prop 7 are 2-dimensional. Page 11: The asymptotic analysis consists of very routine and uninteresting computations and should be in the appendix. It breaks the flow of your paper. I would much prefer to see more detailed commentary about the implications of Thm 8.
bSaT4mmQt84Lx
On the number of inference regions of deep feed forward networks with piece-wise linear activations
[ "Razvan Pascanu", "Guido F. Montufar", "Yoshua Bengio" ]
This paper explores the complexity of deep feed forward networks with linear presynaptic couplings and rectified linear activations. This is a contribution to the growing body of work contrasting the representational power of deep and shallow network architectures. In particular, we offer a framework for comparing deep and shallow models that belong to the family of piece-wise linear functions based on computational geometry. We look at a deep (two hidden layers) rectifier multilayer perceptron (MLP) with linear output units and compare it with a single layer version of the model. In the asymptotic regime, as the number of units goes to infinity, if the shallow model has $2n$ hidden units and $n_0$ inputs, then the number of linear regions is $O(n^{n_0})$. A two layer model with $n$ hidden units on each layer has $\Omega(n^{n_0})$ linear regions. We consider this as a first step towards understanding the complexity of these models and argue that better constructions in this framework might provide more accurate comparisons (especially for the interesting case of when the number of hidden layers goes to infinity).
[ "number", "deep feed", "linear activations", "deep", "inference regions", "complexity", "framework", "units", "infinity", "linear presynaptic couplings" ]
https://openreview.net/pdf?id=bSaT4mmQt84Lx
https://openreview.net/forum?id=bSaT4mmQt84Lx
zhkyU33YqKzMj
review
1,390,216,500,000
bSaT4mmQt84Lx
[ "everyone" ]
[ "Razvan Pascanu" ]
ICLR.cc/2014/conference
2014
review: David, thanks a lot for your comments. We will definitely consider them in the next version of the paper. I have to wonder, however, which version of the paper you looked at. Version 2 (which has been online since 6th Jan) does not have an equation 13, while the first draft did. My hope is that version 2, which is online, already answers most of your questions. Note that we have a different construction that changes much of the flow of the paper as well as the final result. I will quickly try to summarize some specific answers to your questions:
(1) Regarding the scope of the paper. We do not generalize Zaslavsky's Theorem to scenarios that arise in deep models, as you suggest. For a single layer rectifier MLP it turns out that each hidden unit partitions the space using a hyperplane. This means that the number of regions we have is the number of regions you can get from an arrangement of $n$ hyperplanes. The answer to this question is given by Zaslavsky's Theorem. This gives an upper bound on the maximal number of regions a single hidden layer model has. To make our point (that deep models are more efficient) we now only need to show that there exist deep models that result in more regions. We do so by constructing a specific deep model for which we can compute a lower bound on the number of linear regions it generates. Asymptotically this lower bound is much larger than the maximal number of regions a single layer model has.
(2) 'region'/'region of linearity'/'input space partition space' all mean the same thing. It is a connected region of the input space U, such that within this region every hidden unit either stays positive or is equal to 0. Or in other words, within this region the MLP is a linear function with respect to its input.
(3) Regarding figure 2, we do use a new construction now that is depicted in more detail.
(4) Propositions 8 and 9 have been replaced completely. Proposition 8 describes a lower bound on how many new regions you can get within a layer, given the number of regions you have generated up to that layer. Proposition 9 uses Proposition 8 to give a lower bound on the total number of regions of a deep model.
(5) Regarding the statement about the output activation function. We simply meant to say that it is sufficient to study models with a linear output activation function to get a sense for MLPs with sigmoid or softmax output activations. To see this, do the following. Suppose you have some rectifier MLP with sigmoids at the output layer, and some target function $f$. Let $s$ stand for the sigmoid activation function. To get a sense of how well we can approximate $f$, we can look at how well we can approximate $s^{-1}(f)$ with a rectifier MLP that has a linear output activation function.
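For reference, the bound invoked in point (1) — the maximal number of regions determined by an arrangement of $n$ hyperplanes in $R^{n_0}$, attained when the hyperplanes are in general position — is the classical expression
$$r(n, n_0) = \sum_{i=0}^{n_0} \binom{n}{i},$$
which grows as $O(n^{n_0})$ for fixed $n_0$. This is a standard fact about hyperplane arrangements rather than something specific to this paper, and it matches the $O(n^{n_0})$ figure quoted for the shallow model in the abstract.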
bSaT4mmQt84Lx
On the number of inference regions of deep feed forward networks with piece-wise linear activations
[ "Razvan Pascanu", "Guido F. Montufar", "Yoshua Bengio" ]
This paper explores the complexity of deep feed forward networks with linear presynaptic couplings and rectified linear activations. This is a contribution to the growing body of work contrasting the representational power of deep and shallow network architectures. In particular, we offer a framework for comparing deep and shallow models that belong to the family of piece-wise linear functions based on computational geometry. We look at a deep (two hidden layers) rectifier multilayer perceptron (MLP) with linear output units and compare it with a single layer version of the model. In the asymptotic regime, as the number of units goes to infinity, if the shallow model has $2n$ hidden units and $n_0$ inputs, then the number of linear regions is $O(n^{n_0})$. A two layer model with $n$ hidden units on each layer has $\Omega(n^{n_0})$ linear regions. We consider this as a first step towards understanding the complexity of these models and argue that better constructions in this framework might provide more accurate comparisons (especially for the interesting case of when the number of hidden layers goes to infinity).
[ "number", "deep feed", "linear activations", "deep", "inference regions", "complexity", "framework", "units", "infinity", "linear presynaptic couplings" ]
https://openreview.net/pdf?id=bSaT4mmQt84Lx
https://openreview.net/forum?id=bSaT4mmQt84Lx
_xtmg4TcjB_5l
comment
1,392,750,600,000
fjB3fGG430jr7
[ "everyone" ]
[ "Razvan Pascanu" ]
ICLR.cc/2014/conference
2014
reply: Thank you for your comments, Reviewer 67e9. We have carefully considered them and integrated them into the new version of the paper, which is now available on arXiv (v5). In what follows, let us address some of your concerns.
'But there are several problems with the paper. The writing is unclear, and overall the paper feels like a preliminary draft, not ready for prime time.'
We think the paper has improved steadily since the initial submission and we hope that, in the new version, it is clear and up to ICLR quality standards. The paper offers a new perspective on a hard question and we think that the presented ideas can be useful for addressing a variety of related problems.
'The introduction can be tightened up.'
We shortened the Introduction.
'A more significant comment concerns the important attempt to give some intuition, at the top of page 3...'
In our attempt to provide the simplest possible example of the mechanism behind our proof, we unfortunately made a mistake. You are right: with a single input unit it is not possible to construct networks for which distinct units are active on different input intervals in the way that was claimed in that example. Thank you for pointing this out. We fixed the mistake. Proposition 7 (now Proposition 4) indicates the relevant conditions.
'I think the basic intuition is that a higher-level unit can act as an OR of several lower level regions...'
Exactly, this is the intuition that we were trying to convey in that paragraph. This intuition is the main mechanism behind our proof of the main theorem. We hope that with the changes made to the manuscript the construction is now clearer.
'Also, one would expect that the ability to represent AND operations would also lead to significant expressivity gains.'
The offered proof relies only on the OR operation, but one should be careful about what this means exactly. Specifically, we do not compute the OR between two values and provide the result as output of the layer. Instead, what this OR operation describes is that some particular output value can be obtained from various inputs: input1 OR input2 OR input3, etc. In this context an AND operation does not make sense. What we are describing here is a function that is not injective, i.e., which has distinct domain values that are mapped to the same output value. Of course, this non-injectivity is understood at the level of input regions rather than individual input values. However, you can see how AND becomes impossible to express in these terms.
'In addition, it would be very helpful to have some concrete example of a function that can be computed by a deep net of the sort analyzed here...'
Thank you for the suggestion. We included a description of more intuitive classes of functions computable by rectifier models in Section 5, together with toy examples with 2-dimensional inputs. While the theorem focuses more on the asymptotic regime, the new construction given in Section 5 shows that there are classes of functions that can be computed far more efficiently by deep networks than by shallow ones, even if the number of layers of the deep network is relatively small, say equal to 3 or 4.
'I also found the proof of Theorem 8 very hard to understand...'
We completely reworked that proof, paying attention to the consistency of the notation and keeping the mathematics precise. We extracted parts of the proof into propositions, in order to make the steps clearer.
'Finally, I would recommend exploring the relationship between the ideas in this paper and the extensive work in circuit complexity...' Thank you. We will look carefully at that literature. There might be some interesting connections.
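To make the OR/non-injectivity point above concrete, here is a tiny illustrative example (mine, not taken from the paper): a two-unit rectifier layer computing relu(x) + relu(-x) = |x| folds the two halves of the input line onto the same output values, so any breakpoint a later unit places on the folded axis reappears once in each of the two input intervals — the same output value is reached from input1 OR input2.

```python
import numpy as np

def fold(x):
    """Two rectifier units whose sum is relu(x) + relu(-x) = |x| (non-injective)."""
    return np.maximum(x, 0.0) + np.maximum(-x, 0.0)

def second_layer_unit(h):
    """A single rectifier unit in the next layer with a breakpoint at h = 1."""
    return np.maximum(h - 1.0, 0.0)

for x in np.linspace(-2.0, 2.0, 9):
    print(f"x = {x:+.1f}   fold(x) = {fold(x):.1f}   unit = {second_layer_unit(fold(x)):.1f}")

# The single breakpoint of the second-layer unit (at h = 1) shows up twice as a
# function of x (at x = -1 and x = +1): the upstream region has been 'duplicated'.
```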
bSaT4mmQt84Lx
On the number of inference regions of deep feed forward networks with piece-wise linear activations
[ "Razvan Pascanu", "Guido F. Montufar", "Yoshua Bengio" ]
This paper explores the complexity of deep feed forward networks with linear presynaptic couplings and rectified linear activations. This is a contribution to the growing body of work contrasting the representational power of deep and shallow network architectures. In particular, we offer a framework for comparing deep and shallow models that belong to the family of piece-wise linear functions based on computational geometry. We look at a deep (two hidden layers) rectifier multilayer perceptron (MLP) with linear output units and compare it with a single layer version of the model. In the asymptotic regime, as the number of units goes to infinity, if the shallow model has $2n$ hidden units and $n_0$ inputs, then the number of linear regions is $O(n^{n_0})$. A two layer model with $n$ hidden units on each layer has $\Omega(n^{n_0})$ linear regions. We consider this as a first step towards understanding the complexity of these models and argue that better constructions in this framework might provide more accurate comparisons (especially for the interesting case of when the number of hidden layers goes to infinity).
[ "number", "deep feed", "linear activations", "deep", "inference regions", "complexity", "framework", "units", "infinity", "linear presynaptic couplings" ]
https://openreview.net/pdf?id=bSaT4mmQt84Lx
https://openreview.net/forum?id=bSaT4mmQt84Lx
26C7gYY01ohqX
comment
1,392,750,720,000
s5xX4vEkuAnIR
[ "everyone" ]
[ "Razvan Pascanu" ]
ICLR.cc/2014/conference
2014
reply: We appreciate the detailed comments of Reviewer 2699. They were very helpful for preparing the present revision of the manuscript. In the following we address all comments of the reviewer and give a description of the changes made to the manuscript. In response to the general comments we
* Changed the title of the manuscript to ``On the number of response regions of deep feedforward networks with piecewise linear activations''
* Shortened the Introduction (and removed an erroneous example that was given there)
* Completely reworked the proof of the former Theorem 8 (now Theorem 1).
* Moved the asymptotic analysis to the appendix
* Included a new section (Section 5) discussing tighter bounds for deep models.
In the following we address the detailed comments.
* ``Computational geometry'' refers to the study of algorithms using geometry. Here, using the word ``computational'' is just a matter of taste. Our motivation is that a neural network is a computational system and an algorithm (compute the output of unit $i$ for $i \in [n]$, sum the outputs of units $i \in [n]$, etc.).
* We included a reference to Hajnal's work in the Introduction.
* We included pointers to the work of Montufar et al. and Martens et al. in the Introduction.
* We removed the long discussion from the Introduction and decided, instead, to include an example (Example 1) in the vicinity of the main theorem (Theorem 1).
* We included several definitions and worked on making our formulations more precise.
* We corrected a missing reference to Zaslavsky's work.
* We included the formal definition of ``general position'' and comments on infinitesimal perturbations.
* We no longer use the expression ``relative position''. For clarity, in the previous manuscript, the definition was as follows: Two arrangements have the same relative position if they are combinatorially equivalent, or more formally, if there is a bijection of their intersection posets, where the intersection poset of an arrangement is the set of all nonempty intersections of its hyperplanes partially ordered by reverse inclusion.
* We reformulated the former Proposition 6 in terms of scaling and shifting, moving technical details to the proof, and avoiding the use of the expression ``relative position''.
* We corrected the former Figure 3, which now shows just 1 dotted circle instead of 2.
* We explained the notion of ``essentialization'' in more detail. Proposition 4 describes the combinatorics of $n$-dimensional arrangements with $2$-dimensional essentialization; that is, arrangements of hyperplanes whose intersections with the span of their normal vectors form a $2$-dimensional arrangement (on the span of the normal vectors).
* We corrected '{0,...,n}, a < b' -> '{0,...,n} s.t. a < b'.
* We improved our formulations, especially about ``independent groups of units'' and ``enumeration'' of the hyperplanes in an arrangement.
* We made significant efforts in clarifying how the construction works, from bottom to top.
* We improved the formulations ``we find'' and ``groups'', trying to make the arguments more formal and clearer.
* The reviewer asked how Proposition 7 was used in the proof of the theorem. Using the same activation weights for a collection of units would cause them to behave identically. The entire collection of units would have an output of dimension at most one, which would not be useful for our proof. Perturbing the weights to produce a full dimensional matrix would work. A high level proof could be formulated in this way.
We found it important to give an explicit choice of weights for which certain well-defined properties hold, instead of relying only on high level arguments, in particular because this allows us to verify the accuracy of our intuitions. In fact, our construction is stable, in the sense that small perturbations of the specified weights cause only small perturbations of the computed function. The resulting perturbed function has at least as many linear regions as the original one.
* The word `decompose' was meant in the sense opposite to the way `compose' is used for compositions of functions, $f \circ g$. Thanks for the comment; we tried to use more precise expressions.
* The reviewer asked ``Page 9: What does it mean for a linear map to be 'with Ii-coordinates equal to...'? The 'Ii-coordinates' are the inputs to this map? The outputs?'' We tried to make this more precise in the revision. For clarity, the terminology is the standard one: A ``coordinate'' of a map $f : R^n \to R^m; (x_1,\ldots, x_n) \mapsto (f_1(x_1,\ldots, x_n), \ldots, f_m(x_1,\ldots, x_n))$ is any of the functions $f_i : R^n \to R; (x_1,\ldots, x_n) \mapsto f_i(x_1,\ldots, x_n)$ for $i=1,\ldots, m$. Given a subset $I$ of $\{1,\ldots, m\}$, the $I$-coordinates of the map $f$ are the functions $f_i$ with $i \in I$. For example, if $I=\{i_1,\ldots, i_{|I|}\} \subseteq \{1,\ldots, m\}$, we can consider the map defined by the $I$-coordinates of $f$, which is the map $f_I : R^n \to R^{|I|}; (x_1,\ldots, x_n) \mapsto (f_{i_1}(x_1,\ldots, x_n), \ldots, f_{i_{|I|}}(x_1,\ldots, x_n))$.
* The reviewer asked ``What is $R_i^{(1)}$? It is never defined...''. We worked on better explaining the notation and using it uniformly.
* The reviewer wrote ``Page 9: You say that something 'passed through' $\rho^2$ is the 'input' to the''... We improved the terminology.
* Also ``Page 9: How can $\rho_i^{(2)}(R_i^{(1)})$ be a 'subset' of a subspace that lives...'' We reworked these parts as well.
* The reviewer suggested a new construction for proving statements about deep models. We do not argue that our construction or our analysis yields the maximal number of regions of linearity. It merely demonstrates that deep models are exponentially more efficient than shallow models. In the revision we included a new construction of weights of deep rectifier networks (in Section 5), which shows tighter bounds for certain choices of layer widths. Other constructions exploiting higher dimensional versions of the former Proposition 7 are worth studying in the future, in order to arrive at yet tighter bounds.
* We moved the asymptotic analysis to the appendix. In the Discussion we included comments about the number of linear regions computable per parameter.
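As a small illustration of the coordinate-selection terminology clarified above (an example of mine with made-up sizes, and zero-based indices instead of the one-based indexing used in the reply):

```python
import numpy as np

rng = np.random.RandomState(0)

# A made-up linear map f : R^3 -> R^5, f(x) = W x + b, purely for illustration.
W, b = rng.randn(5, 3), rng.randn(5)
f = lambda x: W @ x + b

def coordinates(f, I):
    """The I-coordinates of f: keep only the output components indexed by I
    (zero-based here; the reply above indexes coordinates from 1)."""
    idx = np.array(sorted(I))
    return lambda x: f(x)[idx]

x = rng.randn(3)
f_I = coordinates(f, {0, 2, 4})
print(f(x))     # all five coordinates of f
print(f_I(x))   # only the selected coordinates, a map into R^{|I|}
```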
bSaT4mmQt84Lx
On the number of inference regions of deep feed forward networks with piece-wise linear activations
[ "Razvan Pascanu", "Guido F. Montufar", "Yoshua Bengio" ]
This paper explores the complexity of deep feed forward networks with linear presynaptic couplings and rectified linear activations. This is a contribution to the growing body of work contrasting the representational power of deep and shallow network architectures. In particular, we offer a framework for comparing deep and shallow models that belong to the family of piece-wise linear functions based on computational geometry. We look at a deep (two hidden layers) rectifier multilayer perceptron (MLP) with linear output units and compare it with a single layer version of the model. In the asymptotic regime, as the number of units goes to infinity, if the shallow model has $2n$ hidden units and $n_0$ inputs, then the number of linear regions is $O(n^{n_0})$. A two layer model with $n$ hidden units on each layer has $\Omega(n^{n_0})$ linear regions. We consider this as a first step towards understanding the complexity of these models and argue that better constructions in this framework might provide more accurate comparisons (especially for the interesting case of when the number of hidden layers goes to infinity).
[ "number", "deep feed", "linear activations", "deep", "inference regions", "complexity", "framework", "units", "infinity", "linear presynaptic couplings" ]
https://openreview.net/pdf?id=bSaT4mmQt84Lx
https://openreview.net/forum?id=bSaT4mmQt84Lx
ttNb0MnzpZ0_v
review
1,392,750,720,000
bSaT4mmQt84Lx
[ "everyone" ]
[ "Razvan Pascanu" ]
ICLR.cc/2014/conference
2014
review: We have posted a new revision (v5) of our manuscript. In this revision we address the reviewers' comments. The most important changes are:
* We reduced the length of the introduction and moved some of the detailed descriptions or intuitions near the relevant propositions or theorems.
* We fixed some shortcomings of an example that was given in the Introduction of the previous version of the manuscript.
* We added missing formal definitions and worked on making our notation more consistent and rigorous.
* We reworked the proof of the former Theorem 8 (now Theorem 1). We included new diagrams illustrating the steps of the proof. We also included an example (Example 1) illustrating how the components of the proof are put together.
* We added a new section (Section 5) describing a construction of weights for which deep models exhibit more linear regions than shallow ones, even for a small number of hidden layers. In this section we also illustrate specific functions that can be represented with this choice of weights.
* We formulated bounds in terms of the number of regions per parameter computable by rectifier networks. These bounds behave similarly to the bounds expressed in terms of the number of units, showing that deep models are exponentially more efficient than shallow models.
QDm4QXNOsuQVE
k-Sparse Autoencoders
[ "Alireza Makhzani", "Brendan Frey" ]
Recently, it has been observed that when representations are learnt in a way that encourages sparsity, improved performance is obtained on classification tasks. These methods involve combinations of activation functions, sampling steps and different kinds of penalties. To investigate the effectiveness of sparsity by itself, we propose the k-sparse autoencoder, which is a linear model, but where in hidden layers only the k highest activities are kept. When applied to the MNIST and NORB datasets, we find that this method achieves better classification results than denoising autoencoders, networks trained with dropout, and restricted Boltzmann machines. k-sparse autoencoders are simple to train and the encoding stage is very fast, making them well-suited to large problem sizes, where conventional sparse coding algorithms cannot be applied.
[ "autoencoders", "sparsity", "representations", "way", "performance", "classification tasks", "methods", "combinations", "activation functions", "steps" ]
https://openreview.net/pdf?id=QDm4QXNOsuQVE
https://openreview.net/forum?id=QDm4QXNOsuQVE
Gg_aFHZkvdGNC
review
1,388,841,360,000
QDm4QXNOsuQVE
[ "everyone" ]
[ "David Krueger" ]
ICLR.cc/2014/conference
2014
review: I have not read the paper that carefully. But the idea of the k-sparse autoencoder seems very similar to the Orthogonal Matching Pursuit (OMP-k) training and encoding used in Coates and Ng (http://www.stanford.edu/~acoates/papers/coatesng_icml_2011.pdf). The difference, it seems to me, is that OMP-k allows less than k units to be active, and also the scheduling idea (4.2.1), and the alpha multiplier for the encoding stage. Is there something else I am missing that distinguishes your approach? Otherwise, I would like to see comparisons to OMP-k, and to a simple threshold encoding approach as in Coates and Ng.
QDm4QXNOsuQVE
k-Sparse Autoencoders
[ "Alireza Makhzani", "Brendan Frey" ]
Recently, it has been observed that when representations are learnt in a way that encourages sparsity, improved performance is obtained on classification tasks. These methods involve combinations of activation functions, sampling steps and different kinds of penalties. To investigate the effectiveness of sparsity by itself, we propose the k-sparse autoencoder, which is a linear model, but where in hidden layers only the k highest activities are kept. When applied to the MNIST and NORB datasets, we find that this method achieves better classification results than denoising autoencoders, networks trained with dropout, and restricted Boltzmann machines. k-sparse autoencoders are simple to train and the encoding stage is very fast, making them well-suited to large problem sizes, where conventional sparse coding algorithms cannot be applied.
[ "autoencoders", "sparsity", "representations", "way", "performance", "classification tasks", "methods", "combinations", "activation functions", "steps" ]
https://openreview.net/pdf?id=QDm4QXNOsuQVE
https://openreview.net/forum?id=QDm4QXNOsuQVE
01dx0NWMEZGjr
review
1,391,148,960,000
QDm4QXNOsuQVE
[ "everyone" ]
[ "Phil Bachman" ]
ICLR.cc/2014/conference
2014
review: This paper should be citing: 'Smooth Sparse Coding via Marginal Regression for Learning Sparse Representations', by Krishnakumar Balasubramanian, Kai Yu, and Guy Lebanon. In their work, they use a sparse coding subroutine effectively identical to your sparse coding method. The only difference in their encoding step is that they threshold the magnitude-sorted set of potential coefficients by cumulative L1 norm rather than cumulative L0 norm (a minor difference). They also add a regularization term to their objective designed to approximately minimize coherence of the learned dictionary, which may be worth trying as an addition to your current approach.
This general class of techniques, i.e. marginal regression, is reasonably well-known and has been investigated previously as a quick-and-dirty approximation to the Lasso. For more detail, see: 'A Comparison of the Lasso and Marginal Regression', by Genovese et al. I haven't looked at this paper in a while, but it should contribute to your theory, as they present results similar to yours, but in the context of approximating the Lasso in a linear regression setting.
It might be interesting to try a 'marginalized dropout' encoding at test time, in which each coefficient is scaled by its probability of being among the top-k coefficients when the full coefficient set is subject to, e.g., 50% dropout. This would correspond to a simple rescaling of each coefficient by its location in the magnitude-sorted list. The true top-k would still be included fully in the encoding, while coefficients outside the true top-k would quickly shrink towards 0 as you move away from the true top-k. The shrinkage factors could be pre-computed for all possible sorted list positions based on the CDF of a Binomial distribution. If 'true' sparsity is desired, the shrunken coefficients could be hard-thresholded at some small epsilon. This would 'smooth' the encoding, perhaps removing aliasing effects that may occur with a hard threshold. This would be a fairly natural encoding to use if dictionary elements were also subject to (unmarginalized) dropout at training time. The additional computational cost would be trivial, as shrinkage and thresholding would be applied simultaneously to the magnitude-sorted coefficient list with a single element-wise vector product.
Paper 1: http://www.cc.gatech.edu/~lebanon/papers/ssc_icml13.pdf
Paper 2: http://www.stat.cmu.edu/~jiashun/Research/Year/Marginal.pdf
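A rough sketch of one possible reading of the 'marginalized dropout' suggestion above (my interpretation, not something proposed in the paper): condition on a coefficient of magnitude-rank r being retained; it then lands in the top k of the retained set exactly when at most k-1 of the r-1 larger coefficients survive, so its shrinkage factor is the CDF of a Binomial(r-1, 0.5) evaluated at k-1 — equal to 1 for the true top k and decaying for later ranks.

```python
import numpy as np
from scipy.stats import binom

def marginalized_topk(z, k, p_drop=0.5):
    """Scale each coefficient by P(it stays in the top-k of the retained set
    under independent dropout), given that the coefficient itself is retained.
    One possible reading of the suggestion above, not the paper's method."""
    order = np.argsort(-np.abs(z))          # indices sorted by decreasing magnitude
    shrink = np.empty_like(z, dtype=float)
    for rank, j in enumerate(order):        # rank 0 = largest magnitude
        # at most k-1 of the `rank` larger-magnitude coefficients may survive
        shrink[j] = binom.cdf(k - 1, rank, 1.0 - p_drop)
    return z * shrink

z = np.array([3.0, -0.2, 1.5, 0.7, -2.4, 0.1])
print(marginalized_topk(z, k=2))  # top-2 kept fully, the rest smoothly shrunk
```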
QDm4QXNOsuQVE
k-Sparse Autoencoders
[ "Alireza Makhzani", "Brendan Frey" ]
Recently, it has been observed that when representations are learnt in a way that encourages sparsity, improved performance is obtained on classification tasks. These methods involve combinations of activation functions, sampling steps and different kinds of penalties. To investigate the effectiveness of sparsity by itself, we propose the k-sparse autoencoder, which is a linear model, but where in hidden layers only the k highest activities are kept. When applied to the MNIST and NORB datasets, we find that this method achieves better classification results than denoising autoencoders, networks trained with dropout, and restricted Boltzmann machines. k-sparse autoencoders are simple to train and the encoding stage is very fast, making them well-suited to large problem sizes, where conventional sparse coding algorithms cannot be applied.
[ "autoencoders", "sparsity", "representations", "way", "performance", "classification tasks", "methods", "combinations", "activation functions", "steps" ]
https://openreview.net/pdf?id=QDm4QXNOsuQVE
https://openreview.net/forum?id=QDm4QXNOsuQVE
l6rclPZecziqO
review
1,392,085,260,000
QDm4QXNOsuQVE
[ "everyone" ]
[ "anonymous reviewer d08c" ]
ICLR.cc/2014/conference
2014
title: review of k-Sparse Autoencoders
review: The authors propose an autoencoder with linear encoder and decoder, but with sparsity that keeps only k elements in the hidden layer nonzero. They show that it works as well as or better than more complicated methods.
Novelty: Simple but works
Quality: Good
Details:
- The paper introduces a very simple idea which I am sure many people not only thought of but implemented, including me. However, the main point here is that the authors actually made it work well and made a connection to a sparse coding algorithm. One of the tricks of making it work seems to be to start with a large number of allowed nonzero elements and then decrease it; otherwise, many filters would never be used.
- Is there a mistake in the algorithm box as presented, x = Wz + b'? Shouldn't the z be replaced by something like z_Gamma, where the latter is obtained from z by setting elements that are not in the group of k largest to zero? Because that's what the description in the rest of the paper implies, for example in 2.2.
- Table: it would be good to explain what the net is in Table 3's caption.
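For concreteness, here is a short sketch of the encoding/decoding step as I read it from the abstract and this review, with the code explicitly sparsified before reconstruction (presumably what z_Gamma is meant to denote). Variable names, tied weights, and the use of magnitudes for the top-k selection are my assumptions, not necessarily the authors' exact choices.

```python
import numpy as np

def k_sparse_forward(x, W, b_enc, b_dec, k):
    """One forward pass of a k-sparse autoencoder as described in the abstract:
    linear encoding, keep only the k largest hidden activities, then linear
    reconstruction from the sparsified code."""
    z = W.T @ x + b_enc                      # linear hidden activities
    z_sparse = np.zeros_like(z)
    top_k = np.argsort(-np.abs(z))[:k]       # support of the k largest activities
                                             # (by magnitude here; the abstract says
                                             # 'k highest activities')
    z_sparse[top_k] = z[top_k]               # everything else is set to zero
    x_hat = W @ z_sparse + b_dec             # reconstruction uses the sparse code
    return z_sparse, x_hat

rng = np.random.RandomState(0)
n_in, n_hid, k = 784, 1000, 25
W = 0.01 * rng.randn(n_in, n_hid)
x = rng.rand(n_in)
z_sparse, x_hat = k_sparse_forward(x, W, np.zeros(n_hid), np.zeros(n_in), k)
print("nonzeros in code:", np.count_nonzero(z_sparse))   # prints k here
```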
QDm4QXNOsuQVE
k-Sparse Autoencoders
[ "Alireza Makhzani", "Brendan Frey" ]
Recently, it has been observed that when representations are learnt in a way that encourages sparsity, improved performance is obtained on classification tasks. These methods involve combinations of activation functions, sampling steps and different kinds of penalties. To investigate the effectiveness of sparsity by itself, we propose the k-sparse autoencoder, which is a linear model, but where in hidden layers only the k highest activities are kept. When applied to the MNIST and NORB datasets, we find that this method achieves better classification results than denoising autoencoders, networks trained with dropout, and restricted Boltzmann machines. k-sparse autoencoders are simple to train and the encoding stage is very fast, making them well-suited to large problem sizes, where conventional sparse coding algorithms cannot be applied.
[ "autoencoders", "sparsity", "representations", "way", "performance", "classification tasks", "methods", "combinations", "activation functions", "steps" ]
https://openreview.net/pdf?id=QDm4QXNOsuQVE
https://openreview.net/forum?id=QDm4QXNOsuQVE
o1QJezhmJPem2
comment
1,389,824,640,000
ZpJXvOG3-Cv6U
[ "everyone" ]
[ "David Krueger" ]
ICLR.cc/2014/conference
2014
reply: I agree on all the points made about Theorem 3.1. 'supp_k(z) = supp_k(W^T*x)' would be clearer as well as more succinct. I also encourage you to put a little box at the end of the proof. And if I understand it correctly, you can use a weaker condition, namely k*mu < z_k/z_1 (note the strict inequality).
QDm4QXNOsuQVE
k-Sparse Autoencoders
[ "Alireza Makhzani", "Brendan Frey" ]
Recently, it has been observed that when representations are learnt in a way that encourages sparsity, improved performance is obtained on classification tasks. These methods involve combinations of activation functions, sampling steps and different kinds of penalties. To investigate the effectiveness of sparsity by itself, we propose the k-sparse autoencoder, which is a linear model, but where in hidden layers only the k highest activities are kept. When applied to the MNIST and NORB datasets, we find that this method achieves better classification results than denoising autoencoders, networks trained with dropout, and restricted Boltzmann machines. k-sparse autoencoders are simple to train and the encoding stage is very fast, making them well-suited to large problem sizes, where conventional sparse coding algorithms cannot be applied.
[ "autoencoders", "sparsity", "representations", "way", "performance", "classification tasks", "methods", "combinations", "activation functions", "steps" ]
https://openreview.net/pdf?id=QDm4QXNOsuQVE
https://openreview.net/forum?id=QDm4QXNOsuQVE
88wtvqkZBJaOo
comment
1,392,782,820,000
PPfbPOaNgVPWE
[ "everyone" ]
[ "Alireza Makhzani" ]
ICLR.cc/2014/conference
2014
reply: We very much appreciate your constructive feedback.
1. 'the method is a bit flawed because it does not control sparsity across samples (yielding to possibly many dead units). It would be very helpful to add experiments with a few different values of code dimensionality. For instance on MNIST, it would be interesting to try: 1000, 2000 and 5000.'
Thank you for raising this concern. All the reported experiments on NORB have been done with 4000 hidden units. We have also done experiments with 4000 hidden units on MNIST and, with a proper scheduling of k, we were able to train almost all the filters of the autoencoder and obtain a classification result of 1.15% before fine-tuning and 0.99% after fine-tuning. In the case of MNIST, when we start off with a large sparsity level, almost all the filters get trained in the first 50 epochs. As we decrease the sparsity level, the filters start evolving into more global filters and the length of digit strokes starts increasing. We didn't report the results with 4000 hidden units on MNIST so that we could have a fair comparison with other works that use 1000 hidden units. Based on your feedback, we will include this result and the details of the experiment in the final manuscript.
2. 'The paper is fairly incremental in its novelty. There are several other papers that used similar ideas'
Thank you for raising concerns about the related works. We would like to point out that there are important differences between our paper and the works you mentioned. We compared our method to the marginal regression method in response to Phil Bachman's comment (see above). In short, while we are addressing the conventional sparse coding problem with a Euclidean cost function, Krishnakumar's paper defines a different cost function using a non-parametric kernel function applied to the data. We derived our operator from iterative hard thresholding, which is completely different from marginal regression and behaves differently. Another difference is that marginal regression uses an L1 penalty on the absolute value of the least squares coefficients to promote sparsity. We have tried using the L1 norm instead of the L0 norm in our algorithm and we were not able to train the model. So, using the L0 norm makes a significant difference. We use our operator to regularize deep neural nets and do supervised learning, while they use marginal regression in an unsupervised fashion and then train an SVM on top of that for classification. The analysis we provide for our algorithm is quite different and in our view complementary to the results in the paper that you mentioned. Based on your feedback, we will include this comparison in our paper.
Although there are interesting connections between the 'Compete to Compute' paper and our paper, the focus and details of the two works are rather different. The focus of our paper is sparse coding, while the 'Compete to Compute' paper splits the hidden units into several groups of two hidden units and picks the largest hidden unit within each group. So exactly half of the hidden units are always active at any time (similar to 50% dropout) and there is no sparsity in the hidden representation. Our operator is also quite different, as it picks the k largest hidden units among all hidden units while they pick a single hard winner within each group.
3. 'lack of comparison'
We have compared our method to several methods including dropout, denoising autoencoders, DBNs, DBMs and third-order RBMs.
Regarding the LISTA and PSD methods, we have cited them and made a brief comparison in the introduction of the paper. Our error rate on MNIST before fine-tuning is 1.38%, while LISTA's best error rate is 2.15% and PSD's is 4.5% (1000 samples/class). Based on your feedback, we will also add these numerical comparisons to the final manuscript. Also, in Coates and Ng's paper, the thresholding operator is only used at test time and training is performed using other algorithms, such as OMP-k, which are very slow. Another difference is that they use a fixed and pre-defined soft thresholding operator and do not have control over the sparsity level, while we are using a hard thresholding operator in which the threshold is adaptive and is equal to the k-th largest element of the input. Thank you again for bringing up these issues, since it helps us better place our work in context.
QDm4QXNOsuQVE
k-Sparse Autoencoders
[ "Alireza Makhzani", "Brendan Frey" ]
Recently, it has been observed that when representations are learnt in a way that encourages sparsity, improved performance is obtained on classification tasks. These methods involve combinations of activation functions, sampling steps and different kinds of penalties. To investigate the effectiveness of sparsity by itself, we propose the k-sparse autoencoder, which is a linear model, but where in hidden layers only the k highest activities are kept. When applied to the MNIST and NORB datasets, we find that this method achieves better classification results than denoising autoencoders, networks trained with dropout, and restricted Boltzmann machines. k-sparse autoencoders are simple to train and the encoding stage is very fast, making them well-suited to large problem sizes, where conventional sparse coding algorithms cannot be applied.
[ "autoencoders", "sparsity", "representations", "way", "performance", "classification tasks", "methods", "combinations", "activation functions", "steps" ]
https://openreview.net/pdf?id=QDm4QXNOsuQVE
https://openreview.net/forum?id=QDm4QXNOsuQVE
oV1qV9LKY43KA
comment
1,392,783,300,000
l6rclPZecziqO
[ "everyone" ]
[ "Alireza Makhzani" ]
ICLR.cc/2014/conference
2014
reply: Thank you very much for your helpful comments. It will be straightforward for us to address the ambiguities you raised about the content of the algorithm box, and produce a version that is clearer in the final manuscript.
QDm4QXNOsuQVE
k-Sparse Autoencoders
[ "Alireza Makhzani", "Brendan Frey" ]
Recently, it has been observed that when representations are learnt in a way that encourages sparsity, improved performance is obtained on classification tasks. These methods involve combinations of activation functions, sampling steps and different kinds of penalties. To investigate the effectiveness of sparsity by itself, we propose the k-sparse autoencoder, which is a linear model, but where in hidden layers only the k highest activities are kept. When applied to the MNIST and NORB datasets, we find that this method achieves better classification results than denoising autoencoders, networks trained with dropout, and restricted Boltzmann machines. k-sparse autoencoders are simple to train and the encoding stage is very fast, making them well-suited to large problem sizes, where conventional sparse coding algorithms cannot be applied.
[ "autoencoders", "sparsity", "representations", "way", "performance", "classification tasks", "methods", "combinations", "activation functions", "steps" ]
https://openreview.net/pdf?id=QDm4QXNOsuQVE
https://openreview.net/forum?id=QDm4QXNOsuQVE
mOSRjNNtRhj91
review
1,391,858,040,000
QDm4QXNOsuQVE
[ "everyone" ]
[ "anonymous reviewer c32b" ]
ICLR.cc/2014/conference
2014
title: review of k-Sparse Autoencoders
review:
* Brief summary of the paper: The paper investigates a very simple heuristic technique for a simple autoencoder to learn a sparse encoding: the nonlinearity consists in only retaining the top-k linear activations among the hidden units and setting the others to zero. With the thus trained k-sparse autoencoders, the authors were able to outperform (on MNIST and NORB) features learned with denoising autoencoders, RBMs or dropout, as well as deep networks pre-trained with those techniques and fine-tuned.
* Assessment: This is an interesting investigation of a very simple approach to obtaining sparse representations that empirically seems to perform very well. It does have a number of weaknesses:
- I find it misleading to call this a *linear model* as in the abstract. A piecewise linear function is *not* a linear function. The model does use a non-linear sparsification operation.
- For such a simple approach, I find the actual description of the algorithm (especially in the algorithm box) disappointingly fuzzy, unclear, confusing and probably wrong: What is the exact objective being optimized? Is it always squared reconstruction error? It is not written in the box. Also, do you really reconstruct x^ from a z that has not been sparsified (as is written in step 1)? This is contrary to my understanding from reading the rest of the paper. I believe it would be much clearer to introduce an explicit sparsification step before the reconstruction. Similarly, with your definition of supp, it looks like the result of your sparse encoding h is a *set of indices* rather than a sparse vector. Is this intended? Wouldn't it be clearer to define an operation that returns a sparse vector rather than a set of indices? The algorithm box should be rewritten more formally, removing any ambiguity.
- Section 3.3: While I find the discussion on the importance of decoherence interesting, I do not believe you can formally draw your conclusion from it, since you do not have the strict equality x = Wz (perfect reconstruction) that your theorem depends on, but only an approximate reconstruction. So I would soften the final claims.
- I wonder how thoroughly you have explored the hyper-parameter space for the other pre-training algorithms you compare yourself with, especially those that are expected to influence sparsity or control capacity somehow, e.g. the noise level for denoising autoencoders and dropout? Did you only try a single a-priori chosen value? If so, the comparisons might be a little unfair since you hyper-optimized your alpha on the validation set.
* Pros and Cons:
Pros:
+ interesting approach due to its simplicity
+ very good empirical classification performance.
Cons:
- confusing description of the algorithm (in algorithm box);
- possibly insufficient exploration of hyper-parameters of competing algorithms (relative to the amount of tweaking of the proposed approach).
QDm4QXNOsuQVE
k-Sparse Autoencoders
[ "Alireza Makhzani", "Brendan Frey" ]
Recently, it has been observed that when representations are learnt in a way that encourages sparsity, improved performance is obtained on classification tasks. These methods involve combinations of activation functions, sampling steps and different kinds of penalties. To investigate the effectiveness of sparsity by itself, we propose the k-sparse autoencoder, which is a linear model, but where in hidden layers only the k highest activities are kept. When applied to the MNIST and NORB datasets, we find that this method achieves better classification results than denoising autoencoders, networks trained with dropout, and restricted Boltzmann machines. k-sparse autoencoders are simple to train and the encoding stage is very fast, making them well-suited to large problem sizes, where conventional sparse coding algorithms cannot be applied.
[ "autoencoders", "sparsity", "representations", "way", "performance", "classification tasks", "methods", "combinations", "activation functions", "steps" ]
https://openreview.net/pdf?id=QDm4QXNOsuQVE
https://openreview.net/forum?id=QDm4QXNOsuQVE
BB-qwYPO53cZd
comment
1,391,370,300,000
TTfOb3K5AclE7
[ "everyone" ]
[ "Alireza Makhzani" ]
ICLR.cc/2014/conference
2014
reply: Thank you, Phil, for referring us to Krishnakumar's paper. We read this paper, and although there are interesting connections between the two works, there are important differences as well: 1) While we are addressing the conventional sparse coding problem with a Euclidean cost function, Krishnakumar's paper defines a different cost function using a non-parametric kernel function applied to the data. As a result, their hidden representation models the neighborhood of training examples rather than reconstructing the individual samples. 2) Although iterative hard thresholding (which we use) and marginal regression are both alternatives to LASSO, they are quite different algorithms and may behave quite differently. Iterative hard thresholding (IHT) is an iterative procedure for sparse recovery that uses an L0 projection and refines the estimated support set at each iteration. If we use only the first iteration of IHT to learn the dictionary, we obtain our k-sparse autoencoder. But we can use more iterations (at training or test time) and get better results at the price of higher computational cost. See http://www.see.ed.ac.uk/~tblumens/papers/BDIHT.pdf for more details about IHT. Marginal regression, however, is a different algorithm that uses an L1 penalty on the absolute value of the least-squares coefficients to promote sparsity. We have tried using an L1 norm instead of the L0 norm in our algorithm and we were not able to train the model. So, using the L0 norm makes a significant difference. We have also done experiments using the absolute value of the hidden representation and observed that taking the absolute value hurts performance in our setting. Based on your feedback, perhaps these results should be included? 3) We have been able to regularize deep neural nets using our method and obtain better results than dropout and the denoising autoencoder on both MNIST and NORB. Using the neural nets gives us the advantage of fine-tuning a 'supervised learning task' with our thresholding operator. But Krishnakumar's paper obtains features modeling the neighborhood in an 'unsupervised fashion' and then uses an SVM on top of that for classification. So our algorithm could also be viewed as a regularization method for deep nets using sparsity. 4) The analysis we provide for our algorithm is quite different and, in our view, complementary to the results in the papers that you mentioned. Thank you for bringing this other work to our attention, since it helps us better place our work in context, and also thanks for mentioning the 'marginalized dropout' idea, as we think it is definitely worth trying as an addition to our work.
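As an illustration of point (2) in the reply above (one iteration of IHT from a zero initialization reduces to the k-sparse encoding H_k(W^T x)), here is a minimal sketch. It assumes atoms stored as columns of W, omits biases, and uses a unit step size; the function names are hypothetical and not taken from either paper.

```python
import numpy as np

def hard_threshold_k(z, k):
    """L0 projection: keep the k largest-magnitude entries of z, zero out the rest."""
    out = np.zeros_like(z)
    idx = np.argsort(-np.abs(z))[:k]
    out[idx] = z[idx]
    return out

def iht_encode(x, W, k, n_iters=1, step=1.0):
    """Iterative hard thresholding for z ~ argmin ||x - W z||^2  s.t.  ||z||_0 <= k.

    With n_iters=1 and a zero start this is exactly H_k(W^T x), i.e. the
    k-sparse autoencoder's encoding step; more iterations refine the support.
    """
    z = np.zeros(W.shape[1])
    for _ in range(n_iters):
        z = hard_threshold_k(z + step * (W.T @ (x - W @ z)), k)
    return z
```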
QDm4QXNOsuQVE
k-Sparse Autoencoders
[ "Alireza Makhzani", "Brendan Frey" ]
Recently, it has been observed that when representations are learnt in a way that encourages sparsity, improved performance is obtained on classification tasks. These methods involve combinations of activation functions, sampling steps and different kinds of penalties. To investigate the effectiveness of sparsity by itself, we propose the k-sparse autoencoder, which is a linear model, but where in hidden layers only the k highest activities are kept. When applied to the MNIST and NORB datasets, we find that this method achieves better classification results than denoising autoencoders, networks trained with dropout, and restricted Boltzmann machines. k-sparse autoencoders are simple to train and the encoding stage is very fast, making them well-suited to large problem sizes, where conventional sparse coding algorithms cannot be applied.
[ "autoencoders", "sparsity", "representations", "way", "performance", "classification tasks", "methods", "combinations", "activation functions", "steps" ]
https://openreview.net/pdf?id=QDm4QXNOsuQVE
https://openreview.net/forum?id=QDm4QXNOsuQVE
ZpJXvOG3-Cv6U
review
1,387,854,420,000
QDm4QXNOsuQVE
[ "everyone" ]
[ "Markus Thom" ]
ICLR.cc/2014/conference
2014
review: This is an interesting paper that I would like to comment on and ask the authors some questions about. The paper studies auto-encoders where the internal representation is approximated online by a vector whose number of non-vanishing coordinates is restricted (in other words, the internal representation is projected onto the set of all vectors with a certain L0 pseudo-norm). The proposed model, 'k-sparse autoencoders', is put in context with Arian Maleki's 'iterative thresholding with inversion' algorithm for inference of sparse code words, and a criterion is given to identify the case when one iteration is enough for perfect inference. Experiments on MNIST and small NORB show that superior classification performance (wrt. RBMs, dropout, denoising auto-encoders) can be achieved when using the proposed model to generate features and processing them with logistic regression, and that the classification performance is competitive when all the models are additionally fine-tuned. This is a cool result, since the only non-linearity used for the features is the projection. We have done something similar; see Section 3 of the paper available at http://jmlr.org/papers/v14/thom13a.html, where projections onto sets on which certain sparseness measures (including the L0 pseudo-norm) attain a constant value are used as the neural transfer function in a hybrid of an auto-encoder and an MLP. Inference of the internal representation can be understood there as carrying out the first iteration of a projected Landweber algorithm. Perhaps the authors would like to discuss the relationship between the two approaches? The last sentence in the discussion after the proof of Theorem 3.1 is a bit puzzling. The theorem shows that if mu is small, then the supports of z and W^T*x are identical. The aforementioned sentence says that these supports are identical, hence mu must be small. I believe this is the converse of the theorem's statement and has not been proven, since there may be reasons other than mu being small. The description of the k schedule in Section 4.2.1 is ambiguous. Does it mean that, when we have 100 epochs, say, k follows a linear function for epochs 1 through 50 and then remains at the minimum level for epochs 51 to 100, or does it mean that in each epoch k is adjusted for the first half of the presented samples and stays at the minimum for the remaining samples of that epoch, with the schedule starting all over again in the next epoch? There are still some dead hidden units in the figures on page 6, even for k = 70 on MNIST. Would it help to increase the initial k value in the schedule, or maybe add some small random numbers to z (with some annealed variance) after setting the small entries to zero, so that backprop adjusts all the hidden units? Just a few things I noticed while reading through the manuscript: - The formulation of the claim of Theorem 3.1 could be altered to be more succinct, e.g. 'supp_k(z) = supp_k(W^T*x)'. In the proof, i should be from {1, ..., k}, since i = 0 doesn't seem to make sense here. - Typo on page 3, left column: 'tarining'
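On the point about Theorem 3.1 above: assuming mu denotes the mutual coherence of the dictionary (the usual quantity in incoherence arguments of this kind; this is an assumption, not a quote from the paper), it can be computed directly for a learned W to check how small it actually is. The sketch below is illustrative only.

```python
import numpy as np

def mutual_coherence(W):
    """Largest absolute normalized inner product between two distinct columns (atoms) of W."""
    Wn = W / np.linalg.norm(W, axis=0, keepdims=True)  # normalize each atom
    G = np.abs(Wn.T @ Wn)                              # absolute Gram matrix
    np.fill_diagonal(G, 0.0)                           # ignore self-similarity
    return float(G.max())
```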
QDm4QXNOsuQVE
k-Sparse Autoencoders
[ "Alireza Makhzani", "Brendan Frey" ]
Recently, it has been observed that when representations are learnt in a way that encourages sparsity, improved performance is obtained on classification tasks. These methods involve combinations of activation functions, sampling steps and different kinds of penalties. To investigate the effectiveness of sparsity by itself, we propose the k-sparse autoencoder, which is a linear model, but where in hidden layers only the k highest activities are kept. When applied to the MNIST and NORB datasets, we find that this method achieves better classification results than denoising autoencoders, networks trained with dropout, and restricted Boltzmann machines. k-sparse autoencoders are simple to train and the encoding stage is very fast, making them well-suited to large problem sizes, where conventional sparse coding algorithms cannot be applied.
[ "autoencoders", "sparsity", "representations", "way", "performance", "classification tasks", "methods", "combinations", "activation functions", "steps" ]
https://openreview.net/pdf?id=QDm4QXNOsuQVE
https://openreview.net/forum?id=QDm4QXNOsuQVE
tmV9T33Eo0myh
review
1,391,159,880,000
QDm4QXNOsuQVE
[ "everyone" ]
[ "Phil Bachman" ]
ICLR.cc/2014/conference
2014
review: I apologize for the clutter. The web interface was not responding on my end, though it apparently processed most of my requests on the server side of things. If anyone has moderator privileges, I would appreciate it if all but one of my earlier comments could be removed.