Dataset schema (column: type, observed range):
forum_id: string (8–20 chars)
forum_title: string (4–171 chars)
forum_authors: sequence (0–25 items)
forum_abstract: string (4–4.27k chars)
forum_keywords: sequence (1–10 items)
forum_pdf_url: string (38–50 chars)
forum_url: string (40–52 chars)
note_id: string (8–13 chars)
note_type: string (6 classes)
note_created: int64 (1,360B–1,736B; millisecond Unix timestamps)
note_replyto: string (8–20 chars)
note_readers: sequence (1–5 items)
note_signatures: sequence (1 item)
venue: string (26 classes)
year: string (11 classes)
note_text: string (10–16.6k chars)
jbLdjjxPd-b2l
Natural Gradient Revisited
[ "Razvan Pascanu", "Yoshua Bengio" ]
The aim of this paper is two-fold. First, we intend to show that Hessian-Free optimization (Martens, 2010) and Krylov Subspace Descent (Vinyals and Povey, 2012) can be described as implementations of Natural Gradient Descent due to their use of the extended Gauss-Newton approximation of the Hessian. Second, we re-derive Natural Gradient from basic principles, contrasting the differences between the two versions of the algorithm that exist in the literature.
[ "natural gradient", "aim", "first", "optimization", "martens", "subspace descent", "vinyals", "povey", "implementations" ]
https://openreview.net/pdf?id=jbLdjjxPd-b2l
https://openreview.net/forum?id=jbLdjjxPd-b2l
iPpSPn9bTwn4Y
review
1,363,291,260,000
jbLdjjxPd-b2l
[ "everyone" ]
[ "anonymous reviewer 6f71" ]
ICLR.cc/2013/conference
2013
review: I read the updated version of the paper. It has indeed been improved substantially, and my concerns were addressed. It should clearly be accepted in its current form.
LuEnLatTnvu1A
comment
1,363,216,800,000
ttBP0QO8pKtvq
[ "everyone" ]
[ "Razvan Pascanu" ]
ICLR.cc/2013/conference
2013
reply: We've made drastic changes to the paper, which should be visible starting Thu, 14 Mar 2013 00:00:00 GMT. We have also made the paper available at http://www-etud.iro.umontreal.ca/~pascanur/papers/ICLR_natural_gradient.pdf * Regarding the differences between equation (1) and equation (7), they come from moving from p(z) to the conditional p(y|x). This is emphasized in the text introducing equation (7), which explains in more detail how one goes from (1) to (7). * Regarding the final arguments and the overall presentation of the paper, we have reworked the write-up in a way that you will hopefully find satisfactory.
aaN5bD_cRqbLk
review
1,363,216,740,000
jbLdjjxPd-b2l
[ "everyone" ]
[ "Razvan Pascanu" ]
ICLR.cc/2013/conference
2013
review: We would like to thank all the reviewers for their feedback and insights. We have submitted a new version of the paper (it should appear on arXiv on Thu, 14 Mar 2013 00:00:00 GMT, though it can be retrieved now from http://www-etud.iro.umontreal.ca/~pascanur/papers/ICLR_natural_gradient.pdf), and we kindly ask the reviewers to look at it. The new version contains drastic changes that we believe improve the quality of the paper. In a few bullet points, the changes are: * The title of the paper was changed to reflect our focus on natural gradient for deep neural networks * The wording and structure of the paper were slightly changed to better reflect the final conclusions * We improved the notation, providing more details where they were missing * Additional plots were added as empirical support for some of our hypotheses * We've added both pseudo-code and a link to a Theano-based implementation of the algorithm
XXo-vXWa-ZvQL
review
1,363,288,920,000
jbLdjjxPd-b2l
[ "everyone" ]
[ "Razvan Pascanu, Yoshua Bengio" ]
ICLR.cc/2013/conference
2013
review: The revised arXiv paper is available now, and we have replied to the reviewers' comments.
ttBP0QO8pKtvq
review
1,361,998,920,000
jbLdjjxPd-b2l
[ "everyone" ]
[ "anonymous reviewer 6a77" ]
ICLR.cc/2013/conference
2013
title: review of Natural Gradient Revisited review: GENERAL COMMENTS The paper promises to establish the relation between Amari's natural gradient and many methods that are called Natural Gradient or can be related to Natural Gradient because they use Gauss-Newton approximations of the Hessian. The problem is that I find the paper misleading. In particular, the G of equation (1) is not the same as the G of equation (7). The authors certainly point out that the crux of the matter is to understand which distribution is used to approximate the Fisher information matrix, but the final argument is a mess. This should be done a lot more rigorously (and a lot less informally). As the paper stands, it only increases the level of confusion. SPECIFIC COMMENTS * (ichi Amari, 1997) -> (Amari, 1997) * differ -> defer * Due to this surjection: a surjection is something else! * Equation (1): please make clear that the expectation is an expectation over z distributed according to p_\theta (not the ground truth nor the empirical distribution). Equation (7) then appears to be a mix of both. * 'becomes the conditional p_\theta(t|x) where q(x) represents': where is q in p_\theta(t|x)?
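As a concrete reference for the distinction the reviewer raises, here is a sketch in generic notation (not necessarily the paper's exact equations (1) and (7)) of the two Fisher-information-style matrices at issue: the metric for a joint model p_\theta(z), and the metric for a conditional model p_\theta(y|x) with inputs drawn from a data distribution q(x):

\[
G_{\text{joint}}(\theta) = \mathbb{E}_{z \sim p_\theta}\big[\nabla_\theta \log p_\theta(z)\,\nabla_\theta \log p_\theta(z)^\top\big],
\qquad
G_{\text{cond}}(\theta) = \mathbb{E}_{x \sim q}\,\mathbb{E}_{y \sim p_\theta(y|x)}\big[\nabla_\theta \log p_\theta(y|x)\,\nabla_\theta \log p_\theta(y|x)^\top\big].
\]

Mixing q(x), the empirical targets, and p_\theta inside these expectations yields genuinely different matrices, which is why the reviewer asks the paper to state explicitly which distribution each equation averages over.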
_MfuTMZ4u7mWN
review
1,364,251,020,000
jbLdjjxPd-b2l
[ "everyone" ]
[ "anonymous reviewer 6a77" ]
ICLR.cc/2013/conference
2013
review: Clearly, the revised paper is much better than the initial paper, to the extent that it should be considered a different paper that shares its title with the initial one. The ICLR committee will have to make a policy decision about this. The revised paper is poorly summarized by its abstract, because it does not present things in the same order as the abstract. The paper contains the following: * A derivation of natural gradient that does not depend on information geometry. This derivation is in fact well known (and therefore not new.) * A clear discussion of which distribution should be used to compute the natural gradient Riemannian tensor (equation 8). This is not new, but it is explained nicely and clearly. * An illustration of what happens when one mixes these distributions. This is not surprising, but nicely illustrates the point that many so-called 'natural gradient' algorithms are not the same as Amari's natural gradient. * A more specific discussion of the difference between Le Roux's 'natural gradient' and the real natural gradient, with useful intuitions. This is a good clarification. * A more specific discussion of how many second-order algorithms using the Gauss-Newton approximation are related to some so-called natural gradient algorithms which are not the true natural gradient. Things get confusing because the authors seem committed to calling all these algorithms 'natural gradient' despite their own evidence. In conclusion, although novelty is limited, the paper clears up some of the confusion surrounding natural gradient. I simply wish the authors had taken their own hint and proposed banning the words 'natural gradient' to describe things that are not Amari's natural gradient but are simply inspired by it.
j5Y_3gJAHK3nP
comment
1,363,217,040,000
26sD6qgwF8Vob
[ "everyone" ]
[ "Razvan Pascanu, Yoshua Bengio" ]
ICLR.cc/2013/conference
2013
reply: We've made drastic changes to the paper, which should be visible starting Thu, 14 Mar 2013 00:00:00 GMT. We have also made the paper available at http://www-etud.iro.umontreal.ca/~pascanur/papers/ICLR_natural_gradient.pdf * Regarding the relationship between Hessian-Free and natural gradient, it stems from the fact that algebraic manipulations of the extended Gauss-Newton approximation of the Hessian result in the natural gradient metric. Due to the space limit (the paper is already quite lengthy) we do not provide all intermediate steps of this algebraic manipulation, but we do provide all the crucial ones. Both natural gradient and Hessian-Free have the same form (the gradient is multiplied by the inverse of a matrix before being subtracted from theta, potentially times some scalar learning rate). Therefore, showing that both methods use the same matrix is sufficient to show that HF can be interpreted as natural gradient. * The degeneracies of theta were meant to suggest only that we are dealing with a lower-dimensional manifold. We completely agree, however, that the text was confusing; it was completely re-written to avoid that potential confusion. In the rewrite we've removed this detail, as it is not crucial for the paper. * The relations at the end of page 2 do hold in general, as the expectation is taken over z (a detail that we now specify). We are not using a fully Bayesian framework, i.e. theta is not a random variable in the text. * Equation 15 was corrected. When computing the Hessian, we compute the derivatives with respect to `r`.
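To make the 'same form' argument concrete, here is a minimal numpy sketch of the shared update structure (an illustration only, not the authors' Theano implementation): both natural gradient descent and Hessian-Free apply theta <- theta - lr * M^{-1} grad, differing in whether M is the Fisher metric or the damped extended Gauss-Newton matrix and in how the linear system is solved (conjugate gradients in HF). The callable `metric_vector_product` is a hypothetical stand-in for whichever M is used.

```python
import numpy as np

def preconditioned_step(theta, grad, metric_vector_product, damping=1e-4, lr=1.0):
    """Update shared by natural gradient and Hessian-Free:
    theta <- theta - lr * (M + damping * I)^{-1} grad, where M is either
    the Fisher metric or the extended Gauss-Newton matrix. M is densified
    here for clarity; practical implementations use only matrix-vector
    products (e.g., inside conjugate gradients)."""
    d = grad.size
    M = np.column_stack([metric_vector_product(e) for e in np.eye(d)])
    direction = np.linalg.solve(M + damping * np.eye(d), grad)
    return theta - lr * direction
```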
wiYbiqRc-GqXO
review
1,362,084,780,000
jbLdjjxPd-b2l
[ "everyone" ]
[ "Razvan Pascanu, Yoshua Bengio" ]
ICLR.cc/2013/conference
2013
review: Thank you for your comments. We will soon push a revision to fix all the grammar and language mistakes you pointed out. Regarding equation (1) and equation (7), \mathbf{G} represents the Fisher Information Matrix form of the metric obtained when you consider, respectively, p(x) vs p(y|x). Equation (1) is introduced in section 1, which presents the generic case of a family of distributions p_{\theta}(x). From section 3 onwards we adapt these equations specifically to neural networks, where, from a probabilistic point of view, we are dealing with conditional probabilities p(y|x). Could you please be more specific regarding the elements of the paper that you found confusing? We would like to reformulate the conclusion to make our contributions clearer. The novel points we are trying to make are: (1) Hessian-Free optimization and Krylov Subspace Descent, as long as they use the Gauss-Newton approximation of the Hessian, can be understood as Natural Gradient, because the Gauss-Newton matrix matches the metric of Natural Gradient (and the rest of the pipeline is the same). (2) Possibly due to the regularization effect discussed in (6), we hypothesize, and support with empirical results, that Natural Gradient helps deal with the early overfitting problem introduced by Erhan et al. This early overfitting problem might be a serious issue when trying to scale neural networks to large models with very large datasets. (3) We make the observation that, since the targets get integrated out when computing the metric of Natural Gradient, one can use unlabeled data to improve the accuracy of this metric, which dictates the speed with which we move in parameter space. (4) The Natural Gradient introduced by Nicolas Le Roux et al. differs fundamentally from Amari's. It is not just a different justification, but a different algorithm that might behave differently in practice. (5) Natural Gradient is different from a second-order method because, while it uses second-order information, it is not the second-order information of the error function but of the KL divergence (which is quite different). For example, it is always positive definite by construction, while the curvature is not. Also, the curvature of the KL is not the curvature of the same surface throughout learning: at each step we have a different KL divergence and hence a different surface, while for second-order methods the error surface stays constant throughout learning. The second distinction is that Natural Gradient is naturally suited for online learning, provided that we have sufficient statistics to estimate the KL divergence (the metric). Theoretically, second-order methods are meant to be batch methods (because the Hessian is supposed to be computed over the whole dataset), whereas the Natural Gradient metric only depends on the model. (6) The standard understanding of Natural Gradient is that, by imposing that the KL divergence between p_{\theta}(y|x) and p_{\theta+\delta}(y|x) be constant, it ensures that some amount of progress is made at every step and hence it converges faster. We add that it also ensures that you do not move too far in some direction (which would make the KL change quickly), hence acting as a regularizer. Regarding the paper not being formal enough, we often find that a dry mathematical treatment of the problem does not help improve understanding or eliminate confusion. We believe that we were formal enough when showing the equivalence between the generalized Gauss-Newton matrix and Amari's metric.
Point (6) of our conclusion is a hypothesis which we validate empirically; we do not have a formal treatment for it.
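A compressed sketch of the equivalence claimed in point (1), under the usual assumption (stated here, not in the reply) that the network output r(x, \theta) gives the natural parameters of an exponential-family conditional p(y|r) and the loss is -\log p(y|r):

\[
\underbrace{\mathbb{E}_{x}\Big[J_r^\top\,\tfrac{\partial^2(-\log p(y|r))}{\partial r\,\partial r^\top}\,J_r\Big]}_{\text{extended Gauss-Newton}}
= \mathbb{E}_{x}\Big[J_r^\top\,\mathbb{E}_{y \sim p(y|r)}\big[\nabla_r \log p\,\nabla_r \log p^\top\big]\,J_r\Big]
= \underbrace{\mathbb{E}_{x}\,\mathbb{E}_{y \sim p_\theta(y|x)}\big[\nabla_\theta \log p_\theta(y|x)\,\nabla_\theta \log p_\theta(y|x)^\top\big]}_{\text{Fisher metric}},
\]

where \(J_r = \partial r/\partial\theta\). The middle equality uses the fact that, for an exponential family in its natural parameters, the Hessian of \(-\log p\) with respect to r does not depend on y and equals the Fisher information with respect to r; the outer equalities are the chain rule.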
26sD6qgwF8Vob
review
1,362,404,760,000
jbLdjjxPd-b2l
[ "everyone" ]
[ "anonymous reviewer 1939" ]
ICLR.cc/2013/conference
2013
title: review of Natural Gradient Revisited review: This paper attempts to reconcile several definitions of the natural gradient, and to connect the Gauss-Newton approximation of the Hessian used in Hessian free optimization to the metric used in natural gradient descent. Understanding the geometry of objective functions, and the geometry of the space they live in, is crucial for model training, and is arguably the greatest bottleneck in training deep or otherwise complex models. However, this paper makes a confused presentation of the underlying ideas, and does not succeed in clearly tying them together. More specific comments: In the second (and third) paragraph of section 2, the natural gradient is discussed as if it stems from degeneracies in theta, where multiple theta values correspond to the same distribution p. This is inaccurate. Degeneracies in theta have nothing to do with the natural gradient. This may stem from a misinterpretation of the role of symmetries in natural gradient derivations? Symmetries are frequently used in the derivation of the natural gradient, in that the metric is frequently chosen such that it is invariant to symmetries in the parameter space. However, the metric being invariant to symmetries does not mean that p is similarly invariant, and there are natural gradient applications where symmetries aren't used at all. (You might find The Natural Gradient by Analogy to Signal Whitening, Sohl-Dickstein, http://arxiv.org/abs/1205.1828 a more straightforward introduction to the natural gradient.) At the end of page 2, between equations 2 and 3, you introduce relations which certainly don't hold in general. At the least you should give the assumptions you're using. (also, notationally, it's not clear what you're taking the expectation over -- z? theta?) Equation 15 doesn't make sense. As written, the matrices are the wrong shape. Should the inner second derivative be in terms of r instead of theta? The text has minor English difficulties, and could benefit from a grammar and word choice editing pass. I stopped marking these pretty early on, but here are some specific suggested edits: 'two-folded' -> 'two-fold' 'framework of natural gradient' -> 'framework of the natural gradient' 'gradient protects about' -> 'gradient protects against' 'worrysome' -> 'worrisome' 'even though is called the same' -> 'despite the shared name' 'differ' -> 'defer' 'get map' -> 'get mapped'
0mPCmj67CX0Ti
review
1,364,262,660,000
jbLdjjxPd-b2l
[ "everyone" ]
[ "anonymous reviewer 1939" ]
ICLR.cc/2013/conference
2013
review: As the previous reviewer states, there are very large improvements in the paper. Clarity and mathematical precision are both greatly increased, and reading it now gives useful insight into the relationship between different perspectives and definitions of the natural gradient, and Hessian-based methods. Note, I did not check the math in Section 7 upon this rereading. It's misleading to suggest that the authors' derivation in terms of minimizing the objective on a fixed-KL-divergence shell around the current location (approximated as a fixed value of the second-order expansion of the Fisher information) is novel. This is something that Amari also did (see for instance the proof of Theorem 1 on page 4 in Amari, S.-I. (1998). Natural Gradient Works Efficiently in Learning. Neural Computation, 10(2), 251–276. doi:10.1162/089976698300017746). This claim should be removed. It could still use an editing pass, and especially improvements in the figure captions, but these are nits as opposed to show-stoppers (see specific comments below). This is a much nicer paper. My only significant remaining concerns are in terms of the Lagrange-multiplier derivation, and in terms of precedent setting. It may be that it's a dangerous precedent to set (and promises to make much more work for future reviewers!) to base acceptance decisions on rewritten manuscripts that differ significantly from the version initially submitted. So -- totally an editorial decision. p. 2, footnote 2 -- 3rd expression should still start with sum_z 'emphesis' -> 'emphasize' 'to speed up' -> 'to speed up computations' 'train error' -> 'training error' Figure 2 -- label panels (a) and (b) and reference them as such. 'KL, different training minibatch' appears to be missing from the figure. In latex, use ` for open quote and ' for close quote. Capitalize KL. So, for instance, `KL, unlabeled' Figure 3 -- Caption has significant differences from the figure. In most places where it occurs, the text should refer to 'the natural gradient' rather than 'natural gradient'. 'equation (24) from section 3' -- there is no equation 24 in section 3. Equation and Section should be capitalized.
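For reference, the fixed-KL-shell derivation that both this comment and the paper appeal to (Amari, 1998, Theorem 1), stated schematically: minimize the first-order change in the objective subject to a second-order KL budget,

\[
\min_{\Delta\theta}\ \nabla L(\theta)^\top \Delta\theta
\quad \text{s.t.} \quad
\mathrm{KL}\big(p_\theta \,\|\, p_{\theta+\Delta\theta}\big) \approx \tfrac{1}{2}\,\Delta\theta^\top F(\theta)\,\Delta\theta = \varepsilon .
\]

Setting the gradient of the Lagrangian \( \nabla L^\top \Delta\theta + \lambda\big(\tfrac{1}{2}\Delta\theta^\top F \Delta\theta - \varepsilon\big) \) with respect to \(\Delta\theta\) to zero gives \( \Delta\theta \propto -F^{-1}\nabla L \), i.e. the natural gradient direction.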
2LzIDWSabfLe9
Herded Gibbs Sampling
[ "Luke Bornn", "Yutian Chen", "Nando de Freitas", "Maya Baya", "Jing Fang", "Max Welling" ]
The Gibbs sampler is one of the most popular algorithms for inference in statistical models. In this paper, we introduce a herding variant of this algorithm, called herded Gibbs, that is entirely deterministic. We prove that herded Gibbs has an $O(1/T)$ convergence rate for models with independent variables and for fully connected probabilistic graphical models. Herded Gibbs is shown to outperform Gibbs in the tasks of image denoising with MRFs and named entity recognition with CRFs. However, the convergence of herded Gibbs for sparsely connected probabilistic graphical models is still an open problem.
[ "gibbs", "probabilistic graphical models", "gibbs sampler", "popular algorithms", "inference", "statistical models", "herding variant", "algorithm", "deterministic", "convergence rate" ]
https://openreview.net/pdf?id=2LzIDWSabfLe9
https://openreview.net/forum?id=2LzIDWSabfLe9
_ia0VPOP0SVPj
review
1,362,189,120,000
2LzIDWSabfLe9
[ "everyone" ]
[ "anonymous reviewer 600b" ]
ICLR.cc/2013/conference
2013
title: review of Herded Gibbs Sampling review: Herding is a relatively recent idea [23]: create a dynamical system that evolves a vector, which when time-averaged will match desired expectations. Originally it was designed as a novel means to generalize from observed data with measured moments. In this work, the conditional distributions of a Gibbs sampler are matched, with the hope of sampling from arbitrary target distributions. As reviewed by the paper itself, this work joins only a small number of recent papers that try to simulate arbitrary target distributions using a deterministic dynamical system. Compared to [19] this work potentially works better in some situations: O(1/T) convergence can happen, whereas [19] seems to emulate a conventional Gibbs sampler with O(1/T^2) convergence. However, the current work seems to be more costly in memory and less generally applicable than Gibbs sampling, because it needs to track weights for all possible conditional distributions (all possible neighbourhood settings for each variable) in some cases. The comparison to [7] is less clear, as that is motivated by O(1/T) QMC rates, but I don't know if/how it would compare to the current work. (No comparison is given.) One of the features of Markov chain Monte Carlo methods, such as Gibbs sampling, is that it represents _joint_ distributions, through examples. Unlike variational approximation methods, no simple form of the distribution is assumed, but Monte Carlo sampling may be a less efficient way to get marginal distributions. For example, Kuss and Rasmussen http://www.jmlr.org/papers/volume6/kuss05a/kuss05a.pdf demonstrated that EP gives exceedingly accurate posterior marginals with Gaussian process classifiers, even though its joint approximation, a Gaussian, is obviously wrong. The experiment in section 4.1 suggests that the herded Gibbs procedure is prepared to move through low-probability joint settings more often than it 'should', but gets better marginals as a result. The experiment in section 4.2 also depends only on low-dimensional marginals (as many applications do). The experiment in section 4.3 involves an optimization task, and I'm not sure how herded Gibbs was applied (also with annealing? The most probable sample chosen? ...). This is an interesting, novel paper that appears technically sound. The most time-consuming research contributions are the proofs in the appendices, which seem plausible, but I have not carefully checked them. As discussed in the conclusion, there is a gap between the applicability of this theory and the applicability of the methods. But there is plenty in this paper to suggest that herded sampling for generic target distributions is an interesting direction. As requested, a list of pros and cons: Pros: - a novel approach to sampling from high-dimensional distributions, an area of great interest. - Good combination of toy experiments up to a fairly realistic, but harder-to-understand, demonstration. - Raises many open questions: could have impact within the community. - Has the potential to be both general and fast to converge: in the long term could have impact outside the community. Cons: - Should possibly compare to Owen's work on QMC and MCMC, although there may be no interesting comparison to be made. - The most interesting example (NER, section 4.3) is slightly hard to understand. An extra sentence or two could help greatly to state how the sampler's output is used. - Code could be provided. Very minor: paragraph 3 of section 5 should be rewritten.
It's wordy ('We should mention...We have indeed studied this') and uses jargon that's explained parenthetically in the final sentence but not in the first two.
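Since the review leans on the basic herding recursion ('evolve a vector whose time average matches desired expectations'), here is a minimal Python sketch of herding for a single binary variable; the paper's herded Gibbs applies a herding update of this kind per conditional (per Markov-blanket configuration), but its exact thresholding may differ, so treat this as generic herding rather than the authors' precise algorithm.

```python
import numpy as np

def herd_binary(pi, T):
    """Deterministic herding of one binary variable with target mean pi.
    The running average of the produced sequence matches pi with O(1/T)
    error, versus O(1/sqrt(T)) for i.i.d. Monte Carlo samples."""
    w, xs = 0.0, []
    for _ in range(T):
        x = 1 if w > 0 else 0   # greedy choice maximizing w * x over {0, 1}
        w += pi - x             # accumulate the residual between target and sample
        xs.append(x)
    return np.array(xs)

samples = herd_binary(pi=0.3, T=1000)
print(abs(samples.mean() - 0.3))  # error bounded by max|w| / T
```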
-wDkwa3mkYwTa
review
1,362,382,860,000
2LzIDWSabfLe9
[ "everyone" ]
[ "anonymous reviewer b2c5" ]
ICLR.cc/2013/conference
2013
title: review of Herded Gibbs Sampling review: This paper shows how Herding, a deterministic moment-matching algorithm, can be used to sample from un-normalized probabilities by applying Herding to the full-conditional distributions. The paper presents (1) theoretical proof of O(1/T) convergence in the case of empty and fully-connected graphical models, as well as (2) empirical evidence showing that Herded Gibbs sampling outperforms both Gibbs and mean-field for 2D structured MRFs and chain-structured CRFs. This improved performance, however, comes at the price of memory, which is exponential in the maximum in-degree of the graph, thus making the method best suited to sparsely connected graphical models. While the application of Herding to sample from joint distributions through its conditionals may not appear exciting at first glance, I believe this represents a novel research direction with potentially high impact. A 1/T convergence rate would be a boon in many domains of application, which tend to rely heavily on Gibbs sampling, an old and often brittle sampling algorithm. The algorithm's exponential memory requirements are somewhat troubling. However, I believe this can be overlooked given the early state of the research and the fact that sparse graphical models represent a realistic (and immediate) domain of application. The paper is well written and clear. I unfortunately cannot comment on the correctness of the convergence proofs (which appear in the Appendix), as those proved to be too time-consuming for me to make a professional judgement on. Hopefully the open review process of ICLR will help weed out any potential issues therein. PROS: * A novel sampling algorithm with a faster convergence rate than MCMC methods. * Another milestone for Herding: sampling from un-normalized probabilities (with tractable conditionals). * Combination of theoretical proofs (when available) and empirical evidence. * Experiments are thorough and span common domains of application: image denoising through MRFs and Named Entity Recognition through chain CRFs. CONS: * Convergence proofs hold only for less-than-practical graph structures. * Exponential memory requirements make Herded Gibbs sampling impractical for large families of graphical models, including Boltzmann Machines.
55Sf5h7-bs1wC
review
1,363,408,140,000
2LzIDWSabfLe9
[ "everyone" ]
[ "Luke Bornn, Yutian Chen, Nando de Freitas, Mareija Eskelin, Jing Fang, Max Welling" ]
ICLR.cc/2013/conference
2013
review: Taking the reviewers' comments into consideration, and after many useful email exchanges with experts in the field including Prof Art Owen, we have prepared a newer version of the report. If it is not on arXiv by the time you read this, you can find it at http://www.cs.ubc.ca/~nando/papers/herding_ICLR.pdf Reviewer Anonymous 600b: We have made the code available and expanded our description of the CRF for NER section. An empirical comparison with the work of Art Owen and colleagues was not possible given the short time window of this week. However, we engaged in many discussions with Art, and he not only added his comments here in open review, but also provided many useful comments via email. One difference between herding and his approach is that herding is greedy (that is, the random sequence does not need to be constructed beforehand). Art also pointed us to the very interesting work of James Propp and colleagues on Rotor-Router models. Please see our comments in the last paragraph of the Conclusions and Future Work section of the new version of the paper. Prof Propp has also begun to look at the problem of establishing connections between herding and his work. Reviewer Anonymous cf4e: For marginals, the convergence rate of herded Gibbs is also O(1/T) because marginal probabilities are linear functions of the joint distribution. However, in practice, we observe very rapid convergence results for the marginals, so we might be able to strengthen these results in the future. Reviewer Anonymous 2d06: We have added more detail to the CRF section and made the code available so as to ensure that our results are reproducible. We thank all reviewers for excellent comments. This openreview discussion has been extremely useful and engaging. Many thanks, The herded Gibbs team
kk_CoX43Cfks-
review
1,363,761,180,000
2LzIDWSabfLe9
[ "everyone" ]
[ "Maya Baya" ]
ICLR.cc/2013/conference
2013
review: The updated Herded Gibbs report is now available on arxiv at the following url: http://arxiv.org/abs/1301.4168v2 The herded Gibbs team.
OOw6hkBUq_fEr
review
1,362,793,920,000
2LzIDWSabfLe9
[ "everyone" ]
[ "Maya Baya" ]
ICLR.cc/2013/conference
2013
review: Dear reviewers, Thank you for the encouraging reviews and useful feedback. We will soon address your questions and comments. To this end, we would like to begin by announcing that the code is available online in both Matlab and Python, at: http://www.mareija.ca/research/code/ This code contains both the image denoising experiments and the two-node example; however, we have omitted the NER experiment because that code is highly dependent on the Stanford NER software. Nonetheless, upon request, we would be happy to share this more complex code too. A comprehensive reply and a newer version of the arXiv paper addressing your concerns will appear soon. In the meantime, we look forward to further comments. The herded Gibbs team.
wy2cwQ8QPVybX
review
1,362,377,040,000
2LzIDWSabfLe9
[ "everyone" ]
[ "anonymous reviewer cf4e" ]
ICLR.cc/2013/conference
2013
title: review of Herded Gibbs Sampling review: Herding has an advantage over standard Monte Carlo methods, in that it estimates some statistics quickly, while Monte Carlo methods estimate all statistics but more slowly. The paper presents a very interesting but impractical attempt to generalize Herding to Gibbs sampling by having a 'herding chain' for each configuration of the Markov blanket of the variables. In addition to the exponential memory complexity, it seems like the method should have an exponentially large constant hidden in the O(1/T) convergence rate: given that there are many herding chains, each herding parameter would be updated extremely infrequently, which would result in an exponential slowdown of the herding effect and thus increase the constant in O(1/T). And indeed, lambda from theorem 2 has a 2^N factor. The theorem is interesting in that it shows eventual O(1/T) convergence in full distribution: that is, the empirical joint distribution eventually converges to the full joint distribution. However, in practice we care about estimating marginals and not joints. Is it possible to show fast convergence on every subset of the marginals, or even on the singleton variables? Can it be done with a favourable constant? Can such a result be derived from the theorems presented in the paper? Results about marginals would be of more practical interest. The experiments show that the idea works in principle, which is good. In its current form, the paper presents a reasonable idea but is incomplete, since the idea is too impractical. It would be great if the paper explored a practical implementation of Gibbs herding, even an approximate one. For example, would it be possible to represent w_{X_{Ni}} with a big linear function A X_{Ni} for all X and to herd A, instead of slowly herding the various w_{X_{Ni}}? Would it work? Would it do something sensible on the experiments? Can it be proved to work in a special case? In conclusion, the paper is very interesting and should be accepted. Its weakness is the general impracticality of the method.
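To make the reviewer's memory concern concrete, here is a hedged Python sketch of a herded-Gibbs sweep that keeps one herding weight per (variable, Markov-blanket configuration); `cond_prob` is a hypothetical callable returning p(X_i = 1 | x_{N(i)}), and the thresholding detail may differ from the paper's exact algorithm.

```python
from collections import defaultdict

def herded_gibbs_sweep(x, neighbors, cond_prob, weights):
    """One deterministic sweep over binary variables x[0..n-1].
    `weights` stores a herding weight for every (variable, Markov-blanket
    configuration) encountered -- the source of the worst-case memory cost
    that is exponential in the maximum in-degree."""
    for i in range(len(x)):
        blanket = tuple(x[j] for j in neighbors[i])   # current neighbourhood setting
        p = cond_prob(i, x)                           # p(X_i = 1 | blanket)
        w = weights[(i, blanket)]
        x[i] = 1 if w > 0 else 0                      # same herding rule as for a single variable
        weights[(i, blanket)] = w + p - x[i]          # residual update for this chain only
    return x

weights = defaultdict(float)   # one entry appears per visited configuration
```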
PHnWHNpf5bHUO
review
1,363,212,720,000
2LzIDWSabfLe9
[ "everyone" ]
[ "Art Owen" ]
ICLR.cc/2013/conference
2013
review: Nando asked me for some comments and then he thought I should share them on openreview. So here they are. There are a few other efforts at replacing the IID numbers which drive MCMC. It would be interesting to explore the connections among them. Here is a sample: Jim Propp and others have been working on rotor-routers for quite a while. Here is one link: http://front.math.ucdavis.edu/0904.4507 I've been working with several people on replacing IID numbers by completely uniformly distributed (CUD) ones. This is like taking a small random number generator and using it all up. See this thesis by Su Chen for the latest results, and lots of references: www-stat.stanford.edu/~owen/students/SuChenThesis.pdf or for earlier work, this thesis by Seth Tribble: www-stat.stanford.edu/~owen/students/SethTribbleThesis.pdf The oldest papers in that line of work go back to the late 1960s and early 1970s, by Chentsov and also by Sobol'. There is some very recent work by Dick, Rudolf and Zhu: http://arxiv.org/abs/1303.2423 that is similar to herding. The idea there is to make a follow-up sample of values that fill in holes left after a first sampling. Not quite as close to this work but still related is the array-RQMC work of Pierre L'Ecuyer and others. See for instance: www.iro.umontreal.ca/~lecuyer/myftp/papers/mcqmc08-array.pdf
rafTmpD60FrZR
review
1,362,497,280,000
2LzIDWSabfLe9
[ "everyone" ]
[ "anonymous reviewer 2d06" ]
ICLR.cc/2013/conference
2013
title: review of Herded Gibbs Sampling review: The paper presents a deterministic 'sampling' algorithm for unnormalized distributions on discrete variables, similar to Gibbs sampling, which operates by matching the statistics of the conditional distribution of each node given its Markov blanket. Proofs are provided for the independent and fully-connected cases, with an impressive improvement in asymptotic convergence rate to O(1/T) over the O(1/sqrt(T)) available from Monte Carlo methods in the fully-connected case. Experimental results demonstrate herded Gibbs outperforming traditional Gibbs sampling in the sparsely connected case, a regime unfortunately not addressed by the provided proofs. The algorithm's Achilles heel is its prohibitive worst-case memory complexity, scaling exponentially with the maximal node degree of the network. The paper is compelling for its demonstration that a conceptually simple deterministic procedure can (in some cases at least) greatly outperform Gibbs sampling, one of the traditional workhorses of Monte Carlo inference, both asymptotically and empirically. Though the procedure in its current form is of little use in large networks of even moderate edge density, the ubiquity of application domains involving very sparse interaction graphs makes this already an important contribution. The proofs appear to be reasonable upon cursory examination, but I have not yet verified them in detail. PROS * A lucidly explained idea that gives rise to somewhat surprising theoretical results. * Proofs of convergence as well as experimental investigations. * A step towards practical herding algorithms for dense unnormalized models, and an important milestone for the literature on herding in general. CONS * An (acknowledged) disconnect between theory and practice -- the available proofs apply only in cases that are uninteresting or impractical. * The experiments in 4.3 mention NER with skip-chain CRFs, where Viterbi is not tractable, but resort to experiments with chain CRFs instead. An additional experiment utilizing skip-chain CRFs (a more challenging inference task, not amenable to Viterbi) would have been more compelling, though I realize space is at a premium. Minor concerns: - The precise dimensionality of the image denoising problem is, as far as I can tell, never specified. This would be nice to know. - More details as to how the herded Gibbs procedure maps onto the point estimate provided as output on the NER task would be helpful -- presumably the single highest-probability sample is used?
SSnY462CYz1Cu
Knowledge Matters: Importance of Prior Information for Optimization
[ "Çağlar Gülçehre", "Yoshua Bengio" ]
We explore the effect of introducing prior information into the intermediate level of neural networks for a learning task on which all of the state-of-the-art machine learning algorithms tested failed to learn. We motivate our work from the hypothesis that humans learn such intermediate concepts from other individuals via a form of supervision or guidance using a curriculum. The experiments we have conducted provide positive evidence in favor of this hypothesis. In our experiments, a two-tiered MLP architecture is trained on a dataset with 64x64 binary input images, each image containing three sprites. The final task is to decide whether all the sprites are the same or one of them is different. Sprites are pentomino tetris shapes, and they are placed in an image at different locations using scaling and rotation transformations. The first part of the two-tiered MLP is pre-trained with intermediate-level targets being the presence of sprites at each location, while the second part takes the output of the first part as input and predicts the final task's target binary event. The two-tiered MLP architecture, with a few tens of thousands of examples, was able to learn the task perfectly, whereas all other algorithms (including unsupervised pre-training, but also traditional algorithms like SVMs, decision trees and boosting) perform no better than chance. We hypothesize that the optimization difficulty involved when the intermediate pre-training is not performed is due to the {\em composition} of two highly non-linear tasks. Our findings are also consistent with hypotheses on cultural learning inspired by the observations of optimization problems with deep learning, presumably because of effective local minima.
[ "sprites", "prior information", "importance", "hypothesis", "experiments", "mlp architecture", "image", "final task", "first part", "knowledge matters" ]
https://openreview.net/pdf?id=SSnY462CYz1Cu
https://openreview.net/forum?id=SSnY462CYz1Cu
wX4ew9_0vK2CA
comment
1,363,246,260,000
6s7Ys8Q5JbfHZ
[ "everyone" ]
[ "Çağlar Gülçehre" ]
ICLR.cc/2013/conference
2013
reply: > The exposition on curriculum learning could be condensed. (minor) The demonstrative problem (sprite counting) is a visual perception problem and therefore carries with it the biases of our own perception and inferred strategies. Maybe the overall argument might be bolstered by the addition of a more abstract example? Yes, that's right. We have performed an experiment in which all the possible bit configurations of a single patch were enumerated (there are 80 of them, plus the special case where no object is present). With this input representation, there is no prior visual knowledge that can help a human learner. The task is strangely easier but still difficult: we achieved 25% test error up to now, i.e., better than chance (i.e. 50%) but far from the less than 1% error of the IKGNN. In future work, we will measure how well humans learn that task. > why so many image regions? Why use an 8x8 grid? Won't 3 regions suffice to make the point? Or is this related to the complexity of the problem? A related question: how are the results affected by the number of these regions? Maybe some reduced tests at the extremes would be interesting, i.e. with only 3 regions, and 32 (you have 64 already)? The current architecture is trained on 64=8x8 patches, with sprites centered inside the patches. We have also tried to train the IKGNN on 16x16 patches, corresponding to 4x4=16 patches in a same-size image, where we allowed the objects to be randomly translated inside the patch, but the IKGNN couldn't learn the task; probably because of the translation, the P1NN would require a convolutional architecture. We have also conducted experiments with a tetromino dataset (which is not in this paper) that has 16x16 images, with objects placed in 4x4 patches, fewer variations and 7 sprite categories. An ordinary MLP with 3 tanh hidden layers was able to learn this task after very long training. We trained the structured MLP on the 3 patches (8x8) that have a sprite in them for 120 training epochs with 100k training examples. The best result we could get with that setting was 37 percent error on the training set, and the SMLP was still at chance on the test set. In a nutshell, reducing the number of regions and centering each sprite inside its region reduces the complexity of the problem, and yes, if you reduce the complexity of the problem enough, you start seeing models that can learn the task, with ordinary MLPs learning it after very long training on a large training set. > In the networks that solve the task, are the weights that are learned symmetric over the image regions? i.e. are these weights identical (maybe up to some scaling and sign flip). Is there anything you have determined about the structure of the learned second layer of the IKGNN? In the first level, we trained exactly the same MLP on each patch, while the second-level MLP is trained on the standardized softmax probabilities of the first level. Hence the weights are shared across patches in the first level. The first level of the IKGNN (P1NN) has translation equivariance, but the second level (P2NN) is fully connected and does not have any prior knowledge of symmetries. > Furthermore, what about including a 'weight sharing' constraint in the general MLP model (the one that does not solve the problem, but has the same structure as the one that does)? Would including this constraint change the solution? (the constraint is already in the P1NN, but what about adding it into the P2NN?) Another way to ask this is: Is enforcing translation invariance in the network sufficient to achieve good performance, or do we need to specifically train for the sprite discrimination? Indeed, it would be possible to use a convolutional architecture (with pooling, because the output is for the whole image) for the second level as well. We have not tried that yet, but we agree that it would be an interesting possibility and we certainly plan to try it out. Up to now, though, we have found that enforcing translation equivariance (in the lower level) was important but not sufficient to solve the problem. Indeed, the poor result obtained by the structured MLP demonstrates that. > Do we know if humans can solve this problem 'in a glance?': flashing the image for a small amount of time ~100-200msecs. Either with or without a mask? It seems that the networks you have derived are solving such a problem 'in a glance.' We didn't conduct any trials for measuring response times and learning speed of human subjects on this dataset. However, we agree that such a study would be an important follow-up to this paper. > Is there an argument to be made that the sequential nature of language allows humans to solve this task? Even the way you formulate the problem suggests this sequential process: 'are all of the sprites in the image the same?': in other words, 'find the sprites, then decide if they are the same.' When I imagine solving this problem myself, I imagine performing a more sequential process: look at one sprite, then the next (is it the same? if it is): look at the next sprite (is it the same?). I know that we can consider this problem to be a concrete example of a more abstract learning problem, but it's not clear if humans can solve such problems without sequential processing. Anyway, this is not a criticism, per se, just food for thought. Yes, we agree that the essence of the task requires sequential processing, and you can find this sequential processing in our IKGNN architecture as well (and in deep architectures in general): P1NN looks at each patch and identifies the type of object inside that patch, and P2NN decides whether the objects identified by P1NN include a different one. What is less clear is whether humans solve such problems by re-using the same 'hardware' (as in a recurrent net) or by composing different computations (e.g., associated with different areas in the brain). There are a few studies that investigate sequential learning in non-human primates which you might find interesting [3]. [3] Conway, Christopher M., and Morten H. Christiansen. 'Sequential learning in non-human primates.' Trends in Cognitive Sciences 5, no. 12 (2001): 539-546.
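To make the architecture described in this reply easier to picture, here is a minimal forward-pass sketch of a two-level network of this kind: a patch-level MLP with weights shared across the 64 patches (P1NN) whose standardized softmax outputs are concatenated and fed to a fully connected second-level MLP (P2NN). The layer sizes, the standardization step and the activation choices are illustrative assumptions and do not reproduce the authors' exact configuration.

```python
# A minimal forward-pass sketch of the shared-patch-MLP + fully connected
# second level described above. Layer sizes, the standardization step and the
# activations are assumptions for illustration, not the paper's exact setup.
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# P1NN: one hidden-layer MLP shared across all 64 patches (weight sharing).
n_in, n_hid1, n_out1 = 8 * 8, 512, 11          # 10 sprite classes + "no object"
W1, b1 = 0.01 * rng.standard_normal((n_in, n_hid1)), np.zeros(n_hid1)
V1, c1 = 0.01 * rng.standard_normal((n_hid1, n_out1)), np.zeros(n_out1)

# P2NN: fully connected MLP on the concatenated, standardized patch outputs.
n_in2, n_hid2 = 64 * n_out1, 1024
W2, b2 = 0.01 * rng.standard_normal((n_in2, n_hid2)), np.zeros(n_hid2)
v2, c2 = 0.01 * rng.standard_normal(n_hid2), 0.0

def forward(image):
    """image: (64, 64) binary array -> probability that one sprite differs."""
    # cut the image into a 8x8 grid of 8x8 patches, flattened to 64 vectors
    patches = image.reshape(8, 8, 8, 8).transpose(0, 2, 1, 3).reshape(64, 64)
    h = np.tanh(patches @ W1 + b1)             # same weights for every patch
    p = softmax(h @ V1 + c1)                   # per-patch class probabilities
    z = (p - p.mean(axis=0)) / (p.std(axis=0) + 1e-8)   # standardization
    h2 = np.maximum(0.0, z.ravel() @ W2 + b2)  # rectifier hidden layer (P2NN)
    return 1.0 / (1.0 + np.exp(-(h2 @ v2 + c2)))

print(forward(rng.integers(0, 2, size=(64, 64)).astype(float)))
```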
SSnY462CYz1Cu
Knowledge Matters: Importance of Prior Information for Optimization
[ "Çağlar Gülçehre", "Yoshua Bengio" ]
We explore the effect of introducing prior information into the intermediate level of neural networks for a learning task on which all of the state-of-the-art machine learning algorithms we tested failed to learn. We motivate our work from the hypothesis that humans learn such intermediate concepts from other individuals via a form of supervision or guidance using a curriculum. The experiments we have conducted provide positive evidence in favor of this hypothesis. In our experiments, a two-tiered MLP architecture is trained on a dataset of 64x64 binary input images, each image containing three sprites. The final task is to decide whether all the sprites are the same or one of them is different. Sprites are pentomino tetris shapes, and they are placed in the image at different locations using scaling and rotation transformations. The first part of the two-tiered MLP is pre-trained with intermediate-level targets being the presence of sprites at each location, while the second part takes the output of the first part as input and predicts the final task's target binary event. The two-tiered MLP architecture, with a few tens of thousands of examples, was able to learn the task perfectly, whereas all other algorithms (including unsupervised pre-training, but also traditional algorithms like SVMs, decision trees and boosting) perform no better than chance. We hypothesize that the optimization difficulty involved when the intermediate pre-training is not performed is due to the {\em composition} of two highly non-linear tasks. Our findings are also consistent with hypotheses on cultural learning inspired by the observations of optimization problems with deep learning, presumably because of effective local minima.
[ "sprites", "prior information", "importance", "hypothesis", "experiments", "mlp architecture", "image", "final task", "first part", "knowledge matters" ]
https://openreview.net/pdf?id=SSnY462CYz1Cu
https://openreview.net/forum?id=SSnY462CYz1Cu
TiDHTEGclh1ro
review
1,362,381,600,000
SSnY462CYz1Cu
[ "everyone" ]
[ "anonymous reviewer 858d" ]
ICLR.cc/2013/conference
2013
title: review of Knowledge Matters: Importance of Prior Information for Optimization review: The paper by Gulcehre & Bengio entitled 'Knowledge Matters: Importance of Prior Information for Optimization' presents an empirical study which compares a two-tiered MLP architecture against traditional algorithms including SVMs, decision trees and boosting. Images used for this task are 64x64 pixel images containing tetris-like sprite shapes. The proposed task consists in trying to figure out whether all the sprites in the image are from the same category or not (invariant to 2D transformations). The main result of this study is that intermediate guidance (i.e., building by hand an architecture which 'exploits intermediate level concepts' by dividing the problem into two stages, a classification stage followed by an XOR stage) solves the problem on which a 'naive' neural net (as well as classical machine learning algorithms) fails. Pros: The proposed task is relatively interesting as it offers an alternative to traditional pattern matching tasks used in computer vision. The experiments seem well conducted. The fact that a neural network and other universal approximators do not seem to even get close to learning the task with ~80K training examples is relatively surprising. Cons: The work by Fleuret et al (Comparing machines and humans on a visual categorization test. PNAS 2011) needs to be discussed. This paper focuses on a single task which appears to be a special case of the longer list of 'reasoning' tasks proposed by Fleuret et al. In addition, the proposed study reports a null result, which is of course always a little problematic (the fact that the authors did not manage to train a classical NN to solve the problem does not mean it is impossible). At the same time, the authors have explored the space of hyper-parameters reasonably well and seem to have done their best in getting the NN to succeed. Minor points: The structure of the paper is relatively confusing. Sections 1.1 and 2 provide a review of some published work by the authors and do not appear to be needed for understanding the paper. In my view the paper could be shortened, or at least most of the opinions/speculations in the introduction should be moved to the discussion section.
SSnY462CYz1Cu
Knowledge Matters: Importance of Prior Information for Optimization
[ "Çağlar Gülçehre", "Yoshua Bengio" ]
We explore the effect of introducing prior information into the intermediate level of neural networks for a learning task on which all of the state-of-the-art machine learning algorithms we tested failed to learn. We motivate our work from the hypothesis that humans learn such intermediate concepts from other individuals via a form of supervision or guidance using a curriculum. The experiments we have conducted provide positive evidence in favor of this hypothesis. In our experiments, a two-tiered MLP architecture is trained on a dataset of 64x64 binary input images, each image containing three sprites. The final task is to decide whether all the sprites are the same or one of them is different. Sprites are pentomino tetris shapes, and they are placed in the image at different locations using scaling and rotation transformations. The first part of the two-tiered MLP is pre-trained with intermediate-level targets being the presence of sprites at each location, while the second part takes the output of the first part as input and predicts the final task's target binary event. The two-tiered MLP architecture, with a few tens of thousands of examples, was able to learn the task perfectly, whereas all other algorithms (including unsupervised pre-training, but also traditional algorithms like SVMs, decision trees and boosting) perform no better than chance. We hypothesize that the optimization difficulty involved when the intermediate pre-training is not performed is due to the {\em composition} of two highly non-linear tasks. Our findings are also consistent with hypotheses on cultural learning inspired by the observations of optimization problems with deep learning, presumably because of effective local minima.
[ "sprites", "prior information", "importance", "hypothesis", "experiments", "mlp architecture", "image", "final task", "first part", "knowledge matters" ]
https://openreview.net/pdf?id=SSnY462CYz1Cu
https://openreview.net/forum?id=SSnY462CYz1Cu
nMIynqm1yCndY
review
1,362,784,440,000
SSnY462CYz1Cu
[ "everyone" ]
[ "David Reichert" ]
ICLR.cc/2013/conference
2013
review: I would like to add some further comments for the purpose of constructive discussion. The authors try to provide further insights into why and when deep learning works, and to broaden the focus of the kind of questions usually asked in this community, in particular by making connections to biological cognition and learning. I think this is a good motivation. There are some issues that I would like to address though. At the core of this work is the result that algorithms can fail at solving the given classification task unless 'intermediate learning cues' are supplied. The authors cover many different algorithms to make this point. However, I think it would have been helpful to provide more empirical or theoretical analysis into *why* these algorithms fail, and what makes the task difficult. In particular, at what point does the complexity come in? Is the difficulty of the task qualitative or quantitative? The task would be qualitatively the same with just three patches and three categories of objects, or perhaps even just three multinomial units as input. I would be curious to see at least an empirical analysis into this question, by varying the complexity of the task, not just the types of algorithms and their parameters. As for answering the question of what makes the task difficult, the crux appears to be that the task implicitly requires invariant object recognition: to solve the second stage task (are all objects of the same category?), the algorithm essentially has to solve the problem of invariant object recognition first (what makes a category?). As the authors have shown, given the knowledge about object categories, the second stage task becomes easy to solve. It is interesting that the weak supervision signal provided in stage two alone is not enough to guide the algorithm to discover the object categories first, but I'm not sure that it is that surprising. Once the problem of invariant recognition has been identified, I don't think it is that 'surprising' either that unsupervised learning did not help at all. No matter how much data and how clever the algorithm, there is simply no way for an unsupervised algorithm to discover that a given tetris object and its rotated version are in some sense the same thing. This knowledge is however necessary to solve the subsequent same/different task across categories. An algorithm can only learn invariant object recognition given some additional information, either with explicit supervision or with more structure in the data and some in-built inductive biases (some form of semi-supervised learning). In this light, it is not clear to me how the work relates specifically to 'cultural learning'. The authors do not model knowledge exchange between agents as such, and it is not clear why the task at hand would be one where cultural learning is particularly relevant. The general issue of what knowledge or inductive biases are needed to learn useful representations, in particular for invariant object recognition, is indeed very interesting, and I think seldom addressed in deep learning beyond building in translation invariance. For the example of invariant object recognition, learning from temporal sequences and building in biases about 'temporal coherence' or 'slowness' (Földiák 91, Wiskott & Sejnowski 02) have been suggested as solutions. This has indeed been explored in deep learning at least in one case (Mobahi et al., 09), and might be more appropriate to address the task at hand (with sequential images). 
I think that if the authors believe that cultural learning is an important ingredient for deep learning or an interesting issue on its own, they perhaps need to find a more relevant task and then show that it can be solved with a model that really utilizes cultural learning specifically, not just general supervision. Lastly, an issue I am confused by: if the second stage task (given the correct intermediate results from the first stage) corresponds to an 'XOR-like' problem, how come a single perceptron in the second stage can solve it?
SSnY462CYz1Cu
Knowledge Matters: Importance of Prior Information for Optimization
[ "Çağlar Gülçehre", "Yoshua Bengio" ]
We explore the effect of introducing prior information into the intermediate level of neural networks for a learning task on which all the state-of-the-art machine learning algorithms tested failed to learn. We motivate our work from the hypothesis that humans learn such intermediate concepts from other individuals via a form of supervision or guidance using a curriculum. The experiments we have conducted provide positive evidence in favor of this hypothesis. In our experiments, a two-tiered MLP architecture is trained on a dataset with 64x64 binary inputs images, each image with three sprites. The final task is to decide whether all the sprites are the same or one of them is different. Sprites are pentomino tetris shapes and they are placed in an image with different locations using scaling and rotation transformations. The first part of the two-tiered MLP is pre-trained with intermediate-level targets being the presence of sprites at each location, while the second part takes the output of the first part as input and predicts the final task's target binary event. The two-tiered MLP architecture, with a few tens of thousand examples, was able to learn the task perfectly, whereas all other algorithms (include unsupervised pre-training, but also traditional algorithms like SVMs, decision trees and boosting) all perform no better than chance. We hypothesize that the optimization difficulty involved when the intermediate pre-training is not performed is due to the {em composition} of two highly non-linear tasks. Our findings are also consistent with hypotheses on cultural learning inspired by the observations of optimization problems with deep learning, presumably because of effective local minima.
[ "sprites", "prior information", "importance", "hypothesis", "experiments", "mlp architecture", "image", "final task", "first part", "knowledge matters" ]
https://openreview.net/pdf?id=SSnY462CYz1Cu
https://openreview.net/forum?id=SSnY462CYz1Cu
PJcXvClTX8vdE
comment
1,363,246,140,000
D5ft5XCZd1cZw
[ "everyone" ]
[ "Çağlar Gülçehre" ]
ICLR.cc/2013/conference
2013
reply: > It is surprising that the structured MLP performs at chance even on the training set. On the other hand, with 11 output units per patch this is perhaps not so surprising as the network has to fit everything into a minimal representation. However one would expect to get better training set results with larger sizes. You should put such results into Table 1 and go to even larger sizes, like 100. We conducted experiments with the structured MLP (SMLP) using 11, 50 and 100 hidden units per patch in the final layer of the locally connected part, yielding chance performance on both the training and the test set. The revision will have a table listing the results we obtained with different numbers of hidden units. > To continue on this, if you trained sparse coding with high sparsity on each patch you should get a 1-in-N representation for each instance (with 11x4x3 or more units). It would be good to see what the P2NN would do with such a representation. I think this is the primary missing piece of this work. That's a very nice suggestion and indeed it was already in our list of experiments to investigate. We conducted several experiments using a one-hot representation for each patch, and we included the results on these datasets in the revision. > It is not quite fair to compare to humans as humans have prior knowledge, specifically of rotations, probably learned from seeing objects rotate. Humans are probably doing mental rotation (see [1]) instead of having rotation invariance, which indeed exploits one form of prior knowledge (learned or innate) or another (see [2]). We have modified the statement accordingly. We have also performed an experiment (reported in the revision) in which all the possible bit configurations of a single patch were enumerated (there are 80 of them, plus the special case where no object is present). With this input representation, there is no prior visual knowledge that can help a human learner. The task is strangely easier but still difficult: we achieved 25% test error up to now, i.e., better than chance (i.e. 50%) but far from the less than 1% error of the IKGNN. In future work, we will measure how well humans learn that task. > I don't think the 'Local descent hypothesis' is quite true. We don't just do local approximate descent. First, we do one-shot learning in the hippocampus. Second, we do search for explanations and solutions and we do planning (both unconsciously and consciously). Sure, having more agents helps - it's a little like running a genetic algorithm - an algorithm that overcomes local minima. One-shot learning is not incompatible with local approximate descent. For example, allocating new parameters to an example to learn by heart is moving in the descent direction from the point of view of functional gradient descent. Searching for explanations and planning belong to the realm of inference. We have inference in many graphical models while training itself still proceeds by local approximate descent. And you are right that having multiple agents sharing knowledge is like running a genetic algorithm and helps overcome some of the local minima issues. > At the end of page 6 you say P1NN had 2048 units and P2NN 1024, but this is reversed in 3.2.2. Typo? Thanks for pointing out that typo. The numbers in 3.2.2 are correct. [1] Köhler, C., Hoffmann, K. P., Dehnhardt, G., & Mauck, B. (2005). Mental Rotation and Rotational Invariance in the Rhesus Monkey (Macaca mulatta). Brain, Behavior and Evolution, 66(3), 158-166. [2] Corballis, Michael C. 'Mental rotation and the right hemisphere.' Brain and Language 57.1 (1997): 100-121.
SSnY462CYz1Cu
Knowledge Matters: Importance of Prior Information for Optimization
[ "Çağlar Gülçehre", "Yoshua Bengio" ]
We explore the effect of introducing prior information into the intermediate level of neural networks for a learning task on which all the state-of-the-art machine learning algorithms tested failed to learn. We motivate our work from the hypothesis that humans learn such intermediate concepts from other individuals via a form of supervision or guidance using a curriculum. The experiments we have conducted provide positive evidence in favor of this hypothesis. In our experiments, a two-tiered MLP architecture is trained on a dataset with 64x64 binary inputs images, each image with three sprites. The final task is to decide whether all the sprites are the same or one of them is different. Sprites are pentomino tetris shapes and they are placed in an image with different locations using scaling and rotation transformations. The first part of the two-tiered MLP is pre-trained with intermediate-level targets being the presence of sprites at each location, while the second part takes the output of the first part as input and predicts the final task's target binary event. The two-tiered MLP architecture, with a few tens of thousand examples, was able to learn the task perfectly, whereas all other algorithms (include unsupervised pre-training, but also traditional algorithms like SVMs, decision trees and boosting) all perform no better than chance. We hypothesize that the optimization difficulty involved when the intermediate pre-training is not performed is due to the {em composition} of two highly non-linear tasks. Our findings are also consistent with hypotheses on cultural learning inspired by the observations of optimization problems with deep learning, presumably because of effective local minima.
[ "sprites", "prior information", "importance", "hypothesis", "experiments", "mlp architecture", "image", "final task", "first part", "knowledge matters" ]
https://openreview.net/pdf?id=SSnY462CYz1Cu
https://openreview.net/forum?id=SSnY462CYz1Cu
L8RreQWdPS3jz
review
1,363,278,840,000
SSnY462CYz1Cu
[ "everyone" ]
[ "Çağlar Gülçehre" ]
ICLR.cc/2013/conference
2013
review: Replies to the reviewers' comments were prepared by both authors of the paper: Yoshua Bengio and Caglar Gulcehre.
SSnY462CYz1Cu
Knowledge Matters: Importance of Prior Information for Optimization
[ "Çağlar Gülçehre", "Yoshua Bengio" ]
We explore the effect of introducing prior information into the intermediate level of neural networks for a learning task on which all of the state-of-the-art machine learning algorithms we tested failed to learn. We motivate our work from the hypothesis that humans learn such intermediate concepts from other individuals via a form of supervision or guidance using a curriculum. The experiments we have conducted provide positive evidence in favor of this hypothesis. In our experiments, a two-tiered MLP architecture is trained on a dataset of 64x64 binary input images, each image containing three sprites. The final task is to decide whether all the sprites are the same or one of them is different. Sprites are pentomino tetris shapes, and they are placed in the image at different locations using scaling and rotation transformations. The first part of the two-tiered MLP is pre-trained with intermediate-level targets being the presence of sprites at each location, while the second part takes the output of the first part as input and predicts the final task's target binary event. The two-tiered MLP architecture, with a few tens of thousands of examples, was able to learn the task perfectly, whereas all other algorithms (including unsupervised pre-training, but also traditional algorithms like SVMs, decision trees and boosting) perform no better than chance. We hypothesize that the optimization difficulty involved when the intermediate pre-training is not performed is due to the {\em composition} of two highly non-linear tasks. Our findings are also consistent with hypotheses on cultural learning inspired by the observations of optimization problems with deep learning, presumably because of effective local minima.
[ "sprites", "prior information", "importance", "hypothesis", "experiments", "mlp architecture", "image", "final task", "first part", "knowledge matters" ]
https://openreview.net/pdf?id=SSnY462CYz1Cu
https://openreview.net/forum?id=SSnY462CYz1Cu
D5ft5XCZd1cZw
review
1,361,980,800,000
SSnY462CYz1Cu
[ "everyone" ]
[ "anonymous reviewer ed64" ]
ICLR.cc/2013/conference
2013
title: review of Knowledge Matters: Importance of Prior Information for Optimization review: The paper gives an example of a task that a neural net solves perfectly when intermediate labels are provided but that is not solved at all by several machine learning algorithms, including neural nets, when the intermediate labels are not provided. I consider the result important. Comments: It is surprising that the structured MLP performs at chance even on the training set. On the other hand, with 11 output units per patch this is perhaps not so surprising as the network has to fit everything into a minimal representation. However one would expect to get better training set results with larger sizes. You should put such results into Table 1 and go to even larger sizes, like 100. To continue on this, if you trained sparse coding with high sparsity on each patch you should get a 1-in-N representation for each instance (with 11x4x3 or more units). It would be good to see what the P2NN would do with such a representation. I think this is the primary missing piece of this work. It is not quite fair to compare to humans as humans have prior knowledge, specifically of rotations, probably learned from seeing objects rotate. I don't think the 'Local descent hypothesis' is quite true. We don't just do local approximate descent. First, we do one-shot learning in the hippocampus. Second, we do search for explanations and solutions and we do planning (both unconsciously and consciously). Sure, having more agents helps - it's a little like running a genetic algorithm - an algorithm that overcomes local minima. At the end of page 6 you say P1NN had 2048 units and P2NN 1024, but this is reversed in 3.2.2. Typo?
SSnY462CYz1Cu
Knowledge Matters: Importance of Prior Information for Optimization
[ "Çağlar Gülçehre", "Yoshua Bengio" ]
We explore the effect of introducing prior information into the intermediate level of neural networks for a learning task on which all the state-of-the-art machine learning algorithms tested failed to learn. We motivate our work from the hypothesis that humans learn such intermediate concepts from other individuals via a form of supervision or guidance using a curriculum. The experiments we have conducted provide positive evidence in favor of this hypothesis. In our experiments, a two-tiered MLP architecture is trained on a dataset with 64x64 binary inputs images, each image with three sprites. The final task is to decide whether all the sprites are the same or one of them is different. Sprites are pentomino tetris shapes and they are placed in an image with different locations using scaling and rotation transformations. The first part of the two-tiered MLP is pre-trained with intermediate-level targets being the presence of sprites at each location, while the second part takes the output of the first part as input and predicts the final task's target binary event. The two-tiered MLP architecture, with a few tens of thousand examples, was able to learn the task perfectly, whereas all other algorithms (include unsupervised pre-training, but also traditional algorithms like SVMs, decision trees and boosting) all perform no better than chance. We hypothesize that the optimization difficulty involved when the intermediate pre-training is not performed is due to the {em composition} of two highly non-linear tasks. Our findings are also consistent with hypotheses on cultural learning inspired by the observations of optimization problems with deep learning, presumably because of effective local minima.
[ "sprites", "prior information", "importance", "hypothesis", "experiments", "mlp architecture", "image", "final task", "first part", "knowledge matters" ]
https://openreview.net/pdf?id=SSnY462CYz1Cu
https://openreview.net/forum?id=SSnY462CYz1Cu
lLgil9MwiZ3Vu
review
1,363,246,680,000
SSnY462CYz1Cu
[ "everyone" ]
[ "Çağlar Gülçehre" ]
ICLR.cc/2013/conference
2013
review: We have uploaded the revision of the paper to arXiv. The revision will be announced by arXiv soon.
SSnY462CYz1Cu
Knowledge Matters: Importance of Prior Information for Optimization
[ "Çağlar Gülçehre", "Yoshua Bengio" ]
We explore the effect of introducing prior information into the intermediate level of neural networks for a learning task on which all of the state-of-the-art machine learning algorithms we tested failed to learn. We motivate our work from the hypothesis that humans learn such intermediate concepts from other individuals via a form of supervision or guidance using a curriculum. The experiments we have conducted provide positive evidence in favor of this hypothesis. In our experiments, a two-tiered MLP architecture is trained on a dataset of 64x64 binary input images, each image containing three sprites. The final task is to decide whether all the sprites are the same or one of them is different. Sprites are pentomino tetris shapes, and they are placed in the image at different locations using scaling and rotation transformations. The first part of the two-tiered MLP is pre-trained with intermediate-level targets being the presence of sprites at each location, while the second part takes the output of the first part as input and predicts the final task's target binary event. The two-tiered MLP architecture, with a few tens of thousands of examples, was able to learn the task perfectly, whereas all other algorithms (including unsupervised pre-training, but also traditional algorithms like SVMs, decision trees and boosting) perform no better than chance. We hypothesize that the optimization difficulty involved when the intermediate pre-training is not performed is due to the {\em composition} of two highly non-linear tasks. Our findings are also consistent with hypotheses on cultural learning inspired by the observations of optimization problems with deep learning, presumably because of effective local minima.
[ "sprites", "prior information", "importance", "hypothesis", "experiments", "mlp architecture", "image", "final task", "first part", "knowledge matters" ]
https://openreview.net/pdf?id=SSnY462CYz1Cu
https://openreview.net/forum?id=SSnY462CYz1Cu
OblAf-quHwf1V
comment
1,363,246,380,000
nMIynqm1yCndY
[ "everyone" ]
[ "Çağlar Gülçehre" ]
ICLR.cc/2013/conference
2013
reply: > However, I think it would have been helpful to provide more empirical or theoretical analysis into *why* these algorithms fail, and what makes the task difficult. In particular, at what point does the complexity come in? Is the difficulty of the task qualitative or quantitative? The task would be qualitatively the same with just three patches and three categories of objects, or perhaps even just three multinomial units as input. I would be curious to see at least an empirical analysis into this question, by varying the complexity of the task, not just the types of algorithms and their parameters. We have done more experiments to explore the effect of the difficulty of the task. In particular, we considered three settings aimed at making the task gradually easier: (1) map each possible patch-level input vector into an integer (one out of 81 = 1 for no-object + 10 x 4 x 2) and a corresponding one-hot 80-bit input vector (and feed the concatenation of these 64 vectors as input to a classifier), (2) map each possible patch-level input vector into a disentangled representation with 3 one-hot vectors (10 bits + 4 bits + 2 bits) in which the class can be read directly (which one could imagine as the best possible outcome of unsupervised pre-training), and (3) only retain the actual object categories (with only the first 10 bits per patch, for the 10 classes). We found that (2) and (3) can be learned perfectly while (1) can be partially learned (down to about 30% error with 80k training examples). So it looks like part of the problem (as we had surmised) is to separate class information from the factors, while somehow the image-like encoding is actually harder to learn from (probably an ill-conditioning problem) than the one-hot encoding per patch. > As for answering the question of what makes the task difficult, the crux appears to be that the task implicitly requires invariant object recognition: to solve the second stage task (are all objects of the same category?), the algorithm essentially has to solve the problem of invariant object recognition first (what makes a category?). As the authors have shown, given the knowledge about object categories, the second stage task becomes easy to solve. It is interesting that the weak supervision signal provided in stage two alone is not enough to guide the algorithm to discover the object categories first, but I'm not sure that it is that surprising. Visually much more complex tasks are being rather successfully handled with deep convolutional nets, as in the recent work by Krizhevsky & Hinton at NIPS 2012. It is therefore surprising that such a simplified task would make most learning algorithms fail. We believe it boils down to an optimization issue (the difficulty of training the lower layers well, in spite of correct supervised learning gradients being computed through the upper layers) and our experiments are consistent with that hypothesis. The experiments described above with disentangled inputs suggest that if unsupervised learning was doing an optimal job, it should be possible to solve the problem. > In this light, it is not clear to me how the work relates specifically to 'cultural learning'. The authors do not model knowledge exchange between agents as such, and it is not clear why the task at hand would be one where cultural learning is particularly relevant. 
> The general issue of what knowledge or inductive biases are needed to learn useful representations, in particular for invariant object recognition, is indeed very interesting, and I think seldom addressed in deep learning beyond building in translation invariance. For the example of invariant object recognition, learning from temporal sequences and building in biases about 'temporal coherence' or 'slowness' (Földiák 91, Wiskott & Sejnowski 02) have been suggested as solutions. This has indeed been explored in deep learning at least in one case (Mobahi et al., 09), and might be more appropriate to address the task at hand (with sequential images). I think that if the authors believe that cultural learning is an important ingredient for deep learning or an interesting issue on its own, they perhaps need to find a more relevant task and then show that it can be solved with a model that really utilizes cultural learning specifically, not just general supervision. The main difficulty of this task stems from the composition of two distinct tasks: the first is invariant object recognition, and the second is learning the logical relation between the objects in the image. Each task can be solved fairly easily on its own; otherwise the IKGNN couldn't learn the overall task. But we claim that the combination of these two tasks raises an optimization difficulty that the machine learning algorithms we have tried failed to overcome. We are aware that slow features might be useful for solving this task and we plan to investigate that as well. We also believe that, as such, temporal coherence would be a much more plausible explanation as to how humans learn such visual tasks, since humans learn to see quite well with little or no verbal cues from parents or teachers (and of course, all the other animals that have very good vision do not have a culture, or one nearly as developed as that of humans). On the other hand, we believe that this kind of two-level abstraction learning problem illustrates a more general training difficulty that humans may face when trying to learn higher level abstractions (precisely of the kind that we need teachers for). Unfortunately there is not yet much work combining cultural learning and deep learning. This paper is meant to lay the motivational grounds for such work, by showing simple examples where we might need cultural learning and where ordinary supervised learning (without guidance on intermediate concepts) or even unsupervised pre-training faces a very difficult training challenge. The other connection is that these experiments are consistent with aspects of the cultural learning hypotheses laid down in Bengio 2012: if learning more abstract concepts (that require a deeper architecture that captures distinct abstractions, as in our task) is a serious optimization challenge, this challenge could also be an issue for brains, making it all the more important to explain how humans manage to deal with such problems (presumably thanks to the guidance of other humans, e.g., by providing hints about intermediate abstractions). We wanted to show that there are problems that are inherently hard for current machine learning algorithms and motivate cultural learning: distributed and parallelized learning of such higher level concepts might be more efficient for solving this kind of task. 
> Lastly, an issue I am confused by: if the second stage task (given the correct intermediate results from the first stage) corresponds to an 'XOR-like' problem, how come a single perceptron in the second stage can solve it? The second stage does have hidden units: it is not a simple perceptron but a simple MLP. We used a ReLU MLP with 2048 hidden units and a sigmoid output trained with a cross-entropy objective.
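For concreteness, here is a small sketch of the three patch-level input encodings compared in the reply above: (1) a single one-hot code over all non-empty configurations, (2) a disentangled 10+4+2 one-hot triple, and (3) the 10 category bits only. The factorization into 10 sprite types, 4 rotations and 2 scales, the treatment of the empty patch as an all-zero vector, and the helper names are assumptions for illustration, not the authors' code.

```python
# Sketch of the three patch encodings discussed above (assumed factorization:
# 10 sprite types x 4 rotations x 2 scales = 80 configurations; an empty patch
# would simply be an all-zero vector under each encoding).
import numpy as np

def one_hot(i, n):
    v = np.zeros(n)
    v[i] = 1.0
    return v

def encode_entangled(sprite, rot, scale):
    """Setting (1): a single one-hot over the 80 non-empty configurations."""
    return one_hot(sprite * 8 + rot * 2 + scale, 80)

def encode_disentangled(sprite, rot, scale):
    """Setting (2): separate one-hot codes for category, rotation and scale."""
    return np.concatenate([one_hot(sprite, 10), one_hot(rot, 4), one_hot(scale, 2)])

def encode_category_only(sprite, rot, scale):
    """Setting (3): keep only the 10 category bits."""
    return one_hot(sprite, 10)

for enc in (encode_entangled, encode_disentangled, encode_category_only):
    print(enc.__name__, enc(sprite=7, rot=3, scale=1).shape)
```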
SSnY462CYz1Cu
Knowledge Matters: Importance of Prior Information for Optimization
[ "Çağlar Gülçehre", "Yoshua Bengio" ]
We explore the effect of introducing prior information into the intermediate level of neural networks for a learning task on which all of the state-of-the-art machine learning algorithms we tested failed to learn. We motivate our work from the hypothesis that humans learn such intermediate concepts from other individuals via a form of supervision or guidance using a curriculum. The experiments we have conducted provide positive evidence in favor of this hypothesis. In our experiments, a two-tiered MLP architecture is trained on a dataset of 64x64 binary input images, each image containing three sprites. The final task is to decide whether all the sprites are the same or one of them is different. Sprites are pentomino tetris shapes, and they are placed in the image at different locations using scaling and rotation transformations. The first part of the two-tiered MLP is pre-trained with intermediate-level targets being the presence of sprites at each location, while the second part takes the output of the first part as input and predicts the final task's target binary event. The two-tiered MLP architecture, with a few tens of thousands of examples, was able to learn the task perfectly, whereas all other algorithms (including unsupervised pre-training, but also traditional algorithms like SVMs, decision trees and boosting) perform no better than chance. We hypothesize that the optimization difficulty involved when the intermediate pre-training is not performed is due to the {\em composition} of two highly non-linear tasks. Our findings are also consistent with hypotheses on cultural learning inspired by the observations of optimization problems with deep learning, presumably because of effective local minima.
[ "sprites", "prior information", "importance", "hypothesis", "experiments", "mlp architecture", "image", "final task", "first part", "knowledge matters" ]
https://openreview.net/pdf?id=SSnY462CYz1Cu
https://openreview.net/forum?id=SSnY462CYz1Cu
6s7Ys8Q5JbfHZ
review
1,362,262,800,000
SSnY462CYz1Cu
[ "everyone" ]
[ "anonymous reviewer dfef" ]
ICLR.cc/2013/conference
2013
title: review of Knowledge Matters: Importance of Prior Information for Optimization review: In this paper, the authors provide an exposition of curriculum learning and cultural evolution as solutions to the effective local minimum problem. The authors provide a detailed set of simulations that support a curriculum theory of learning, which rely on a supervisory training signal of intermediate task variables that are relevant for the task. Pros: This work is important to probe the limitations of current algorithms, especially as the deep learning field continues to have success. A great thing about this paper is that it got me thinking about new classes of algorithms that might effectively solve the mid-level optimization problem, and about more effective strategies for training deep networks for practical tasks. The simulations are well described and compelling. Cons: The exposition on curriculum learning could be condensed. (minor) The demonstrative problem (sprite counting) is a visual perception problem and therefore carries with it the biases of our own perception and inferred strategies. Maybe the overall argument might be bolstered by the addition of a more abstract example? Here are some questions: why so many image regions? Why use an 8x8 grid? Won't 3 regions suffice to make the point? Or is this related to the complexity of the problem? A related question: how are the results affected by the number of these regions? Maybe some reduced tests at the extremes would be interesting, i.e. with only 3 regions, and 32 (you have 64 already)? In the networks that solve the task, are the weights that are learned symmetric over the image regions? i.e. are these weights identical (maybe up to some scaling and sign flip). Is there anything you have determined about the structure of the learned second layer of the IKGNN? Furthermore, what about including a 'weight sharing' constraint in the general MLP model (the one that does not solve the problem, but has the same structure as the one that does)? Would including this constraint change the solution? (the constraint is already in the P1NN, but what about adding it into the P2NN?) Another way to ask this is: Is enforcing translation invariance in the network sufficient to achieve good performance, or do we need to specifically train for the sprite discrimination? A technical point about the assumption of human performance on this task: Do we know if humans can solve this problem 'in a glance?': flashing the image for a small amount of time ~100-200msecs. Either with or without a mask? It seems that the networks you have derived are solving such a problem 'in a glance.' A more meta comment: Is there an argument to be made that the sequential nature of language allows humans to solve this task? Even the way you formulate the problem suggests this sequential process: 'are all of the sprites in the image the same?': in other words, 'find the sprites, then decide if they are the same.' When I imagine solving this problem myself, I imagine performing a more sequential process: look at one sprite, then the next (is it the same? if it is): look at the next sprite (is it the same?). I know that we can consider this problem to be a concrete example of a more abstract learning problem, but it's not clear if humans can solve such problems without sequential processing. Anyway, this is not a criticism, per se, just food for thought.
SSnY462CYz1Cu
Knowledge Matters: Importance of Prior Information for Optimization
[ "Çağlar Gülçehre", "Yoshua Bengio" ]
We explore the effect of introducing prior information into the intermediate level of neural networks for a learning task on which all the state-of-the-art machine learning algorithms tested failed to learn. We motivate our work from the hypothesis that humans learn such intermediate concepts from other individuals via a form of supervision or guidance using a curriculum. The experiments we have conducted provide positive evidence in favor of this hypothesis. In our experiments, a two-tiered MLP architecture is trained on a dataset with 64x64 binary inputs images, each image with three sprites. The final task is to decide whether all the sprites are the same or one of them is different. Sprites are pentomino tetris shapes and they are placed in an image with different locations using scaling and rotation transformations. The first part of the two-tiered MLP is pre-trained with intermediate-level targets being the presence of sprites at each location, while the second part takes the output of the first part as input and predicts the final task's target binary event. The two-tiered MLP architecture, with a few tens of thousand examples, was able to learn the task perfectly, whereas all other algorithms (include unsupervised pre-training, but also traditional algorithms like SVMs, decision trees and boosting) all perform no better than chance. We hypothesize that the optimization difficulty involved when the intermediate pre-training is not performed is due to the {em composition} of two highly non-linear tasks. Our findings are also consistent with hypotheses on cultural learning inspired by the observations of optimization problems with deep learning, presumably because of effective local minima.
[ "sprites", "prior information", "importance", "hypothesis", "experiments", "mlp architecture", "image", "final task", "first part", "knowledge matters" ]
https://openreview.net/pdf?id=SSnY462CYz1Cu
https://openreview.net/forum?id=SSnY462CYz1Cu
MF7RMafDRkF_A
comment
1,363,246,320,000
TiDHTEGclh1ro
[ "everyone" ]
[ "Çağlar Gülçehre" ]
ICLR.cc/2013/conference
2013
reply: > The work by Fleuret et al (Comparing machines and humans on a visual categorization test. PNAS 2011) needs to be discussed. This paper focuses on a single task which appears to be a special case of the longer list of 'reasoning' tasks proposed by Fleuret et al. Yes, we agree that some of the tasks in the Fleuret et al. paper are similar to our task. We have cited this paper in the new revision. Thanks for pointing it out. The biggest difference between the Fleuret et al. paper and our approach is that we purposely did not use any preprocessing, in order to make the task *difficult* and show the limitations of a vast range of learning algorithms. This highlights differences between the goals of those papers, of course. > In addition, the proposed study reports a null result, which is of course always a little problematic (the fact that the authors did not manage to train a classical NN to solve the problem does not mean it is impossible). At the same time, the authors have explored the space of hyper-parameters reasonably well and seem to have done their best in getting the NN to succeed. We agree with that statement. Nonetheless, negative results (especially when they are confirmed by other labs) can have a powerful impact on research, by highlighting the limitations of current algorithms and thus directing research fruitfully towards addressing important challenges. It is unfortunately more difficult to publish negative results in our community, in part because computer scientists do not have the culture of replicating experiments and publishing these validations to the same extent as other scientists (such as biologists). > Minor points: The structure of the paper is relatively confusing. Sections 1.1 and 2 provide a review of some published work by the authors and do not appear to be needed for understanding the paper. In my view the paper could be shortened, or at least most of the opinions/speculations in the introduction should be moved to the discussion section. We disagree. The main motivation for these experiments was to empirically validate some aspects of the hypotheses discussed in Bengio 2012 on local minima and cultural evolution. If learning more abstract concepts (that require a deeper architecture) is a serious optimization challenge, this challenge could also be an issue for brains, making it all the more important to explain how humans manage to deal with such problems (presumably thanks to the guidance of other humans, e.g., by providing hints about intermediate abstractions).
BBIbj9w8Lvj8F
Efficient Learning of Domain-invariant Image Representations
[ "Judy Hoffman", "Erik Rodner", "Jeff Donahue", "Kate Saenko", "Trevor Darrell" ]
We present an algorithm that learns representations which explicitly compensate for domain mismatch and which can be efficiently realized as linear classifiers. Specifically, we form a linear transformation that maps features from the target (test) domain to the source (training) domain as part of training the classifier. We optimize both the transformation and classifier parameters jointly, and introduce an efficient cost function based on misclassification loss. Our method combines several features previously unavailable in a single algorithm: multi-class adaptation through representation learning, ability to map across heterogeneous feature spaces, and scalability to large datasets. We present experiments on several image datasets that demonstrate improved accuracy and computational advantages compared to previous approaches.
[ "image representations", "domain", "efficient learning", "algorithm", "representations", "domain mismatch", "linear classifiers", "linear transformation", "features", "target" ]
https://openreview.net/pdf?id=BBIbj9w8Lvj8F
https://openreview.net/forum?id=BBIbj9w8Lvj8F
t-wFtMYSdpR8v
comment
1,362,994,020,000
y-XNy_0Refysb
[ "everyone" ]
[ "Judy Hoffman, Erik Rodner, Jeff Donahue, Trevor Darrell, Kate Saenko" ]
ICLR.cc/2013/conference
2013
reply: Please see the comment below (from March 3rd). We have updated the paper to incorporate your comments.
BBIbj9w8Lvj8F
Efficient Learning of Domain-invariant Image Representations
[ "Judy Hoffman", "Erik Rodner", "Jeff Donahue", "Kate Saenko", "Trevor Darrell" ]
We present an algorithm that learns representations which explicitly compensate for domain mismatch and which can be efficiently realized as linear classifiers. Specifically, we form a linear transformation that maps features from the target (test) domain to the source (training) domain as part of training the classifier. We optimize both the transformation and classifier parameters jointly, and introduce an efficient cost function based on misclassification loss. Our method combines several features previously unavailable in a single algorithm: multi-class adaptation through representation learning, ability to map across heterogeneous feature spaces, and scalability to large datasets. We present experiments on several image datasets that demonstrate improved accuracy and computational advantages compared to previous approaches.
[ "image representations", "domain", "efficient learning", "algorithm", "representations", "domain mismatch", "linear classifiers", "linear transformation", "features", "target" ]
https://openreview.net/pdf?id=BBIbj9w8Lvj8F
https://openreview.net/forum?id=BBIbj9w8Lvj8F
y-XNy_0Refysb
review
1,362,214,260,000
BBIbj9w8Lvj8F
[ "everyone" ]
[ "anonymous reviewer 9aa4" ]
ICLR.cc/2013/conference
2013
title: review of Efficient Learning of Domain-invariant Image Representations review: This paper focuses on multi-task learning across domains, where both the data generating distribution and the output labels can change between source and target domains. It presents an SVM-based model which jointly learns 1) affine hyperplanes that separate the classes in a common domain consisting of the source and the target projected to the source; and 2) a linear transformation mapping points from the target domain into the source domain. Positive points 1) The method is dead simple and seems technically sound. To the best of my knowledge it's novel, but I'm not as familiar with the SVM literature - I am hoping that another reviewer comes from the SVM community and can better assess its novelty. 2) The paper is well written and understandable. 3) The experiments seem thorough: several datasets and tasks are considered, and the model is compared to various baselines. The model is shown to outperform contemporary domain adaptation methods, generalize to novel test categories at test time (which many other methods cannot do) and can scale to large datasets. Negative points I have one major criticism: the paper doesn't seem really focused on representation learning - it's more a paper about a method for multi-task learning across domains which learns a (shallow, linear) mapping from source to target. I agree - it's a representation, but there's no real analysis or focus on the representation itself - e.g. what is being captured by the representation. The method is totally valid, but I just get the sense that it's a paper that could fit well with CVPR or ICCV (i.e. a good vision paper) where the title says 'representation learning', and a few sentences highlight the 'representation' that's being learned; however, neither the method nor the paper's focus is really on learning interesting representations. On one hand I question its suitability for ICLR and its appeal to the community (compared to CVPR/ICCV, etc.) but on the other hand, I think it's great to encourage diversity in the papers/authors at the conference and having a more 'visiony'-feeling paper is not a bad thing. Comments -------- Can you state up front what is meant by the asymmetry of the transform (e.g. when it's first mentioned)? Later on in the paper it becomes clear that it has to do with the source and target having different feature dimensions, but it wasn't obvious to me at the beginning of the paper. Just before Eq (4) and (5) it says that 'we begin by rewriting Eq 1-3 with soft constraints (slack)'. But where are the slack variables in Eq 4?
BBIbj9w8Lvj8F
Efficient Learning of Domain-invariant Image Representations
[ "Judy Hoffman", "Erik Rodner", "Jeff Donahue", "Kate Saenko", "Trevor Darrell" ]
We present an algorithm that learns representations which explicitly compensate for domain mismatch and which can be efficiently realized as linear classifiers. Specifically, we form a linear transformation that maps features from the target (test) domain to the source (training) domain as part of training the classifier. We optimize both the transformation and classifier parameters jointly, and introduce an efficient cost function based on misclassification loss. Our method combines several features previously unavailable in a single algorithm: multi-class adaptation through representation learning, ability to map across heterogeneous feature spaces, and scalability to large datasets. We present experiments on several image datasets that demonstrate improved accuracy and computational advantages compared to previous approaches.
[ "image representations", "domain", "efficient learning", "algorithm", "representations", "domain mismatch", "linear classifiers", "linear transformation", "features", "target" ]
https://openreview.net/pdf?id=BBIbj9w8Lvj8F
https://openreview.net/forum?id=BBIbj9w8Lvj8F
u3MkubcB_YIB0
review
1,362,393,540,000
BBIbj9w8Lvj8F
[ "everyone" ]
[ "anonymous reviewer feb2" ]
ICLR.cc/2013/conference
2013
title: review of Efficient Learning of Domain-invariant Image Representations review: This paper proposes to make domain adaptation and multi-task learning easier by jointly learning the task-specific max-margin classifiers and a linear mapping from a new target space to the source space; the loss function encourages the mapped features to lie on the correct side of the hyperplanes of the max-margin classifiers learned for each task. Experiments show that the mapping performs as well as or better than existing domain adaptation methods, but can scale to larger problems while many earlier approaches are too costly. Overall the paper is clear, well-crafted, and the context and previous work are well presented. The idea is appealing in its simplicity, and works well. Pros: the idea is intuitive and well justified; it is appealing that the method is flexible and can tackle cases where labels are missing for some categories. The paper is clear and well-written. Experimental results are convincing enough; while the results do not outperform the state of the art (they are within the standard error of previously published performance), the authors' argument that their method is better suited to cases where domains are more different seems reasonable and backed by their experimental results. Cons: this method would work only in cases where a simple general linear transformation of the features would do a good job placing them in a favorable space. The method also gives a privileged role to the source space, while methods that map features to a common latent space have more symmetry; the authors argue that it is hard to guess the optimal dimension of the latent space -- but their method simply constrains it to the size of the source space, so there is no guarantee that this would be any more optimal.
BBIbj9w8Lvj8F
Efficient Learning of Domain-invariant Image Representations
[ "Judy Hoffman", "Erik Rodner", "Jeff Donahue", "Kate Saenko", "Trevor Darrell" ]
We present an algorithm that learns representations which explicitly compensate for domain mismatch and which can be efficiently realized as linear classifiers. Specifically, we form a linear transformation that maps features from the target (test) domain to the source (training) domain as part of training the classifier. We optimize both the transformation and classifier parameters jointly, and introduce an efficient cost function based on misclassification loss. Our method combines several features previously unavailable in a single algorithm: multi-class adaptation through representation learning, ability to map across heterogeneous feature spaces, and scalability to large datasets. We present experiments on several image datasets that demonstrate improved accuracy and computational advantages compared to previous approaches.
[ "image representations", "domain", "efficient learning", "algorithm", "representations", "domain mismatch", "linear classifiers", "linear transformation", "features", "target" ]
https://openreview.net/pdf?id=BBIbj9w8Lvj8F
https://openreview.net/forum?id=BBIbj9w8Lvj8F
tWNADGgy0XWy2
comment
1,362,971,700,000
u3MkubcB_YIB0
[ "everyone" ]
[ "Judy Hoffman, Erik Rodner, Jeff Donahue, Trevor Darrell, Kate Saenko" ]
ICLR.cc/2013/conference
2013
reply: Thank you for your review. In this paper we present a method that learns an asymmetric linear mapping between the source and target feature spaces. In general, the feature transformation learning can be kernelized (the optimization framework can be formulated as a standard QP). However, for this work we focus on the linear case because of its scalability to a large number of data points. We show that using the linear framework we perform as well as or better than other methods which learn a non-linear mapping. We learn a transformation between the target and source points which can be expressed by the matrix W in our paper. In this paper, we use this matrix to compute the dot product in the source domain between theta_k and the transformed target points (Wx^t_i). However, if we think of W (an asymmetric matrix) as being decomposed as W = A'B, then the dot product function can be interpreted as theta_k'A'Bx^t_i. In other words, it could be interpreted as the dot product in some common latent space between source points transformed by A and target points transformed by B. We propose learning the W matrix rather than A and B directly so that we do not have to specify the dimension of the latent space.
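A minimal NumPy sketch of the scoring rule described in this reply may help make the W = A'B interpretation concrete. All array sizes and values below are hypothetical placeholders; the real W and theta_k come from the joint optimization in the paper, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
K, d_s, d_t = 10, 200, 150                 # classes, source dim, target dim (hypothetical)
theta = rng.standard_normal((K, d_s))      # source-domain hyperplanes, one per class
W = rng.standard_normal((d_s, d_t))        # learned target-to-source transform (placeholder values)
x_t = rng.standard_normal(d_t)             # one target-domain feature vector

# Score a target point by first mapping it into the source feature space.
scores = theta @ (W @ x_t)                 # theta_k' W x_t for every class k
pred = int(np.argmax(scores))              # predicted class for this target example

# The same scores can be read as dot products in a latent space: with W = A'B,
# theta_k' W x_t = (A theta_k)' (B x_t). Here A is an arbitrary invertible map,
# chosen only to illustrate the identity; the method never forms A or B explicitly.
A = rng.standard_normal((d_s, d_s))
B = np.linalg.solve(A.T, W)                # chosen so that A'B == W
latent_scores = (A @ theta.T).T @ (B @ x_t)
assert np.allclose(latent_scores, scores)
```

The assertion at the end simply checks the algebraic identity theta_k' W x_t = (A theta_k)' (B x_t), which is why the latent dimension never has to be specified when W itself is learned.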
BBIbj9w8Lvj8F
Efficient Learning of Domain-invariant Image Representations
[ "Judy Hoffman", "Erik Rodner", "Jeff Donahue", "Kate Saenko", "Trevor Darrell" ]
We present an algorithm that learns representations which explicitly compensate for domain mismatch and which can be efficiently realized as linear classifiers. Specifically, we form a linear transformation that maps features from the target (test) domain to the source (training) domain as part of training the classifier. We optimize both the transformation and classifier parameters jointly, and introduce an efficient cost function based on misclassification loss. Our method combines several features previously unavailable in a single algorithm: multi-class adaptation through representation learning, ability to map across heterogeneous feature spaces, and scalability to large datasets. We present experiments on several image datasets that demonstrate improved accuracy and computational advantages compared to previous approaches.
[ "image representations", "domain", "efficient learning", "algorithm", "representations", "domain mismatch", "linear classifiers", "linear transformation", "features", "target" ]
https://openreview.net/pdf?id=BBIbj9w8Lvj8F
https://openreview.net/forum?id=BBIbj9w8Lvj8F
JnShnsXduOpVA
review
1,362,367,200,000
BBIbj9w8Lvj8F
[ "everyone" ]
[ "Judy Hoffman, Erik Rodner, Jeff Donahue, Trevor Darrell, Kate Saenko" ]
ICLR.cc/2013/conference
2013
review: Thank you for your feedback. We argue that the task of adapting representations across domains is one that is common to all representation learning challenges, including those based on deep architectures, metric learning methods, and max-margin transform learning. Our insight into this problem is to use the source classifier to inform the representation learned for the target data. Specifically, we jointly learn a source domain classifier and a representation for the target domain, such that the target points can be well classified in the source domain. We present a specific algorithm using an SVM classifier and test on visual domains; however, the principles of our method are applicable both to a range of methods for learning and classification (beyond SVM) and to a range of applications (beyond vision). In addition, thank you for the comments in your review. We will clarify what is meant by an asymmetric transform and modify the wording around equations (4-5) to reflect the math shown, which has soft constraints and no slack variables.
BBIbj9w8Lvj8F
Efficient Learning of Domain-invariant Image Representations
[ "Judy Hoffman", "Erik Rodner", "Jeff Donahue", "Kate Saenko", "Trevor Darrell" ]
We present an algorithm that learns representations which explicitly compensate for domain mismatch and which can be efficiently realized as linear classifiers. Specifically, we form a linear transformation that maps features from the target (test) domain to the source (training) domain as part of training the classifier. We optimize both the transformation and classifier parameters jointly, and introduce an efficient cost function based on misclassification loss. Our method combines several features previously unavailable in a single algorithm: multi-class adaptation through representation learning, ability to map across heterogeneous feature spaces, and scalability to large datasets. We present experiments on several image datasets that demonstrate improved accuracy and computational advantages compared to previous approaches.
[ "image representations", "domain", "efficient learning", "algorithm", "representations", "domain mismatch", "linear classifiers", "linear transformation", "features", "target" ]
https://openreview.net/pdf?id=BBIbj9w8Lvj8F
https://openreview.net/forum?id=BBIbj9w8Lvj8F
Ua0HJI2r-Waro
review
1,362,783,240,000
BBIbj9w8Lvj8F
[ "everyone" ]
[ "anonymous reviewer 36a3" ]
ICLR.cc/2013/conference
2013
title: review of Efficient Learning of Domain-invariant Image Representations review: The paper presents a new method for learning domain-invariant image representations. The proposed approach simultaneously learns a linear mapping of the target features into the source domain and the parameters of a multi-class linear SVM classifier. Experimental evaluations show that the proposed approach performs similarly to or better than prior art. The new algorithm presents computational advantages with respect to previous approaches. The paper is well written and clearly presented. It addresses an interesting problem that has received attention in recent years. The proposed method is considerably simpler than competing approaches with similar (or better) performance (in the setting of the reported experiments). The method is not very novel but manages to address some drawbacks of previous approaches. Pros: - the proposed framework is fairly simple and the provided implementation details make it easy to reproduce - an experimental evaluation is presented, comparing the proposed method with several competing approaches. The amount of empirical evidence seems sufficient to back up the claims. Cons: - Since the method is general, I think it would have been very good to include an example with more distinct source and target feature spaces (e.g. text categorization), or, even better, different modalities. Comments: In the work [15], the authors propose a metric that measures the adaptability between a pair of source and target domains. In this setting, if several possible source domains are available, it selects the best one. How could this be considered in your setting? In the first experimental setting (standard domain adaptation problem), I understand that the idea of the experiment is to show how the labeled data in the source domain can help to better classify the data in the target domain. It is not clear to me how the SVM of the target domain, SVM_t, is trained. Is this done only with the limited set of labeled data in the target domain? What is the case for SVM_s? Looking at the last experimental setting, I suppose that SVM_s (trained using source training data) also includes the transformed data from the target domain. Otherwise, I don't understand how the performance can increase by increasing the number of labeled target examples.
BBIbj9w8Lvj8F
Efficient Learning of Domain-invariant Image Representations
[ "Judy Hoffman", "Erik Rodner", "Jeff Donahue", "Kate Saenko", "Trevor Darrell" ]
We present an algorithm that learns representations which explicitly compensate for domain mismatch and which can be efficiently realized as linear classifiers. Specifically, we form a linear transformation that maps features from the target (test) domain to the source (training) domain as part of training the classifier. We optimize both the transformation and classifier parameters jointly, and introduce an efficient cost function based on misclassification loss. Our method combines several features previously unavailable in a single algorithm: multi-class adaptation through representation learning, ability to map across heterogeneous feature spaces, and scalability to large datasets. We present experiments on several image datasets that demonstrate improved accuracy and computational advantages compared to previous approaches.
[ "image representations", "domain", "efficient learning", "algorithm", "representations", "domain mismatch", "linear classifiers", "linear transformation", "features", "target" ]
https://openreview.net/pdf?id=BBIbj9w8Lvj8F
https://openreview.net/forum?id=BBIbj9w8Lvj8F
FPpzPM-IHKPkZ
comment
1,362,971,820,000
Ua0HJI2r-Waro
[ "everyone" ]
[ "Judy Hoffman, Erik Rodner, Jeff Donahue, Trevor Darrell, Kate Saenko" ]
ICLR.cc/2013/conference
2013
reply: Thank you for your feedback. We would like to start by clarifying a few points from your comments section. First, in our first experiment (the standard domain adaptation setting), SVM_t is the classifier trained with only the limited available data from the target domain. So, for example, when we're looking at the shift from amazon to webcam (a->w), we have a lot of training data from amazon and a very small amount from the webcam dataset. SVM_t for this example would be an SVM trained on just the small amount of data from webcam. Note that in the new category experiment setting it is not possible to train SVM_t because there are some categories that have no labeled examples in the target. Second, for our last experiment, SVM_s does not (and should not) change as the number of points in the target is increased. SVM_s is an SVM classifier trained using only source data. In the figure it is represented by the dotted cyan line, which remains constant (at around 42%) as the number of labeled target examples grows. As a third point, if we did have a metric to determine the adaptability of a (source, target) domain pair, then we could simply choose to use the source data which is most adaptable to our target data. However, [15] provides a metric to determine a 'distance' between the source and target subspaces, not necessarily an adaptability metric. The two might be correlated depending on the adaptation algorithm you use. Namely, if a (source, target) pair is 'close' you might assume it is easily adaptable. But with our method we learn a transformation between the two spaces, so it's possible for a (source, target) pair to initially be very different according to the metric from [15], but still be very adaptable. For example: in [15] the metric said that Caltech was most similar to Amazon, followed by Webcam, followed by Dslr. However, if you look at Table 1 you see that we obtained higher accuracy when adapting from dslr->caltech than from webcam->caltech. So even though webcam was initially more similar to caltech than dslr was, we find that dslr is more 'adaptable' to caltech. Finally, the idea of using more distinct domains or even different modalities is very interesting to us and is something we are considering for future work. We do feel that the experiments we present justify our claims that our algorithm performs comparably to or better than state-of-the-art techniques and is simultaneously applicable to a larger variety of possible adaptation scenarios.
OVyHViMbHRm8c
Visual Objects Classification with Sliding Spatial Pyramid Matching
[ "Hao Wooi Lim", "Yong Haur Tay" ]
We present a method for visual object classification using only a single feature, transformed color SIFT, with a variant of Spatial Pyramid Matching (SPM) that we call Sliding Spatial Pyramid Matching (SSPM), trained with an ensemble of linear regression models (provided by LINEAR) to obtain a state-of-the-art result on Caltech-101 of 83.46%. SSPM is a special version of SPM in which, instead of dividing an image into K regions, a subwindow of fixed size is slid across the image with a fixed step size. For each subwindow, a histogram of visual words is generated. To obtain the visual vocabulary, instead of performing K-means clustering, we randomly pick N exemplars from the training set and encode them with a soft non-linear mapping method. We then train 15 models, each with a different visual word size, using linear regression. All 15 models are then averaged together to form a single strong model.
[ "visual objects classification", "spatial pyramid", "spatial pyramid matching", "spm", "sspm", "linear regression", "image", "subwindow", "models", "visual object classification" ]
https://openreview.net/pdf?id=OVyHViMbHRm8c
https://openreview.net/forum?id=OVyHViMbHRm8c
-MIjMM8a1GMYx
review
1,362,430,380,000
OVyHViMbHRm8c
[ "everyone" ]
[ "anonymous reviewer 9ba5" ]
ICLR.cc/2013/conference
2013
title: review of Visual Objects Classification with Sliding Spatial Pyramid Matching review: Summary of contributions: The paper presented a method to achieve state-of-the-art accuracy on the object recognition benchmark Caltech101. The method used two major ingredients: 1. a sliding window of histograms (called sliding spatial pyramid matching), 2. randomized vocabularies to generate different models and combine them. The authors claimed that, using only one image feature (transformed color SIFT), the method achieved really good results on Caltech101. Assessment of novelty and quality: Though the accuracy looks impressive, the paper offers limited research value to the machine learning community. The success is largely engineering, lacking insights that are informative to readers. The sliding window representation does not explore multiple scales, so I don't understand why it is still called a 'pyramid'. I hope the authors will try the method on large-scale datasets like ImageNet. If good results are obtained, the work will be of great value to applications.
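As a concrete illustration of the sliding-window pooling discussed above, here is a rough NumPy sketch of histogram pooling over a grid of pre-computed visual-word assignments. The window size, step size, and vocabulary size are hypothetical, and the feature encoding and ensemble of linear regressors used in the paper are omitted.

```python
import numpy as np

def sliding_window_histograms(word_map, vocab_size, win=64, step=32):
    """Pool hard visual-word assignments with a sliding subwindow.

    word_map : 2-D integer array, one visual-word index per dense SIFT location.
    Returns an array of shape (n_windows, vocab_size), one histogram per window.
    """
    H, W = word_map.shape
    hists = []
    for top in range(0, H - win + 1, step):
        for left in range(0, W - win + 1, step):
            patch = word_map[top:top + win, left:left + win]
            h = np.bincount(patch.ravel(), minlength=vocab_size).astype(float)
            h /= max(h.sum(), 1.0)          # L1-normalise each window histogram
            hists.append(h)
    return np.vstack(hists)

# Hypothetical usage: a 256x256 grid of word indices from a 1024-word vocabulary.
rng = np.random.default_rng(0)
word_map = rng.integers(0, 1024, size=(256, 256))
feat = sliding_window_histograms(word_map, vocab_size=1024).ravel()  # image descriptor
```

Concatenating the per-window histograms yields the image descriptor; in the paper this kind of representation is then used to train several linear models with different vocabularies, whose outputs are averaged.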
OVyHViMbHRm8c
Visual Objects Classification with Sliding Spatial Pyramid Matching
[ "Hao Wooi Lim", "Yong Haur Tay" ]
We present a method for visual object classification using only a single feature, transformed color SIFT, with a variant of Spatial Pyramid Matching (SPM) that we call Sliding Spatial Pyramid Matching (SSPM), trained with an ensemble of linear regression models (provided by LINEAR) to obtain a state-of-the-art result on Caltech-101 of 83.46%. SSPM is a special version of SPM in which, instead of dividing an image into K regions, a subwindow of fixed size is slid across the image with a fixed step size. For each subwindow, a histogram of visual words is generated. To obtain the visual vocabulary, instead of performing K-means clustering, we randomly pick N exemplars from the training set and encode them with a soft non-linear mapping method. We then train 15 models, each with a different visual word size, using linear regression. All 15 models are then averaged together to form a single strong model.
[ "visual objects classification", "spatial pyramid", "spatial pyramid matching", "spm", "sspm", "linear regression", "image", "subwindow", "models", "visual object classification" ]
https://openreview.net/pdf?id=OVyHViMbHRm8c
https://openreview.net/forum?id=OVyHViMbHRm8c
mqGM7L9xJ-7Oz
review
1,362,272,400,000
OVyHViMbHRm8c
[ "everyone" ]
[ "anonymous reviewer 9dc6" ]
ICLR.cc/2013/conference
2013
title: review of Visual Objects Classification with Sliding Spatial Pyramid Matching review: This paper replaces the pyramidal pooling of spatial pyramid matching with a sliding-window style pooling. By using this method and color SIFT descriptors, state-of-the-art results are obtained on the Caltech-101 dataset (83.5% accuracy). The contribution in this paper would be rather slight as is, but this is all the more true since it seems the idea of using sliding-window pooling has already appeared in an older paper, with good results (they call the sliding windows 'components'): Chunjie Zhang, Jing Liu, Qi Tian, Yanjun Han, Hanqing Lu, Songde Ma, 'A Boosting Sparsity Constrained Bi-Linear Model for Object Recognition', IEEE Multimedia, 2012. Simply using it with color SIFT descriptors does not constitute enough novelty for accepting this paper.
GgtWGz7e5_MeB
Joint Space Neural Probabilistic Language Model for Statistical Machine Translation
[ "Tsuyoshi Okita" ]
A neural probabilistic language model (NPLM) offers a way to achieve better perplexity than an n-gram language model and its smoothed variants. This paper investigates its application to bilingual NLP, specifically Statistical Machine Translation (SMT). We focus on the perspective that an NPLM has the potential to bring potentially `huge' monolingual resources to bear on `resource-constrained' bilingual resources. We introduce an ngram-HMM language model as an NPLM using a non-parametric Bayesian construction. In order to facilitate its application to various tasks, we propose a joint space model of the ngram-HMM language model. We show an experiment on system combination in the area of SMT. One discovery was that our treatment of noise improved the results by 0.20 BLEU points when the NPLM is trained on a relatively small corpus, in our case 500,000 sentence pairs, which is often the case due to the long training time of an NPLM.
[ "nplm", "language model", "statistical machine translation", "smt", "idea", "better perplexity", "smoothed language models" ]
https://openreview.net/pdf?id=GgtWGz7e5_MeB
https://openreview.net/forum?id=GgtWGz7e5_MeB
MUE4IYdQ_XMbN
review
1,360,788,360,000
GgtWGz7e5_MeB
[ "everyone" ]
[ "anonymous reviewer 5328" ]
ICLR.cc/2013/conference
2013
title: review of Joint Space Neural Probabilistic Language Model for Statistical Machine Translation review: The paper describes a Bayesian nonparametric HMM augmented with a hierarchical Pitman-Yor language model and slightly extends it by introducing conditioning on auxiliary inputs, possibly at each timestep. The observations are used for incorporating information from a separately trained model, such as LDA. In spite of the title and the abstract, the paper has nothing to do with neural language models and very little to do with representation learning, as the author bizarrely uses the term NPLM to refer to the above n-gram HMM model. The model is evaluated as a part of a machine translation pipeline. This is a very poorly written paper. The quality of writing makes it at times very difficult to understand what exactly has been done. The paper makes no significant contributions from the machine learning standpoint, as what the author calls the 'n-gram HMM' is not novel, having been introduced by Blunsom & Cohn in 2011. The only material related to representation learning is not new either, as it involves running LDA on documents. The rest of the paper is about tweaking a translation pipeline and is far too specialized for ICLR. Reference: Blunsom, Phil, and Trevor Cohn. 'A hierarchical Pitman-Yor process HMM for unsupervised part of speech induction.' Proceedings of the 49th Annual Meeting of the ACL, 2011.
GgtWGz7e5_MeB
Joint Space Neural Probabilistic Language Model for Statistical Machine Translation
[ "Tsuyoshi Okita" ]
A neural probabilistic language model (NPLM) offers a way to achieve better perplexity than an n-gram language model and its smoothed variants. This paper investigates its application to bilingual NLP, specifically Statistical Machine Translation (SMT). We focus on the perspective that an NPLM has the potential to bring potentially `huge' monolingual resources to bear on `resource-constrained' bilingual resources. We introduce an ngram-HMM language model as an NPLM using a non-parametric Bayesian construction. In order to facilitate its application to various tasks, we propose a joint space model of the ngram-HMM language model. We show an experiment on system combination in the area of SMT. One discovery was that our treatment of noise improved the results by 0.20 BLEU points when the NPLM is trained on a relatively small corpus, in our case 500,000 sentence pairs, which is often the case due to the long training time of an NPLM.
[ "nplm", "language model", "statistical machine translation", "smt", "idea", "better perplexity", "smoothed language models" ]
https://openreview.net/pdf?id=GgtWGz7e5_MeB
https://openreview.net/forum?id=GgtWGz7e5_MeB
A6lxA54Jzv1yo
review
1,362,021,540,000
GgtWGz7e5_MeB
[ "everyone" ]
[ "anonymous reviewer a273" ]
ICLR.cc/2013/conference
2013
title: review of Joint Space Neural Probabilistic Language Model for Statistical Machine Translation review: The author proposes an 'n-gram HMM language model', which is inconsistent with the title of the paper. Also, the introduction is confusing and misleading. Overall the paper presents weak results. For example, in Section 4 the perplexity results are only insignificantly better than those of n-gram models and, most importantly, are not reproducible: it is not even mentioned how much training data is used, what the order of the n-gram models is, etc. The author uses IRSTLM, although SRILM is cited too (giving it credit for n-gram language modeling, for some unknown reason); overall, many citations are unjustified and unrelated to the paper itself (probably included only to make everyone happy). A 0.2 BLEU improvement is generally considered insignificant. I don't see any useful information in the paper that can help others improve their work (rather the opposite). Unless the author can obtain better results (which I honestly believe is not possible with the explored approach), I don't see a reason why this work should be published.
GgtWGz7e5_MeB
Joint Space Neural Probabilistic Language Model for Statistical Machine Translation
[ "Tsuyoshi Okita" ]
A neural probabilistic language model (NPLM) offers a way to achieve better perplexity than an n-gram language model and its smoothed variants. This paper investigates its application to bilingual NLP, specifically Statistical Machine Translation (SMT). We focus on the perspective that an NPLM has the potential to bring potentially `huge' monolingual resources to bear on `resource-constrained' bilingual resources. We introduce an ngram-HMM language model as an NPLM using a non-parametric Bayesian construction. In order to facilitate its application to various tasks, we propose a joint space model of the ngram-HMM language model. We show an experiment on system combination in the area of SMT. One discovery was that our treatment of noise improved the results by 0.20 BLEU points when the NPLM is trained on a relatively small corpus, in our case 500,000 sentence pairs, which is often the case due to the long training time of an NPLM.
[ "nplm", "language model", "statistical machine translation", "smt", "idea", "better perplexity", "smoothed language models" ]
https://openreview.net/pdf?id=GgtWGz7e5_MeB
https://openreview.net/forum?id=GgtWGz7e5_MeB
Ezy1znNS-ZwLb
review
1,361,986,980,000
GgtWGz7e5_MeB
[ "everyone" ]
[ "anonymous reviewer 5a64" ]
ICLR.cc/2013/conference
2013
title: review of Joint Space Neural Probabilistic Language Model for Statistical Machine Translation review: To quote the authors, this paper introduces an n-gram-HMM language model as a neural probabilistic language model (NPLM) using a non-parametric Bayesian construction. The article is really confused and describes a messy mix of different approaches. In the end, it is very hard to understand what the author wanted to do and what he has done. The paper can be improved in many ways before it could be published: the author must clarify the motivations, and the interaction between the neural and HMM models could be described more precisely. In reading order: In the introduction, the author mistakes the MERT process for the translation process. While MERT is used to tune the weights of a log-linear combination of models in order to optimize BLEU, for instance, the NPLM is used as an additional model to re-rank n-best lists. Moreover, the correct citation for MERT is the ACL 2003 paper of F. Och. In section 2, the author introduces an HMM language model. A lot of questions remain: What do the hidden states intend to capture? What is the motivation? What is the joint distribution associated with the graphical model of figure 1? How are the word n-gram distributions related to the hidden states? In section 3, the author enhances the HMM LM with an additional row of hidden states (joint space HMM). At this point the overall goal of the paper is, for me, totally unclear. For the following experimental sections, a lot of information on the setup is missing. The experiments cannot be reproduced based on the content of the paper. For example, the intrinsic evaluation introduces the ngram-HMM with one or two features, but a very confusing explanation of these features is only provided further in the article. The author does not describe the dataset (there are many Europarl versions), nor the order of the LMs under consideration. In section 5, the following sentence puzzled me: 'Note that although this experiment was done using the ngram-HMM language model, any NPLM may be sufficient for this purpose. In this sense, we use the term NPLM instead of ngram-HMM language model.' Moreover, the first feature is derived from an NPLM, but how this NPLM is learnt, on which dataset, with what parameters, what the order of the model is, and how this feature is derived: I could not find the answers in the article. The rest of the paper is more and more unclear. At the end, the author shows a BLEU improvement of 0.2 on a system combination task. While I don't understand the models used, the gain is really small and I wonder if it is significant. For comparison's sake, MBR decoding usually provides a BLEU improvement of at least 0.2.
6ZY7ZnIK7kZKy
An Efficient Sufficient Dimension Reduction Method for Identifying Genetic Variants of Clinical Significance
[ "Momiao Xiong", "Long Ma" ]
Fast and cheaper next-generation sequencing technologies will generate unprecedentedly massive and high-dimensional genomic and epigenomic variation data. In the near future, a routine part of the medical record will include sequenced genomes. A fundamental question is how to efficiently extract the genomic and epigenomic variants of clinical utility that will provide information for optimal wellness and intervention strategies. The traditional paradigm for identifying variants of clinical validity is to test the association of the variants. However, significantly associated genetic variants may or may not be useful for the diagnosis and prognosis of diseases. An alternative to association studies for finding genetic variants of predictive utility is to systematically search for variants that contain sufficient information for phenotype prediction. To achieve this, we introduce the concepts of sufficient dimension reduction (SDR) and the coordinate hypothesis, which project the original high-dimensional data to a very low-dimensional space while preserving all information on the response phenotypes. We then formulate the clinically significant genetic variant discovery problem as a sparse SDR problem and develop algorithms that can select significant genetic variants from up to ten million predictors, with the aid of dividing the SDR for the whole genome into a number of sub-SDR problems defined for genomic regions. The sparse SDR is in turn formulated as a sparse optimal scoring problem, but with a penalty that can remove row vectors from the basis matrix. To speed up computation, we develop a modified alternating direction method of multipliers to solve the sparse optimal scoring problem, which can easily be implemented in parallel. To illustrate its application, the proposed method is applied to simulation data and the NHLBI's Exome Sequencing Project dataset.
[ "genetic variants", "variants", "genomic", "information", "clinical significance", "clinical significance fast", "cheaper next generation", "technologies", "massive" ]
https://openreview.net/pdf?id=6ZY7ZnIK7kZKy
https://openreview.net/forum?id=6ZY7ZnIK7kZKy
x8ctSDlKbu8KB
review
1,362,277,800,000
6ZY7ZnIK7kZKy
[ "everyone" ]
[ "anonymous reviewer 1ff5" ]
ICLR.cc/2013/conference
2013
title: review of An Efficient Sufficient Dimension Reduction Method for Identifying Genetic Variants of Clinical Significance review: Summary of the paper: This paper proposes a sparse extension of sufficient dimension reduction (the problem of finding a linear subspace so that the output and the input are conditionally independent given the projection of the input onto that subspace). The sparse extension is formulated through the eigenvalue formulation of sliced inverse regression. The method is finally applied to identifying genetic variants of clinical significance. Comments: - Other sparse formulations of SIR have been proposed and the new method should be compared to them (two are listed below): Lexin Li and Christopher J. Nachtsheim. Sparse Sliced Inverse Regression. Technometrics, Volume 48, Issue 4, 2006. Lexin Li. Sparse sufficient dimension reduction. Biometrika (2007) 94(3): 603-613. - In the experiments, it would have been nice to see another method run on these data. - The paper appears out of scope for the conference.
6ZY7ZnIK7kZKy
An Efficient Sufficient Dimension Reduction Method for Identifying Genetic Variants of Clinical Significance
[ "Momiao Xiong", "Long Ma" ]
Fast and cheaper next-generation sequencing technologies will generate unprecedentedly massive and high-dimensional genomic and epigenomic variation data. In the near future, a routine part of the medical record will include sequenced genomes. A fundamental question is how to efficiently extract the genomic and epigenomic variants of clinical utility that will provide information for optimal wellness and intervention strategies. The traditional paradigm for identifying variants of clinical validity is to test the association of the variants. However, significantly associated genetic variants may or may not be useful for the diagnosis and prognosis of diseases. An alternative to association studies for finding genetic variants of predictive utility is to systematically search for variants that contain sufficient information for phenotype prediction. To achieve this, we introduce the concepts of sufficient dimension reduction (SDR) and the coordinate hypothesis, which project the original high-dimensional data to a very low-dimensional space while preserving all information on the response phenotypes. We then formulate the clinically significant genetic variant discovery problem as a sparse SDR problem and develop algorithms that can select significant genetic variants from up to ten million predictors, with the aid of dividing the SDR for the whole genome into a number of sub-SDR problems defined for genomic regions. The sparse SDR is in turn formulated as a sparse optimal scoring problem, but with a penalty that can remove row vectors from the basis matrix. To speed up computation, we develop a modified alternating direction method of multipliers to solve the sparse optimal scoring problem, which can easily be implemented in parallel. To illustrate its application, the proposed method is applied to simulation data and the NHLBI's Exome Sequencing Project dataset.
[ "genetic variants", "variants", "genomic", "information", "clinical significance", "clinical significance fast", "cheaper next generation", "technologies", "massive" ]
https://openreview.net/pdf?id=6ZY7ZnIK7kZKy
https://openreview.net/forum?id=6ZY7ZnIK7kZKy
rLP-LGuBmzyRt
review
1,362,195,240,000
6ZY7ZnIK7kZKy
[ "everyone" ]
[ "anonymous reviewer 34e0" ]
ICLR.cc/2013/conference
2013
title: review of An Efficient Sufficient Dimension Reduction Method for Identifying Genetic Variants of Clinical Significance review: The paper describes the application of a supervised projection method (Sufficient Dimension Reduction - SDR) to a regression problem in bioinformatics. SDR attempts to find a linear projection space such that the response variable depends on the inputs only through their linear projection. The authors make a brief presentation of SDR and formulate it as an optimal scoring problem. It takes the form of a constrained optimization problem which can be solved using an alternating minimization procedure. This method is then applied to prediction problems in bioinformatics. The form and organization of the paper are not adequate. The projection method is only briefly outlined. The notation is not correct, e.g. the same notation is used for random variables and data matrices, and some of the notations or abbreviations are not introduced. The description of the applications remains extremely unclear. The abstract and the contribution do not correspond. The format of the paper is not the NIPS format. The proposed method is an adaptation of existing work. The formulation of SDR as a constrained problem is not new. The contribution here might be a variant of the alternating minimization technique used for this problem. The application is only briefly sketched and cannot really be appreciated from this description. Pros: Describes an application of SDR, better known in the statistical community, which is an alternative to other matrix factorization techniques used in machine learning. Cons: Form and organization of the paper; weak technical contribution, both algorithmic and applicative.
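For readers less familiar with sufficient dimension reduction, the sketch below implements the classical sliced inverse regression (SIR) estimator in NumPy, which is the eigenvalue formulation the reviews refer to. It is not the sparse optimal-scoring/ADMM algorithm of the paper; the slice count, target dimension, and toy data are hypothetical choices made only for illustration.

```python
import numpy as np

def sliced_inverse_regression(X, y, n_slices=10, n_directions=2):
    """Classical SIR: estimate a basis of the sufficient dimension-reduction subspace."""
    n, p = X.shape
    # Standardise (whiten) the predictors.
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    L = np.linalg.cholesky(cov + 1e-8 * np.eye(p))
    Z = np.linalg.solve(L, (X - mu).T).T          # whitened predictors

    # Slice the response and average the whitened predictors within each slice.
    order = np.argsort(y)
    M = np.zeros((p, p))
    for idx in np.array_split(order, n_slices):
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)

    # Leading eigenvectors of M span the reduction subspace in the whitened scale.
    _, eigvecs = np.linalg.eigh(M)
    V = eigvecs[:, ::-1][:, :n_directions]
    return np.linalg.solve(L.T, V)                # map directions back to the X scale

# Toy example: y depends on X only through the single direction x1 + 0.5*x2.
rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 20))
y = (X[:, 0] + 0.5 * X[:, 1]) ** 3 + 0.1 * rng.standard_normal(2000)
B = sliced_inverse_regression(X, y, n_directions=1)   # B should align with (1, 0.5, 0, ...)
```

The paper's contribution, as summarized in the reviews, is to add a row-sparsity penalty to an optimal-scoring reformulation of this kind of problem and to solve it with ADMM, which this toy estimator does not attempt.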
N_c1XDpyus_yP
A Nested HDP for Hierarchical Topic Models
[ "John Paisley", "Chong Wang", "David Blei", "Michael I. Jordan" ]
We develop a nested hierarchical Dirichlet process (nHDP) for hierarchical topic modeling. The nHDP is a generalization of the nested Chinese restaurant process (nCRP) that allows each word to follow its own path to a topic node according to a document-specific distribution on a shared tree. This alleviates the rigid, single-path formulation of the nCRP, allowing a document to more easily express thematic borrowings as a random effect. We demonstrate our algorithm on 1.8 million documents from The New York Times.
[ "nested hdp", "hierarchical topic models", "nhdp", "ncrp", "hierarchical topic modeling", "generalization", "word", "path" ]
https://openreview.net/pdf?id=N_c1XDpyus_yP
https://openreview.net/forum?id=N_c1XDpyus_yP
BAmMaGEF72a0w
review
1,362,389,460,000
N_c1XDpyus_yP
[ "everyone" ]
[ "anonymous reviewer 95fc" ]
ICLR.cc/2013/conference
2013
title: review of A Nested HDP for Hierarchical Topic Models review: This paper presents a novel variant of the nCRP that overcomes the latter's main limitation, namely, that a document necessarily has to use topics from a specific path in the tree. This is accomplished by combining ideas from the HDP with the nCRP: the entire nCRP tree is replicated for each document, and a sample from the DP at each node of the original tree is used as a shared base distribution for each document's own DP. The idea is novel and is an important contribution in the area of unsupervised large-scale text modeling. Although the paper is strong on novelty, it seems to be incomplete in terms of presenting any evidence that the model actually works and is better than the original nCRP model. Does it learn better topics than the nCRP? Is the new model a better predictor of text? Does it produce a better hierarchy of topics than the original model? Does the better representation of documents translate into better performance on any extrinsic task? Without any preliminary answers to these questions, in my mind, the work is incomplete at best.
N_c1XDpyus_yP
A Nested HDP for Hierarchical Topic Models
[ "John Paisley", "Chong Wang", "David Blei", "Michael I. Jordan" ]
We develop a nested hierarchical Dirichlet process (nHDP) for hierarchical topic modeling. The nHDP is a generalization of the nested Chinese restaurant process (nCRP) that allows each word to follow its own path to a topic node according to a document-specific distribution on a shared tree. This alleviates the rigid, single-path formulation of the nCRP, allowing a document to more easily express thematic borrowings as a random effect. We demonstrate our algorithm on 1.8 million documents from The New York Times.
[ "nested hdp", "hierarchical topic models", "nhdp", "ncrp", "hierarchical topic modeling", "generalization", "word", "path" ]
https://openreview.net/pdf?id=N_c1XDpyus_yP
https://openreview.net/forum?id=N_c1XDpyus_yP
WZaI2aHNOvDz7
review
1,362,389,640,000
N_c1XDpyus_yP
[ "everyone" ]
[ "anonymous reviewer 95fc" ]
ICLR.cc/2013/conference
2013
review: no additional comments.
N_c1XDpyus_yP
A Nested HDP for Hierarchical Topic Models
[ "John Paisley", "Chong Wang", "David Blei", "Michael I. Jordan" ]
We develop a nested hierarchical Dirichlet process (nHDP) for hierarchical topic modeling. The nHDP is a generalization of the nested Chinese restaurant process (nCRP) that allows each word to follow its own path to a topic node according to a document-specific distribution on a shared tree. This alleviates the rigid, single-path formulation of the nCRP, allowing a document to more easily express thematic borrowings as a random effect. We demonstrate our algorithm on 1.8 million documents from The New York Times.
[ "nested hdp", "hierarchical topic models", "nhdp", "ncrp", "hierarchical topic modeling", "generalization", "word", "path" ]
https://openreview.net/pdf?id=N_c1XDpyus_yP
https://openreview.net/forum?id=N_c1XDpyus_yP
cBZ06aJhuH6Nw
review
1,362,389,580,000
N_c1XDpyus_yP
[ "everyone" ]
[ "anonymous reviewer 95fc" ]
ICLR.cc/2013/conference
2013
review: no additional comments.
N_c1XDpyus_yP
A Nested HDP for Hierarchical Topic Models
[ "John Paisley", "Chong Wang", "David Blei", "Michael I. Jordan" ]
We develop a nested hierarchical Dirichlet process (nHDP) for hierarchical topic modeling. The nHDP is a generalization of the nested Chinese restaurant process (nCRP) that allows each word to follow its own path to a topic node according to a document-specific distribution on a shared tree. This alleviates the rigid, single-path formulation of the nCRP, allowing a document to more easily express thematic borrowings as a random effect. We demonstrate our algorithm on 1.8 million documents from The New York Times.
[ "nested hdp", "hierarchical topic models", "nhdp", "ncrp", "hierarchical topic modeling", "generalization", "word", "path" ]
https://openreview.net/pdf?id=N_c1XDpyus_yP
https://openreview.net/forum?id=N_c1XDpyus_yP
s3Zn3ZANM4Twv
review
1,362,170,460,000
N_c1XDpyus_yP
[ "everyone" ]
[ "anonymous reviewer 7555" ]
ICLR.cc/2013/conference
2013
title: review of A Nested HDP for Hierarchical Topic Models review: The paper introduces a natural extension of the nested Chinese restaurant process, whose main limitation was that a single path in the tree (from the root to a leaf) is chosen for each individual document. In this work, a document-specific tree is drawn (with associated switching probabilities), which is then used to generate the words in the document. Consequently, the words can represent very different topics not necessarily associated with the same path in the tree. Though the work is clearly interesting and important for the topic modeling community, the workshop paper could potentially be improved. The main problem is clearly the length of the submission, which does not provide any kind of detail (less than 2 pages of content). Though additional information can be found in the cited arXiv paper, I think it would make sense to include in the workshop paper at least the comparison in terms of perplexity (showing that it substantially outperforms the nCRP) and maybe some details on the efficiency of inference. Conversely, the page-long Figure 2 could be reduced or removed to fit this content. Overall, the work is quite interesting and seems to be a perfect fit for the conference. Given that an extended version is publicly available, I do not think that the above comments are really important. Pros: -- a natural extension of the previous model which achieves respectable results on standard benchmarks (though results are not included in the submission) Cons: -- a little more information about the model and its performance could be included even in a 3-page workshop paper.
kk_XkMO0-dP8W
Feature Learning in Deep Neural Networks - A Study on Speech Recognition Tasks
[ "Dong Yu", "Mike Seltzer", "Jinyu Li", "Jui-Ting Huang", "Frank Seide" ]
Recent studies have shown that deep neural networks (DNNs) perform significantly better than shallow networks and Gaussian mixture models (GMMs) on large vocabulary speech recognition tasks. In this paper we argue that the difficulty in speech recognition is primarily caused by the high variability in speech signals. DNNs, which can be considered a joint model of a nonlinear feature transform and a log-linear classifier, achieve improved recognition accuracy by extracting discriminative internal representations that are less sensitive to small perturbations in the input features. However, if test samples are very dissimilar to training samples, DNNs perform poorly. We demonstrate these properties empirically using a series of recognition experiments on mixed narrowband and wideband speech and speech distorted by environmental noise.
[ "deep neural networks", "study", "dnns", "feature", "speech recognition tasks", "perform", "shallow networks", "gaussian mixture models", "gmms" ]
https://openreview.net/pdf?id=kk_XkMO0-dP8W
https://openreview.net/forum?id=kk_XkMO0-dP8W
NFxrNAiI-clI8
review
1,361,169,180,000
kk_XkMO0-dP8W
[ "everyone" ]
[ "anonymous reviewer 1860" ]
ICLR.cc/2013/conference
2013
title: review of Feature Learning in Deep Neural Networks - A Study on Speech Recognition Tasks review: This paper is by the group that did the first large-scale speech recognition experiments on deep neural nets, and popularized the technique. It contains various analyses and experiments relating to this setup. Ultimately I was not really sure what the main point of the paper was. There is some analysis of whether the network amplifies or reduces differences in inputs as we go through the layers; there are some experiments relating to feature normalization techniques (such as VTLN) and how they interact with neural nets; and there are some experiments showing that the neural network does not do very well on narrowband data unless it has been trained on narrowband data in addition to wideband data, and also showing (by looking at the intermediate activations) that the network learns to be invariant to wideband/narrowband differences if it is trained on both kinds of input. Although the paper itself is kind of scattered, and I'm not really sure that it makes any major contributions, I would suggest the conference organizers strongly consider accepting it, because unlike (I imagine) many of the other papers, it comes from a group who are applying these techniques to real-world problems and having considerable success. I think their perspective would be valuable, and accepting it would send the message that this conference values serious, real-world applications, which I think would be a good thing. -- Below are some suggestions for minor fixes to the paper. eq. 4, prime ( ' ) missing after sigma on top right. sec. 3.2, you do not explain the difference between average norm and maximum norm. What type of matrix norm do you mean, and what are the average and maximum taken over? After 'narrowband input feature pairs', one of your subscripts needs to be changed.
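The layer-by-layer perturbation analysis mentioned in this review can be illustrated with a small NumPy sketch: a randomly initialized sigmoid network (hypothetical sizes and weight scale, not the trained CD-DNN-HMM studied in the paper) is fed an input and a slightly perturbed copy, and the norm of the activation difference is tracked at each layer.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Hypothetical layer sizes and weight scale; the paper's networks are trained, not random.
sizes = [429, 2048, 2048, 2048, 2048]
weights = [0.05 * rng.standard_normal((m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

x = rng.standard_normal(sizes[0])
x_pert = x + 0.01 * rng.standard_normal(sizes[0])   # small input perturbation

h, h_pert = x, x_pert
for layer, (W, b) in enumerate(zip(weights, biases), start=1):
    h = sigmoid(h @ W + b)
    h_pert = sigmoid(h_pert @ W + b)
    print(f"layer {layer}: ||h - h_tilde|| = {np.linalg.norm(h - h_pert):.4f}")
# With small random weights, the sigmoid's bounded slope (|sigma'(a)| <= 1/4) tends to
# shrink the difference layer by layer; a trained network need not behave the same way,
# which is exactly what the paper measures on real acoustic models.
```

Whether the difference shrinks or grows depends on the weight scale and on how saturated the sigmoids are, which is the quantity the paper inspects on trained networks.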
kk_XkMO0-dP8W
Feature Learning in Deep Neural Networks - A Study on Speech Recognition Tasks
[ "Dong Yu", "Mike Seltzer", "Jinyu Li", "Jui-Ting Huang", "Frank Seide" ]
Recent studies have shown that deep neural networks (DNNs) perform significantly better than shallow networks and Gaussian mixture models (GMMs) on large vocabulary speech recognition tasks. In this paper we argue that the difficulty in speech recognition is primarily caused by the high variability in speech signals. DNNs, which can be considered a joint model of a nonlinear feature transform and a log-linear classifier, achieve improved recognition accuracy by extracting discriminative internal representations that are less sensitive to small perturbations in the input features. However, if test samples are very dissimilar to training samples, DNNs perform poorly. We demonstrate these properties empirically using a series of recognition experiments on mixed narrowband and wideband speech and speech distorted by environmental noise.
[ "deep neural networks", "study", "dnns", "feature", "speech recognition tasks", "perform", "shallow networks", "gaussian mixture models", "gmms" ]
https://openreview.net/pdf?id=kk_XkMO0-dP8W
https://openreview.net/forum?id=kk_XkMO0-dP8W
ySpzfXa4-ryCM
review
1,362,161,880,000
kk_XkMO0-dP8W
[ "everyone" ]
[ "anonymous reviewer 778f" ]
ICLR.cc/2013/conference
2013
title: review of Feature Learning in Deep Neural Networks - A Study on Speech Recognition Tasks review: * Comments ** Summary The paper uses examples from speech recognition to make the following points about feature learning in deep neural networks: 1. Speech recognition performance improves with deeper networks, but the gain per layer diminishes. 2. The internal representations in a trained deep network become increasingly insensitive to small perturbations in the input with depth. 3. Deep networks are unable to extrapolate to test samples that are substantially different from the training samples. The paper then shows that deep neural networks are able to learn representations that are comparatively invariant to two important sources of variability in speech: speaker variability and environmental distortions. ** Pluses - The work here is an important contribution because it comes from the application of deep learning to real-world problems in speech recognition, and it compares deep learning to classical state-of-the-art approaches including discriminatively trained GMM-HMM models, vocal tract length normalization, feature-space maximum likelihood linear regression, noise-adaptive training, and vector Taylor series compensation. - In the machine learning community, the deep learning literature has been dominated by computer vision applications. It is good to show applications in other domains that have different characteristics. For example, speech recognition is inherently a structured classification problem, while many vision applications are simple classification problems. ** Minuses - There is not a lot of new material here. Most of the results have been published elsewhere. ** Recommendation I'd like to see this paper accepted because 1. it makes important points about both the advantages and limitations of current approaches to deep learning, illustrating them with practical examples from speech recognition and comparing deep learning against solid baselines; and 2. it brings speech recognition into the broader conversation on deep learning. * Minor Issues - The first (unnumbered) equation is correct; however, I don't think that viewing the internal layers as computing posterior probabilities over hidden binary vectors provides any useful insights. - There is an error in the right hand side of the unnumbered equation preceding Equation 4: it should be sigma prime (the derivative), not sigma. - 'Senones' is jargon that is very specific to speech recognition and may not be understood by a broader machine learning audience. - The VTS acronym for vector Taylor series compensation is never defined in the paper. 
* Proofreading
- the performance of the ASR systems -> the performance of ASR systems
- By using the context-dependent deep neural network -> By using context-dependent deep neural network
- the feature learning interpretations of DNNs -> the feature learning interpretation of DNNs
- a DNN can interpreted as -> a DNN can be interpreted as
- whose senone alignment label was generated -> whose HMM state alignment labels were generated
- the deep models consistently outperforms the shallow -> the deep models consistently outperform the shallow
- This is reflected in right column -> This is reflected in the right column
- 3.2 DNN learns more invariant features -> 3.2 DNNs learn more invariant features
- is that DNN learns more invariant -> is that DNNs learn more invariant
- since the differences needs to be -> since the differences need to be
- that the small perturbations in the input -> that small perturbations in the input
- with the central frequency of the first higher filter bank at 4 kHz -> with the center frequency of the first filter in the higher filter bank at 4 kHz
- between p_y|x(s_j|x_wb) and p_y|x(s_j|x_nb -> between p_y|x(s_j|x_wb) and p_y|x(s_j|x_nb)
- Note that the transform is applied before augmenting neighbor frames. -> Note that the transform is applied to individual frames, prior to concatenation.
- demonstrated through a speech recognition experiments -> demonstrated through speech recognition experiments
kk_XkMO0-dP8W
Feature Learning in Deep Neural Networks - A Study on Speech Recognition Tasks
[ "Dong Yu", "Mike Seltzer", "Jinyu Li", "Jui-Ting Huang", "Frank Seide" ]
Recent studies have shown that deep neural networks (DNNs) perform significantly better than shallow networks and Gaussian mixture models (GMMs) on large vocabulary speech recognition tasks. In this paper we argue that the difficulty in speech recognition is primarily caused by the high variability in speech signals. DNNs, which can be considered a joint model of a nonlinear feature transform and a log-linear classifier, achieve improved recognition accuracy by extracting discriminative internal representations that are less sensitive to small perturbations in the input features. However, if test samples are very dissimilar to training samples, DNNs perform poorly. We demonstrate these properties empirically using a series of recognition experiments on mixed narrowband and wideband speech and speech distorted by environmental noise.
[ "deep neural networks", "study", "dnns", "feature", "speech recognition tasks", "perform", "shallow networks", "gaussian mixture models", "gmms" ]
https://openreview.net/pdf?id=kk_XkMO0-dP8W
https://openreview.net/forum?id=kk_XkMO0-dP8W
eMmX26-PXaMJN
review
1,362,128,940,000
kk_XkMO0-dP8W
[ "everyone" ]
[ "anonymous reviewer cf74" ]
ICLR.cc/2013/conference
2013
title: review of Feature Learning in Deep Neural Networks - A Study on Speech Recognition Tasks review: The paper presents an analysis of the performance of DNN acoustic models in tasks where there is a mismatch between training and test data. Most of the results do not seem to be novel, and were already published in several papers. The paper is well written and mostly easy to follow. Pros: Although there is nothing surprising in the paper, the study may motivate others to investigate DNNs. Cons: The authors could have been bolder in ideas and experiments. Comments: Table 1: it would be more convincing to show L x N for variable L and N, such as N=4096, if one wants to prove that many (9) hidden layers are needed to achieve top performance (I'd expect that accuracy saturation would occur with fewer hidden layers if N were increased); moreover, one can investigate architectures that have the same number of parameters but are shallower - for example, the first and last hidden layers can have N=2048 and the hidden layer in between can have N=8192 - this would be a fairer comparison if one wants to claim that 9 hidden layers are better than 3 (obviously, adding more parameters helps, and the current comparison with a 1-hidden-layer NN is completely unfair as the input and output layers have different dimensionality, but one can apply other tricks there to reduce complexity - for example, hierarchical softmax in the output layer, etc.). 'Note that the magnitude of the majority of the weights is typically very small' - note that this is also related to the sizes of the hidden layers; if the hidden layers were very small, the weights would be larger (the output of a neuron is a non-linear function of a weighted sum of inputs; if there are 2048 inputs in the range (0,1), then we can naturally expect the weights to be very small). Section 3 rather shows that neural networks are good at representing smooth functions, which is the opposite of what deep architectures were proposed for. This is another reason to believe that 9 hidden layers are not needed. The results where DNN models perform poorly on data that were not seen during training are not really striking or novel; it would actually be good if the authors tried to overcome this problem in a novel way. For example, one can try to make DNNs more robust by allowing some kind of simple, cheap adaptation at test time. When it comes to capturing VTLN / speaker characteristics, it would be interesting to use longer-context information, either through recurrence or by using features derived from long contexts (such as the previous 2-10 seconds). Table 4 compares relative reductions of WER; however, note that 0% is not reachable on Switchboard. If we assume that human performance is around 5-10% WER, then the difference in relative improvements would be significantly smaller. Also, it is very common that the better the baseline is, the harder it is to gain improvements (as many different techniques actually address the same problems). Also, it is possible that DNNs can learn some weak VTLN, as they typically see longer context information; it would be interesting to see an experiment where a DNN is trained with limited context information (I would expect the WER to increase, but also the relative gain from VTLN to increase).
kk_XkMO0-dP8W
Feature Learning in Deep Neural Networks - A Study on Speech Recognition Tasks
[ "Dong Yu", "Mike Seltzer", "Jinyu Li", "Jui-Ting Huang", "Frank Seide" ]
Recent studies have shown that deep neural networks (DNNs) perform significantly better than shallow networks and Gaussian mixture models (GMMs) on large vocabulary speech recognition tasks. In this paper we argue that the difficulty in speech recognition is primarily caused by the high variability in speech signals. DNNs, which can be considered a joint model of a nonlinear feature transform and a log-linear classifier, achieve improved recognition accuracy by extracting discriminative internal representations that are less sensitive to small perturbations in the input features. However, if test samples are very dissimilar to training samples, DNNs perform poorly. We demonstrate these properties empirically using a series of recognition experiments on mixed narrowband and wideband speech and speech distorted by environmental noise.
[ "deep neural networks", "study", "dnns", "feature", "speech recognition tasks", "perform", "shallow networks", "gaussian mixture models", "gmms" ]
https://openreview.net/pdf?id=kk_XkMO0-dP8W
https://openreview.net/forum?id=kk_XkMO0-dP8W
WWycbHg8XRWuv
review
1,362,989,220,000
kk_XkMO0-dP8W
[ "everyone" ]
[ "Mike Seltzer" ]
ICLR.cc/2013/conference
2013
review: We’d like to thank the reviewers for their comments. We have uploaded a revised version of the paper which we believe addresses the reviewers’ concerns as well as the grammatical issues and typos. We have revised the abstract and introduction to better establish the purpose of the paper. Our goal is to demonstrate that deep neural networks can learn internal representations that are robust to variability in the input, and that this robustness is maintained when large amounts of training data are used. Much work on DNNs has been done on smaller data sets, and historically, in speech recognition, large improvements observed on small systems usually do not translate when applied to large-scale state-of-the-art systems. In addition, the paper contrasts DNN-based systems and their “built in” invariance to a wide variety of variability with GMM-based systems, where algorithms have been designed to combat unwanted variability in a source-specific manner, i.e. they are designed to address a particular mismatch, such as the speaker or the environment. We also believe there is a practical implication of these results: algorithms for addressing this acoustic mismatch in speaker, environment, or other factors, which are standard and essential for GMM-based recognizers, become far less critical and potentially unnecessary for DNN-based recognizers. We think this is important both for setting future research directions and for deploying large-scale systems. Finally, while some of the results have been published previously, we believe the inherent robustness of DNNs to such diverse sources of variability is quite interesting, and is a point that might elude readers unless these results are combined and presented together. We also want to point out that the analysis of sensitivity to input perturbations and all of the results in Section 6 on environmental robustness are new and previously unpublished. We hope that by putting together all these analyses and results in one paper we can provide some insight into the strengths and weaknesses of using a DNN for speech recognition when trained with real-world data.
rtGYtZ-ZKSMzk
Tree structured sparse coding on cubes
[ "Arthur Szlam" ]
A brief description of tree structured sparse coding on the binary cube.
[ "cubes", "sparse coding", "sparse", "brief description", "tree", "binary cube" ]
https://openreview.net/pdf?id=rtGYtZ-ZKSMzk
https://openreview.net/forum?id=rtGYtZ-ZKSMzk
7ESq7YWfqMhHk
review
1,362,001,920,000
rtGYtZ-ZKSMzk
[ "everyone" ]
[ "anonymous reviewer 2f02" ]
ICLR.cc/2013/conference
2013
title: review of Tree structured sparse coding on cubes review: summary: This is a 3-page abstract only. It proposes a low-dimensional representation of data in order to impose a tree structure. It relates to other mixed-norm approaches previously proposed in the literature. Experiments on binarized MNIST show how it becomes robust to added noise. review: I must say I found the abstract very hard to read and would have preferred a longer version to better understand how the model differs from prior work. It's not clear, for instance, how the proposed approach compares to other denoising methods. Nor is it clear what the relation is between the tree-based decomposition and noise in MNIST. Finally, I didn't understand why the model was restricted to binary representations. All this simply says I failed to capture the essence of the proposed approach.
rtGYtZ-ZKSMzk
Tree structured sparse coding on cubes
[ "Arthur Szlam" ]
A brief description of tree structured sparse coding on the binary cube.
[ "cubes", "sparse coding", "sparse", "brief description", "tree", "binary cube" ]
https://openreview.net/pdf?id=rtGYtZ-ZKSMzk
https://openreview.net/forum?id=rtGYtZ-ZKSMzk
axSGN5lBGINJm
review
1,362,831,180,000
rtGYtZ-ZKSMzk
[ "everyone" ]
[ "anonymous reviewer fd41" ]
ICLR.cc/2013/conference
2013
title: review of Tree structured sparse coding on cubes review: The paper extends the widely known idea of tree-structured sparse coding to the Hamming space. Instead of each node being represented by the best linear fit of the corresponding sub-space, it is represented by the best sub-cube. The idea is valid if not extremely original. I’m not sure it has too many applications, though. I think it is more frequent to encounter raw data residing in some Euclidean space, while using the Hamming space for representation (e.g., as in various similarity-preserving hashing techniques). Hence, I believe a more interesting setting would be to have W in R^d, while keeping Z in H^K, i.e., the dictionary atoms are real vectors producing the best linear fit of the corresponding clusters with binary activation coefficients. This will lead to the construction of a hash function. The out-of-sample extension would happen naturally through representation pursuit (which will now be performed over the cube). Pros: 1. A very simple and easy to implement idea extending tree dictionaries to binary data 2. For binary data, it seems to outperform other algorithms in the presented recovery experiment. Cons: 1. The paper reads more like a preliminary writeup than a real paper. The length might be proportional to its contribution, but fixing the typos and adding a conclusion section wouldn’t harm. 2. The experimental result is convincing, but it’s rather anecdotal. I might be missing something, but the author should argue convincingly that representing binary data with a sparse tree-structured dictionary is interesting at all, showing a few real applications. The presented experiment on binarized MNIST digits is very artificial.
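To make the suggested out-of-sample extension a bit more concrete, a greedy representation pursuit over the cube could look roughly as follows (a sketch of the suggestion only, not of the author's algorithm: the tree constraints are ignored, and the dictionary W in R^{d x K}, the input x in R^d and the sparsity level are hypothetical):

import numpy as np

def binary_pursuit(x, W, k_active):
    # Greedily build z in {0,1}^K by switching on, one atom at a time,
    # the column of W that most reduces the residual ||x - W z||^2.
    K = W.shape[1]
    z = np.zeros(K)
    residual = x.astype(float).copy()
    for _ in range(k_active):
        best_j, best_gain = -1, 0.0
        for j in range(K):
            if z[j] == 1:
                continue
            gain = (np.linalg.norm(residual) ** 2
                    - np.linalg.norm(residual - W[:, j]) ** 2)
            if gain > best_gain:
                best_j, best_gain = j, gain
        if best_j < 0:  # no atom improves the fit any further
            break
        z[best_j] = 1.0
        residual = residual - W[:, best_j]
    return z

The resulting binary code z could then serve as the hash of x.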
OznsOsb6sDFeV
Unsupervised Feature Learning for low-level Local Image Descriptors
[ "Christian Osendorfer", "Justin Bayer", "Patrick van der Smagt" ]
Unsupervised feature learning has shown impressive results for a wide range of input modalities, in particular for object classification tasks in computer vision. Using a large amount of unlabeled data, unsupervised feature learning methods are utilized to construct high-level representations that are discriminative enough for subsequently trained supervised classification algorithms. However, it has never been quantitatively investigated yet how well unsupervised learning methods can find low-level representations for image patches without any supervised refinement. In this paper we examine the performance of pure unsupervised methods on a low-level correspondence task, a problem that is central to many Computer Vision applications. We find that a special type of Restricted Boltzmann Machines performs comparably to hand-crafted descriptors. Additionally, a simple binarization scheme produces compact representations that perform better than several state-of-the-art descriptors.
[ "methods", "unsupervised feature", "representations", "descriptors", "impressive results", "wide range", "input modalities", "particular" ]
https://openreview.net/pdf?id=OznsOsb6sDFeV
https://openreview.net/forum?id=OznsOsb6sDFeV
HH0nm6IT6SHZc
comment
1,363,042,920,000
3wmH3H7ucKwu0
[ "everyone" ]
[ "Christian Osendorfer" ]
ICLR.cc/2013/conference
2013
reply: Dear 3338, Thank you for your feedback. In order to give a comprehensive answer, we quote sentences from your feedback and try to respond appropriately. >>> It is not clear what the purpose of the paper is. We suggest that the way unsupervised feature learning methods are evaluated should be extended: A more direct evaluation of the learnt representations without subsequent supervised algorithms, and not tied to the task of high-level object classification. >>> The ground truth correspondences of the dataset were found by >>> clustering the image patches to find correspondences. This is not how the description of [R1] with respect to the Ground Truth Data (section II in [R1]) reads. >>> In this paper, simple clustering methods such as kmeans >>> were not compared against ... We added a K-Means experiment to the new version of the paper. We ran K-Means (with a soft threshold function) [R2] on the dataset; it performs worse than spGRBM. (This is mentioned in the new version 3 of the paper; a schematic version of this encoding is given after the references at the end of this reply.) >>> Additionally, training in a supervised way makes much more sense >>> for finding correspondences. This is not the question that we are asking. We deliberately avoid any supervised training because we want to investigate purely unsupervised methods. We are not trying to achieve any state-of-the-art results. >>> It is not clear from the paper alone what is considered a match >>> between descriptors We have added some text that describes how a false positive rate for a fixed true positive rate is computed. >>> The preprocessing of the image patches seems different for each >>> method. This could lead to wildly different scales of the input >>> pixels and thus the corresponding representations of the various >>> methods. Could you elaborate on why this is something to consider in our setting? >>> In section 3.3 it is mentioned that it is surprising that L1 >>> normalization works better because sparsity hurts classification >>> typically. We don't say that 'sparsity hurts classification typically'. We say the exact opposite (that sparse representations are beneficial for classification) and give a reference to [R3], a paper that you also reference. We say that it is surprising that a sparse representation ('sparse' as produced by spGRBM, not by a normalization scheme) performs better in a distance calculation, because the general understanding is (to our knowledge) that sparse representations suffer more from the curse of dimensionality when considering distances. >>> However, the sparsity in the paper is directly before the distance >>> calculation, and not before being fed as input to a classifier which >>> is a different setup and would thus be expected to behave differently >>> with sparsity. This is the typical setup in which sparsity is found to >>> hurt classification performance because information is being thrown >>> away before the classifier is used. We don't understand what is meant here. Wasn't the gist of [R3] that a sparse encoding is key for good classification results? However, we think that the main point that we wanted to convey in that part of the paper was poorly presented. We tried to make the presentation of the analysis part better in the new version (arxiv version 3) of the paper.
>>> ...does not appear to apply to a wide audience as other papers have >>> done a comparison of unsupervised methods in the past' Those comparisons are, as explained in the paper, always done in combination with a subsequent supervised classification algorithm on a high-level object classification task. We want to avoid exactly this setting. We think that the paper is relevant for researchers working on unsupervised (feature) learning methods and for researchers working in Computer Vision. A new version (arxiv version 3) of the paper is uploaded on March 11. [R1] M. Brown, G. Hua, and S. Winder. Discriminative learning of local image descriptors. [R2] A. Coates, H. Lee, and A. Ng. An analysis of single-layer networks in unsupervised feature learning. [R3] A. Coates and A. Ng. The importance of encoding versus training with sparse coding and vector quantization.
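For reference, the soft threshold function used with K-Means above follows the spirit of [R2]; a common instantiation is the 'triangle' activation, which can be sketched as follows (a schematic version only - the exact preprocessing and hyper-parameters are described in the paper, not here):

import numpy as np

def triangle_encode(X, centroids):
    # X: (n, d) patches; centroids: (k, d) cluster centers learned by k-means.
    # Activation f_k(x) = max(0, mean_j z_j(x) - z_k(x)) with z_k(x) = ||x - c_k||.
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    mu = dists.mean(axis=1, keepdims=True)
    return np.maximum(0.0, mu - dists)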
OznsOsb6sDFeV
Unsupervised Feature Learning for low-level Local Image Descriptors
[ "Christian Osendorfer", "Justin Bayer", "Patrick van der Smagt" ]
Unsupervised feature learning has shown impressive results for a wide range of input modalities, in particular for object classification tasks in computer vision. Using a large amount of unlabeled data, unsupervised feature learning methods are utilized to construct high-level representations that are discriminative enough for subsequently trained supervised classification algorithms. However, it has never been quantitatively investigated yet how well unsupervised learning methods can find low-level representations for image patches without any supervised refinement. In this paper we examine the performance of pure unsupervised methods on a low-level correspondence task, a problem that is central to many Computer Vision applications. We find that a special type of Restricted Boltzmann Machines performs comparably to hand-crafted descriptors. Additionally, a simple binarization scheme produces compact representations that perform better than several state-of-the-art descriptors.
[ "methods", "unsupervised feature", "representations", "descriptors", "impressive results", "wide range", "input modalities", "particular" ]
https://openreview.net/pdf?id=OznsOsb6sDFeV
https://openreview.net/forum?id=OznsOsb6sDFeV
llHR9RITMyCTz
review
1,362,057,120,000
OznsOsb6sDFeV
[ "everyone" ]
[ "anonymous reviewer e954" ]
ICLR.cc/2013/conference
2013
title: review of Unsupervised Feature Learning for low-level Local Image Descriptors review: This paper proposes to evaluate feature learning algorithms by using a low-level vision task, namely image patch matching. The authors compare three feature learning algorithms, GRBM, spGRBM and mcRBM, against engineered features like SIFT and others. The empirical results unfortunately show that the learned features are not very competitive for this task. Overall, the paper does not propose any new algorithm and does not improve performance on any task. It does raise an interesting question though, which is how to assess feature learning algorithms. This is a core problem in the field and its solution could help a) assessing which feature learning methods are better and b) designing algorithms that produce better features (because we would have better loss functions to train them). Unfortunately, this work is too preliminary to advance our understanding towards the solution of this problem (see below for more detailed comments). Overall quality is fairly poor: there are missing references, there are incorrect claims, the empirical validation is insufficient. Pros -- The motivation is very good. We need to improve the way we compare feature learning methods. -- The filters visualization is nice. Cons -- It is debatable whether the chosen task is any better for assessing the quality of feature learning methods. The paper almost suggested a better solution in the introduction: we should compare across several tasks (from low level vision like matching to high level vision like object classification). If a representation is better across several tasks, then it must capture many relevant properties of the input. In other words, it is always possible to tweak a learning algorithm to give good results on one dataset, but it is much more interesting to see it working well across several different tasks after training on generic natural images, for instance. -- The choice of the feature learning methods is questionable: why are only generative models considered here? The authors do mention that other methods were tried and worked worse, however it is hard to believe that more discriminative approaches work worse on the chosen task. In particular, given the matching task, it seems that a method that trains using a ranking loss (learning nearby features for similar patches and far away features for distant inputs) should work better. See: H. Mobahi, R. Collobert, J. Weston. Deep Learning from Temporal Coherence in Video. ICML 2009. -- The overall results are pretty disappointing. Feature learning methods do not outperform the best engineered features. They do not outperform even when the comparison is unfair: for instance the authors use 128-dimensional SIFT but a much larger dimensionality for the learned features. Besides, the authors do not take into account time, neither the training time nor the time to extract these features. This should also be considered in the evaluation. More detailed comments: -- Missing references. It is not true that feature learning methods have never been assessed quantitatively without supervised fine tuning. On a low level vision task, I would refer to: Learning to Align from Scratch. Gary Huang, Marwan Mattar, Honglak Lee, Erik Learned-Miller. In Advances in Neural Information Processing Systems (NIPS) 25, 2012. Another missing reference is Memisevic, R. Gradient-based learning of higher-order image features. International Conference on Computer Vision (ICCV 2011), and other similar papers where Memisevic trains features that relate pairs of image patches. -- ROC curves should be reported at least in the appendix, if not in the main text. -- I do not understand why the SIFT results in Table 1a differ from those in Table 1b.
OznsOsb6sDFeV
Unsupervised Feature Learning for low-level Local Image Descriptors
[ "Christian Osendorfer", "Justin Bayer", "Patrick van der Smagt" ]
Unsupervised feature learning has shown impressive results for a wide range of input modalities, in particular for object classification tasks in computer vision. Using a large amount of unlabeled data, unsupervised feature learning methods are utilized to construct high-level representations that are discriminative enough for subsequently trained supervised classification algorithms. However, it has never been quantitatively investigated yet how well unsupervised learning methods can find low-level representations for image patches without any supervised refinement. In this paper we examine the performance of pure unsupervised methods on a low-level correspondence task, a problem that is central to many Computer Vision applications. We find that a special type of Restricted Boltzmann Machines performs comparably to hand-crafted descriptors. Additionally, a simple binarization scheme produces compact representations that perform better than several state-of-the-art descriptors.
[ "methods", "unsupervised feature", "representations", "descriptors", "impressive results", "wide range", "input modalities", "particular" ]
https://openreview.net/pdf?id=OznsOsb6sDFeV
https://openreview.net/forum?id=OznsOsb6sDFeV
Hu7OueWCO4ur9
review
1,361,947,080,000
OznsOsb6sDFeV
[ "everyone" ]
[ "anonymous reviewer f716" ]
ICLR.cc/2013/conference
2013
title: review of Unsupervised Feature Learning for low-level Local Image Descriptors review: This paper proposes a dataset to benchmark the correspondence problem in computer vision. The dataset consists of image patches that have groundtruth matching pairs (obtained using separate algorithms). Extensive experiments show that RBMs perform well compared to hand-crafted features. I like the idea of using intermediate evaluation metrics to measure the progress of unsupervised feature learning and deep learning. That said, comparing the methods on noisy groundtruth (results of other algorithms) may have some bias. The experiments could be made stronger if algorithms such as Autoencoders or Kmeans (Coates et al, 2011, An Analysis of Single-Layer Networks in Unsupervised Feature Learning) are considered. If we can consider the groundtruth as clean, will supervised learning of a deep (convolutional) network using the groundtruth produce better results?
OznsOsb6sDFeV
Unsupervised Feature Learning for low-level Local Image Descriptors
[ "Christian Osendorfer", "Justin Bayer", "Patrick van der Smagt" ]
Unsupervised feature learning has shown impressive results for a wide range of input modalities, in particular for object classification tasks in computer vision. Using a large amount of unlabeled data, unsupervised feature learning methods are utilized to construct high-level representations that are discriminative enough for subsequently trained supervised classification algorithms. However, it has never been quantitatively investigated yet how well unsupervised learning methods can find low-level representations for image patches without any supervised refinement. In this paper we examine the performance of pure unsupervised methods on a low-level correspondence task, a problem that is central to many Computer Vision applications. We find that a special type of Restricted Boltzmann Machines performs comparably to hand-crafted descriptors. Additionally, a simple binarization scheme produces compact representations that perform better than several state-of-the-art descriptors.
[ "methods", "unsupervised feature", "representations", "descriptors", "impressive results", "wide range", "input modalities", "particular" ]
https://openreview.net/pdf?id=OznsOsb6sDFeV
https://openreview.net/forum?id=OznsOsb6sDFeV
J5RZOWF9WLSi0
comment
1,363,043,100,000
llHR9RITMyCTz
[ "everyone" ]
[ "Christian Osendorfer" ]
ICLR.cc/2013/conference
2013
reply: Dear e954, thank you for your detailed feedback. We don't argue that the chosen task should replace existing benchmarks. Instead, we think that it supplements these, because it covers aspects of unsupervised feature learning that have been ignored so far. Note that by avoiding any subsequent supervision we mean not only avoiding supervised fine-tuning of the learnt architecture, but also avoiding any supervised learning on the representations at all (e.g. like it is still done in [R2]). This is hopefully clearer in version 3 of the paper, where we removed words like 'refinement' and 'fine tuning'. Thank you for pointing out missing references [R1, R2, R3]. We added [R2, R4, R5] to the paper in order to avoid the impression that we are not aware of these approaches (we think that R4 fits better than R1 and R5 better than R3). We were aware of them, but did not mention these approaches because they (i) rely on a supervised signal and/or (ii) are concerned with high-level correspondences (we consider faces as high-level entities). Current work investigates some of these methods, because utilizing the available pairing information should be beneficial with respect to a good overall performance. We are not arguing that discriminative methods work worse on this dataset. However, in this paper we are not striving to achieve state-of-the-art results: we investigate a new benchmark for unsupervised learning and test how well existing unsupervised methods do. We tried to make the analysis part in version 3 of the paper clearer. We don't think that our claims are incorrect: we manage to perform comparably to SIFT when the size of the representation is free. It is not clear if for standard distance computations a bigger representation (in particular a sparse one) is actually an advantage. We also manage to perform better than several well known compact descriptors when we binarize the learnt representations. We also don't think that the evaluation is insufficient. The time to extract the features will be clearly dominated by the SIFT keypoint detector, because computing a new representation given a patch is a sequence of matrix operations. Training times are added to the new version of the paper. ROC curves will be in a larger technical report that describes in more detail the performance of a bigger number of feature learning algorithms (both supervised and unsupervised) on this dataset. Thank you for pointing out a missing experiment, training on general natural image patches (not extracted around keypoints) and then evaluating on the dataset. We are trying to incorporate results for this experiment in the final version of the paper. It should also be very interesting to experiment with the idea of unsupervised alignment [R2], especially as every patch already implicitly has some general alignment information from its keypoint. In Table 1b, SIFT is not normalized and used as a 128 byte descriptor (in Table 1a a 128 double descriptor (with normalized entries) is used). A new version (arxiv version 3) of the paper is uploaded on March 11. [R1] H. Mobahi, R. Collobert, J. Weston. Deep Learning from Temporal Coherence in Video. [R2] Gary Huang, Marwan Mattar, Honglak Lee, Erik Learned-Miller. Learning to Align from Scratch. [R3] Memisevic, R. Gradient-based learning of higher-order image features. [R4] S. Chopra, R. Hadsell, and Y. LeCun. Learning a similarity metric discriminatively, with application to face verification. [R5] J. Susskind, R. Memisevic, G. Hinton, and M. Pollefeys.
Modeling the joint density of two images under a variety of transformations.
OznsOsb6sDFeV
Unsupervised Feature Learning for low-level Local Image Descriptors
[ "Christian Osendorfer", "Justin Bayer", "Patrick van der Smagt" ]
Unsupervised feature learning has shown impressive results for a wide range of input modalities, in particular for object classification tasks in computer vision. Using a large amount of unlabeled data, unsupervised feature learning methods are utilized to construct high-level representations that are discriminative enough for subsequently trained supervised classification algorithms. However, it has never been quantitatively investigated yet how well unsupervised learning methods can find low-level representations for image patches without any supervised refinement. In this paper we examine the performance of pure unsupervised methods on a low-level correspondence task, a problem that is central to many Computer Vision applications. We find that a special type of Restricted Boltzmann Machines performs comparably to hand-crafted descriptors. Additionally, a simple binarization scheme produces compact representations that perform better than several state-of-the-art descriptors.
[ "methods", "unsupervised feature", "representations", "descriptors", "impressive results", "wide range", "input modalities", "particular" ]
https://openreview.net/pdf?id=OznsOsb6sDFeV
https://openreview.net/forum?id=OznsOsb6sDFeV
rH1Wu2q8W0ujI
comment
1,363,042,680,000
Hu7OueWCO4ur9
[ "everyone" ]
[ "Christian Osendorfer" ]
ICLR.cc/2013/conference
2013
reply: Dear f716, thank you for your feedback. We evaluated more models than shown in Table 1, but they do not perform as well as spGRBM, so we decided to leave those out (from the Table) in order to avoid clutter. The models are mentioned in section 3.5 of the paper (the arxiv version 2 of the paper). We are currently running experiments with deep convolutional networks to determine how much improvement supervision signals can achieve. We uploaded a new version (on March 11) that changes some bits of the presentation. We also evaluated K-Means on the dataset (it is mentioned under 'Other models', because its performance is below that of spGRBM).
OznsOsb6sDFeV
Unsupervised Feature Learning for low-level Local Image Descriptors
[ "Christian Osendorfer", "Justin Bayer", "Patrick van der Smagt" ]
Unsupervised feature learning has shown impressive results for a wide range of input modalities, in particular for object classification tasks in computer vision. Using a large amount of unlabeled data, unsupervised feature learning methods are utilized to construct high-level representations that are discriminative enough for subsequently trained supervised classification algorithms. However, it has never been quantitatively investigated yet how well unsupervised learning methods can find low-level representations for image patches without any supervised refinement. In this paper we examine the performance of pure unsupervised methods on a low-level correspondence task, a problem that is central to many Computer Vision applications. We find that a special type of Restricted Boltzmann Machines performs comparably to hand-crafted descriptors. Additionally, a simple binarization scheme produces compact representations that perform better than several state-of-the-art descriptors.
[ "methods", "unsupervised feature", "representations", "descriptors", "impressive results", "wide range", "input modalities", "particular" ]
https://openreview.net/pdf?id=OznsOsb6sDFeV
https://openreview.net/forum?id=OznsOsb6sDFeV
3wmH3H7ucKwu0
review
1,361,968,260,000
OznsOsb6sDFeV
[ "everyone" ]
[ "anonymous reviewer 3338" ]
ICLR.cc/2013/conference
2013
title: review of Unsupervised Feature Learning for low-level Local Image Descriptors review: This paper is a survey of unsupervised learning techniques applied to the unsupervised task of descriptor matching. Various methods such as Gaussian RBMs, sparse RBMs, and mcRBMs were applied to image patches and the resulting feature vectors were used in a matching task. These methods were compared to standard hand-crafted descriptors such as SIFT, SURF, etc. Pros Provides a survey of descriptors for matching pairs of image patches. Cons It is not clear what the purpose of the paper is. The paper compares several learning algorithms on the task of what essentially seems like clustering image patches to find their correspondences. The ground truth correspondences of the dataset were found by clustering the image patches to find correspondences... In this paper, simple clustering methods such as kmeans or sparse coding, which are less complicated models than RBMs and are meant for finding correspondences, were not compared against. Additionally, training in a supervised way makes much more sense for finding correspondences. It is not clear from the paper alone what is considered a match between descriptors. Is it the distance being below a threshold, the pair of descriptors being closer than any other pair of descriptors, etc.? The preprocessing of the image patches seems different for each method. This could lead to wildly different scales of the input pixels and thus the corresponding representations of the various methods. In section 3.3 it is mentioned that it is surprising that L1 normalization works better because sparsity hurts classification typically. However, the sparsity in the paper is directly before the distance calculation, and not before being fed as input to a classifier which is a different setup and would thus be expected to behave differently with sparsity. This is the typical setup in which sparsity is found to hurt classification performance because information is being thrown away before the classifier is used. Novelty and Quality: This paper is not novel in that it is a survey of prior work applied to matching descriptors. It is well written but does not appear to apply to a wide audience as other papers have done a comparison of unsupervised methods in the past, for example: - A. Coates, H. Lee, and A. Ng. An analysis of single-layer networks in unsupervised feature learning. In Proc. AISTATS, 2011. - A. Coates and A. Ng. The importance of encoding versus training with sparse coding and vector quantization. In Proc. ICML, 2011.
-4IA4WgNAy4Wx
What Regularized Auto-Encoders Learn from the Data Generating Distribution
[ "Guillaume Alain", "Yoshua Bengio" ]
What do auto-encoders learn about the underlying data generating distribution? Recent work suggests that some auto-encoder variants do a good job of capturing the local manifold structure of data. This paper clarifies some of these previous intuitive observations by showing that minimizing a particular form of regularized reconstruction error yields a reconstruction function that locally characterizes the shape of the data generating density. We show that the auto-encoder captures the score (derivative of the log-density with respect to the input), along with the second derivative of the density and the local mean associated with the unknown data-generating density. This is the second result linking denoising auto-encoders and score matching, but in way that is different from previous work, and can be applied to the case when the auto-encoder reconstruction function does not necessarily correspond to the derivative of an energy function. The theorems provided here are completely generic and do not depend on the parametrization of the auto-encoder: they show what the auto-encoder would tend to if given enough capacity and examples. These results are for a contractive training criterion we show to be similar to the denoising auto-encoder training criterion with small corruption noise, but with contraction applied on the whole reconstruction function rather than just encoder. Similarly to score matching, one can consider the proposed training criterion as a convenient alternative to maximum likelihood, i.e., one not involving a partition function.
[ "data", "distribution", "density", "learn", "reconstruction function", "derivative", "training criterion", "recent work", "variants", "good job" ]
https://openreview.net/pdf?id=-4IA4WgNAy4Wx
https://openreview.net/forum?id=-4IA4WgNAy4Wx
mmLgxpNpu1xGP
comment
1,363,216,980,000
kGNDPAwn1jGUc
[ "everyone" ]
[ "Guillaume Alain, Yoshua Bengio" ]
ICLR.cc/2013/conference
2013
reply: > It's interesting that in the classical CAE, there is an implicit contractive effect on g() via the side effect of tying the weights whereas in the form of the DAE presented, g() is explicitly made contractive via r(). Have you investigated the effective difference? Not really, no. The results that we have for general autoencoders r do not even assume that r is decomposable into two meaningful steps (encode, decode). However, in our experiments we found better results (due to optimization issues) with untied weights (and a contractive or denoising penalty on the whole of r(.)=decoder(encoder(.)) rather than just the encoder). We have also added (in new sec. 3.2.3) a brief discussion of how these new results (on r(x)-x estimating the score) contradict previous interpretations of the reconstruction error of auto-encoders (Ranzato & Hinton NIPS 2007) as being akin to an energy function. Indeed, whereas both interpretations agree on having a low reconstruction error at training examples, the score interpretation suggests (and we see it experimentally) other (median) regions that are local maxima of the energy, where the reconstruction error is also low. > Although in the caption, you mention the difference between upper/lower and left/right subplots in Fig 4, I would prefer those (model 1/model 2) to be labeled directly on the subplots, it would just make for easier parsing. The section with Figure 4 has been edited and we are now showing only two plots. We have made all the suggested changes regarding typos and form. Please also have a look at a new short section (now identified as 3.2.5) that we just added in.
-4IA4WgNAy4Wx
What Regularized Auto-Encoders Learn from the Data Generating Distribution
[ "Guillaume Alain", "Yoshua Bengio" ]
What do auto-encoders learn about the underlying data generating distribution? Recent work suggests that some auto-encoder variants do a good job of capturing the local manifold structure of data. This paper clarifies some of these previous intuitive observations by showing that minimizing a particular form of regularized reconstruction error yields a reconstruction function that locally characterizes the shape of the data generating density. We show that the auto-encoder captures the score (derivative of the log-density with respect to the input), along with the second derivative of the density and the local mean associated with the unknown data-generating density. This is the second result linking denoising auto-encoders and score matching, but in way that is different from previous work, and can be applied to the case when the auto-encoder reconstruction function does not necessarily correspond to the derivative of an energy function. The theorems provided here are completely generic and do not depend on the parametrization of the auto-encoder: they show what the auto-encoder would tend to if given enough capacity and examples. These results are for a contractive training criterion we show to be similar to the denoising auto-encoder training criterion with small corruption noise, but with contraction applied on the whole reconstruction function rather than just encoder. Similarly to score matching, one can consider the proposed training criterion as a convenient alternative to maximum likelihood, i.e., one not involving a partition function.
[ "data", "distribution", "density", "learn", "reconstruction function", "derivative", "training criterion", "recent work", "variants", "good job" ]
https://openreview.net/pdf?id=-4IA4WgNAy4Wx
https://openreview.net/forum?id=-4IA4WgNAy4Wx
kGNDPAwn1jGUc
review
1,362,214,560,000
-4IA4WgNAy4Wx
[ "everyone" ]
[ "anonymous reviewer f62a" ]
ICLR.cc/2013/conference
2013
title: review of What Regularized Auto-Encoders Learn from the Data Generating Distribution review: Many unsupervised representation-learning algorithms are based on minimizing reconstruction error. This paper aims at addressing the important questions around what these training criteria actually learn about the input density. The paper makes two main contributions: it first makes a link between denoising autoencoders (DAE) and contractive autoencoders (CAE), showing that the DAE with very small Gaussian corruption and squared error is actually a particular kind of CAE (Theorem 1). Then, in the context of the contractive training criteria, it answers the question 'what does an auto-encoder learn about the data-generating distribution': it estimates both the first and second derivatives of the log-data generating density (Theorem 2) as well as various other local properties of this log-density. An important aspect of this work is that, compared to previous work that linked DAEs to score matching, the results in this paper do not require the reconstruction function of the AE to correspond to the score function of a density, making these results more general. Positive aspects of the paper: * A pretty theoretical paper (for representation learning) but well presented in that most of the heavy math is in the appendix and the main text nicely presents the key results * Following the theorems, I like the way in which the various assumptions (perfect world scenario) are gradually pulled away to show what can still be learned about the data-generating distribution; in particular, the simple numerical example (which could be easily re-implemented) is a nice way to connect the abstractness of the result to something concrete Negative aspects of the paper: * Since the results heavily rely on derivatives with respect to the data, they only apply to continuous data (extensions to discrete data are mentioned as future work) Comments, Questions -------- It's interesting that in the classical CAE, there is an implicit contractive effect on g() via the side effect of tying the weights whereas in the form of the DAE presented, g() is explicitly made contractive via r(). Have you investigated the effective difference? Minor comments, typos, etc -------------------------- Fig 2 - green is not really green, it's more like turquoise - 'high-capcity' -> 'high-capacity' - the figure makes reference to lambda but at this point in the paper, lambda is yet to be defined objective function for L_DAE (top of p4) - the last term o() coming from the Taylor expansion is explicitly discussed in the appendix (and perhaps obvious here) but is not explicitly defined in the main text Right before 3.2.4 'high dimensional <data> (such as images)' Although in the caption, you mention the difference between upper/lower and left/right subplots in Fig 4, I would prefer those (model 1/model 2) to be labeled directly on the subplots, it would just make for easier parsing.
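To summarize the two results schematically for other readers (simplifying the notation; the precise conditions and the exact form of the criterion are in the paper, so this is only a reviewer's shorthand): the contractive criterion is roughly
\[
\mathcal{L}_\lambda(r) \;=\; \mathbb{E}_{x\sim p}\!\left[\,\|r(x)-x\|^2 \;+\; \lambda\,\Big\|\tfrac{\partial r(x)}{\partial x}\Big\|_F^2\,\right],
\]
which Theorem 1 relates to the DAE criterion with small corruption variance \(\sigma^2\) (with \(\lambda\) playing the role of \(\sigma^2\)), and Theorem 2 says that its minimizer satisfies
\[
r^*_\lambda(x) - x \;=\; \lambda\,\frac{\partial \log p(x)}{\partial x} \;+\; o(\lambda) \qquad (\lambda \to 0),
\]
i.e. the displacement \(r^*_\lambda(x)-x\) estimates the score of the data-generating density.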
-4IA4WgNAy4Wx
What Regularized Auto-Encoders Learn from the Data Generating Distribution
[ "Guillaume Alain", "Yoshua Bengio" ]
What do auto-encoders learn about the underlying data generating distribution? Recent work suggests that some auto-encoder variants do a good job of capturing the local manifold structure of data. This paper clarifies some of these previous intuitive observations by showing that minimizing a particular form of regularized reconstruction error yields a reconstruction function that locally characterizes the shape of the data generating density. We show that the auto-encoder captures the score (derivative of the log-density with respect to the input), along with the second derivative of the density and the local mean associated with the unknown data-generating density. This is the second result linking denoising auto-encoders and score matching, but in way that is different from previous work, and can be applied to the case when the auto-encoder reconstruction function does not necessarily correspond to the derivative of an energy function. The theorems provided here are completely generic and do not depend on the parametrization of the auto-encoder: they show what the auto-encoder would tend to if given enough capacity and examples. These results are for a contractive training criterion we show to be similar to the denoising auto-encoder training criterion with small corruption noise, but with contraction applied on the whole reconstruction function rather than just encoder. Similarly to score matching, one can consider the proposed training criterion as a convenient alternative to maximum likelihood, i.e., one not involving a partition function.
[ "data", "distribution", "density", "learn", "reconstruction function", "derivative", "training criterion", "recent work", "variants", "good job" ]
https://openreview.net/pdf?id=-4IA4WgNAy4Wx
https://openreview.net/forum?id=-4IA4WgNAy4Wx
EEBiEfDQjdwft
comment
1,363,217,640,000
1WIBWMxZeG4UP
[ "everyone" ]
[ "Guillaume Alain, Yoshua Bengio" ]
ICLR.cc/2013/conference
2013
reply: > I think this is quite an important result; even though limited to this specific type of model As argued in a previous response (to reviewer 4222), we believe that at least at a qualitative level the same is true in general of regularized auto-encoders. We copy here the response: 'We have worked on the denoising/contracting auto-encoders with squared error because we were able to prove our results with them, but we believe that other regularized auto-encoders (even those with discrete inputs) also estimate something related to the score, i.e., the direction in input space in which probability increases the most. The intuition behind that statement can be obtained by studying figure 2: the estimation of this direction arises out of the conflict between reconstructing training examples well and making the auto-encoder as constant (regularized) as possible.' We have added a brief discussion in the conclusion about how we believe these results could be extended to models with discrete inputs, following the tracks of ratio matching (Hyvarinen 2007). We have also added (in new sec. 3.2.3) a brief discussion of how these new results (on r(x)-x estimating the score) contradict previous interpretations of the reconstruction error of auto-encoders (Ranzato & Hinton NIPS 2007) as being akin to an energy function. Indeed, whereas both interpretations agree on having a low reconstruction error at training examples, the score interpretation suggests (and we see it experimentally) other (median) regions that are local maxima of the energy, where the reconstruction error is also low. > I find the experiment shown in Figure 4 somewhat confusing. We have addressed this concern that many of the reviewers had. The whole section 3.2.3 has been edited and we decided to remove two of the plots which may have introduced confusion. Reviewers seem to focus on the difference between the two models and wanted to know why the outcomes were different. They were only different because of the non-convexity of the problem and the dependence on initial conditions (along with the random noise used for training). At the end of the day, the point is that the vector field points in the direction of the gradient of the log-density (the score), and that is illustrated nicely by the two remaining plots (far and close distance). > Section 3.2.4. I am not clear what is the importance of this section. It seems to state the relationship between the score and reconstruction derivative. Are you referring to section 3.3? If you are indeed referring to section 3.2.4, the idea there is that it is possible to start the investigation from a trained DAE where the noise level for the training is unknown to us (but it is known by the person who trained the DAE). In that case, we would be in a situation where the best that could be done was to recover the energy function gradient up to a scaling constant. > Is it possible to link these results and theory to other forms of auto-encoders, such as sparse auto-encoders or with different type of non-linear activation functions? It would be very useful to have similar analysis for more general types of auto-encoders too. See our first response above. Please also have a look at a new short section (now identified as 3.2.5) that we just added in.
-4IA4WgNAy4Wx
What Regularized Auto-Encoders Learn from the Data Generating Distribution
[ "Guillaume Alain", "Yoshua Bengio" ]
What do auto-encoders learn about the underlying data generating distribution? Recent work suggests that some auto-encoder variants do a good job of capturing the local manifold structure of data. This paper clarifies some of these previous intuitive observations by showing that minimizing a particular form of regularized reconstruction error yields a reconstruction function that locally characterizes the shape of the data generating density. We show that the auto-encoder captures the score (derivative of the log-density with respect to the input), along with the second derivative of the density and the local mean associated with the unknown data-generating density. This is the second result linking denoising auto-encoders and score matching, but in way that is different from previous work, and can be applied to the case when the auto-encoder reconstruction function does not necessarily correspond to the derivative of an energy function. The theorems provided here are completely generic and do not depend on the parametrization of the auto-encoder: they show what the auto-encoder would tend to if given enough capacity and examples. These results are for a contractive training criterion we show to be similar to the denoising auto-encoder training criterion with small corruption noise, but with contraction applied on the whole reconstruction function rather than just encoder. Similarly to score matching, one can consider the proposed training criterion as a convenient alternative to maximum likelihood, i.e., one not involving a partition function.
[ "data", "distribution", "density", "learn", "reconstruction function", "derivative", "training criterion", "recent work", "variants", "good job" ]
https://openreview.net/pdf?id=-4IA4WgNAy4Wx
https://openreview.net/forum?id=-4IA4WgNAy4Wx
1WIBWMxZeG4UP
review
1,362,368,160,000
-4IA4WgNAy4Wx
[ "everyone" ]
[ "anonymous reviewer 7ffb" ]
ICLR.cc/2013/conference
2013
title: review of What Regularized Auto-Encoders Learn from the Data Generating Distribution review: The paper presents a method to analyse how and what the auto-encoder models that use reconstruction error together with a regularisation cost are learning with respect to the underlying data distribution. The paper focuses on contractive auto-encoder models and also reformulates the denoising auto-encoder as a form of contractive auto-encoder where the contraction is achieved through regularisation of the derivative of the reconstruction function wrt the input data. The rest of the paper presents a theoretical analysis of this form of auto-encoders and also provides a couple of toy examples showing empirical support. The paper is easy to read and the theoretical analysis is nicely split between the main paper and appendices. The details in the main paper are sufficient for the reader to understand the concept that is presented in the paper. The theory and empirical data show that one can recover the true data distribution if using contractive auto-encoders of the given type. I think this is quite an important result; even though limited to this specific type of model, quantitative analysis of the generative capabilities of auto-encoders has been limited. I find the experiment shown in Figure 4 somewhat confusing. The text suggests that the only difference between the two models is their initial conditions and optimisation hyper-parameters. Is the main reason due to initial conditions or hyper-parameters? Which hyper-parameters? Is the difference in initial condition just a different random seed or a different type of initialisation of the network? I think this requires a more in-depth explanation. Is it normal to expect such different solutions depending on initial conditions? Section 3.2.4. I am not clear what is the importance of this section. It seems to state the relationship between the score and reconstruction derivative. Is it possible to link these results and theory to other forms of auto-encoders, such as sparse auto-encoders or with different type of non-linear activation functions? It would be very useful to have similar analysis for more general types of auto-encoders too.
-4IA4WgNAy4Wx
What Regularized Auto-Encoders Learn from the Data Generating Distribution
[ "Guillaume Alain", "Yoshua Bengio" ]
What do auto-encoders learn about the underlying data generating distribution? Recent work suggests that some auto-encoder variants do a good job of capturing the local manifold structure of data. This paper clarifies some of these previous intuitive observations by showing that minimizing a particular form of regularized reconstruction error yields a reconstruction function that locally characterizes the shape of the data generating density. We show that the auto-encoder captures the score (derivative of the log-density with respect to the input), along with the second derivative of the density and the local mean associated with the unknown data-generating density. This is the second result linking denoising auto-encoders and score matching, but in way that is different from previous work, and can be applied to the case when the auto-encoder reconstruction function does not necessarily correspond to the derivative of an energy function. The theorems provided here are completely generic and do not depend on the parametrization of the auto-encoder: they show what the auto-encoder would tend to if given enough capacity and examples. These results are for a contractive training criterion we show to be similar to the denoising auto-encoder training criterion with small corruption noise, but with contraction applied on the whole reconstruction function rather than just encoder. Similarly to score matching, one can consider the proposed training criterion as a convenient alternative to maximum likelihood, i.e., one not involving a partition function.
[ "data", "distribution", "density", "learn", "reconstruction function", "derivative", "training criterion", "recent work", "variants", "good job" ]
https://openreview.net/pdf?id=-4IA4WgNAy4Wx
https://openreview.net/forum?id=-4IA4WgNAy4Wx
fftnhM9InbLMv
comment
1,363,217,640,000
CC5h3a1ESBCav
[ "everyone" ]
[ "Guillaume Alain, Yoshua Bengio" ]
ICLR.cc/2013/conference
2013
reply: > It would be good to compare these plots with other regularizers and show that getting log(p) for the contractive one is somehow advantageous. We have worked on the denoising/contracting auto-encoders with squared error because we were able to prove our results with them, but we believe that other regularized auto-encoders (even those with discrete inputs) also estimate something related to the score, i.e., the direction in input space in which probability increases the most. The intuition behind that statement can be obtained by studying figure 2: the estimation of this direction arises out of the conflict between reconstructing training examples well and making the auto-encoder as constant (regularized) as possible. Other regularizers (e.g. cross-entropy) as well as the challenging case of discrete data are in the back of our minds and we would very much like to extend the mathematical results to these settings as well. We have added a brief discussion in the conclusion about how we believe these results could be extended to models with discrete inputs, following the tracks of ratio matching (Hyvarinen 2007). We have also added (in new sec. 3.2.3) a brief discussion of how these new results (on r(x)-x estimating the score) contradict previous interpretations of the reconstruction error of auto-encoders (Ranzato & Hinton NIPS 2007) as being akin to an energy function. Indeed, whereas both interpretations agree on having a low reconstruction error at training examples, the score interpretation suggests (and we see it experimentally) other (median) regions that are local maxima of the energy, where the reconstruction error is also low. > it would be good to know something not in the limit of penalty going to zero We agree. We did a few artificial data experiments. In fact, we ran the experiment shown in section 3.2.2 using values of lambda ranging from 10^-6 to 10^2 to observe the behavior of the optimal solutions when the penalty factor varies smoothly. The optimal solution degrades progressively into something comparable to what is shown in Figure 2. It becomes a series of increasing plateaus matching the density peaks. Regions of lesser density are used to 'catch up' with the fact that the reconstruction function r(x) should be relatively close to x. > Figure 4. - 'Top plots are for one model and bottom plots for another' - what are the two models? It would be good to specify this in the figure, e.g. denoising autoencoders with different initial conditions and parameter settings. We have addressed this concern that many of the reviewers had. The whole section 3.2.3 has been edited and we decided to remove two of the plots which may have introduced confusion. Reviewers seem to focus on the difference between the two models and wanted to know why the outcomes were different. They were only different because of the non-convexity of the problem and the dependence on initial conditions (along with the random noise used for training). At the end of the day, the point is that the vector field points in the direction of the gradient of the log-density (the score), and that is illustrated nicely by the two remaining plots (far and close distance). > Section 3.2.5 is important and should be written a little more clearly. We have reworked that section (now identified as 3.2.6) to emphasize the main point: whereas Vincent 2011 showed that denoising auto-encoders with a particular form estimated the score, our results extend this to a very large family of estimators (including the non-parametric case).
The section also shows how Vincent's results can be interpreted to show that any auto-encoder whose reconstruction function is the derivative of an energy function estimates a score. Instead, the rest of our paper shows that we obtain an estimator of the score even without that strong constraint on the form of the auto-encoder. > I would suggest deriving (13) in the appendix directly from (11) without having the reader recall or read about Euler-Lagrange equations We must admit to not having understood the hints that you have given us. If indeed there were such a way to, as you say, spare the reader the headaches of Euler-Lagrange, we agree that it would be an interesting approach. > You don't actually derive formulas for the second moments in the appendix like you do for the first moment, you mean they can similarly be derived? Yes, an asymptotic expansion can be derived in a similar way for the second moment. That derivation is 2 to 3 times longer and is not very useful in the context of this paper. Please also have a look at a new short section (now identified as 3.2.5) that we just added in.
-4IA4WgNAy4Wx
What Regularized Auto-Encoders Learn from the Data Generating Distribution
[ "Guillaume Alain", "Yoshua Bengio" ]
What do auto-encoders learn about the underlying data generating distribution? Recent work suggests that some auto-encoder variants do a good job of capturing the local manifold structure of data. This paper clarifies some of these previous intuitive observations by showing that minimizing a particular form of regularized reconstruction error yields a reconstruction function that locally characterizes the shape of the data generating density. We show that the auto-encoder captures the score (derivative of the log-density with respect to the input), along with the second derivative of the density and the local mean associated with the unknown data-generating density. This is the second result linking denoising auto-encoders and score matching, but in way that is different from previous work, and can be applied to the case when the auto-encoder reconstruction function does not necessarily correspond to the derivative of an energy function. The theorems provided here are completely generic and do not depend on the parametrization of the auto-encoder: they show what the auto-encoder would tend to if given enough capacity and examples. These results are for a contractive training criterion we show to be similar to the denoising auto-encoder training criterion with small corruption noise, but with contraction applied on the whole reconstruction function rather than just encoder. Similarly to score matching, one can consider the proposed training criterion as a convenient alternative to maximum likelihood, i.e., one not involving a partition function.
[ "data", "distribution", "density", "learn", "reconstruction function", "derivative", "training criterion", "recent work", "variants", "good job" ]
https://openreview.net/pdf?id=-4IA4WgNAy4Wx
https://openreview.net/forum?id=-4IA4WgNAy4Wx
CC5h3a1ESBCav
review
1,362,321,540,000
-4IA4WgNAy4Wx
[ "everyone" ]
[ "anonymous reviewer 4222" ]
ICLR.cc/2013/conference
2013
title: review of What Regularized Auto-Encoders Learn from the Data Generating Distribution review: This paper shows that we can relate the solution of a specific autoencoder to the data generating distribution. Specifically, solving for a general reconstruction function with a regularizer that is the L2 penalty on the contraction of the reconstruction relates the reconstruction function to the derivative of the data log-likelihood. This is in the limit of small regularization. The paper also shows that in the limit of small penalty this autoencoder is equivalent to a denoising autoencoder with small noise. Section 3.2.3: You get similar attractive behavior using almost any autoencoder with limited capacity. The point of your work is that with the specific form of regularization - the squared norm of the contraction of r - r(x)-x relates to the derivative of the log probability (the proof seems to require it - it would be interesting to know what can be said about other regularizers). It would be good to compare these plots with other regularizers and show that getting log(p) for the contractive one is somehow advantageous. Otherwise this section doesn't support this paper in any way. As the authors point out, it would be good to know something not in the limit of penalty going to zero. At least have some numerical experiments, for example in 1d or 2d. Figure 4. - 'Top plots are for one model and bottom plots for another' - what are the two models? It would be good to specify this in the figure, e.g. denoising autoencoders with different initial conditions and parameter settings. Section 3.2.5 is important and should be written a little more clearly. I would suggest deriving (13) in the appendix directly from (11) without having the reader recall or read about Euler-Lagrange equations, and it might actually turn out to be simpler. Differentiating the first term with respect to r(x) gives r(x)-x. For the second term one moves the derivative to the other side using integration by parts (and dropping the boundary term) and then just applying it to the product p(x)dr/dx, resulting in (13). Minor - twice you say in the appendix that the proof is in the appendix (e.g. after the statement of theorem 1). The second-to-last sentence in the abstract is uncomfortable to read. This is probably not important, but can we assume that r given by (11) actually has a Taylor expansion in lambda? (probably, but in the spirit of proving things). You don't actually derive formulas for the second moments in the appendix like you do for the first moment, you mean they can similarly be derived?
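Spelling the suggestion out in one dimension (and writing the criterion only schematically, since (11) and (13) are equation numbers in the paper): with
\[
\mathcal{L}(r) \;=\; \int p(x)\left[(r(x)-x)^2 + \lambda\, r'(x)^2\right]dx ,
\]
a perturbation \(r \to r + \varepsilon\varphi\) gives, at \(\varepsilon = 0\),
\[
\int p(x)\left[(r(x)-x)\,\varphi(x) + \lambda\, r'(x)\,\varphi'(x)\right]dx \;=\; 0 \quad \text{for all } \varphi .
\]
Integrating the second term by parts and dropping the boundary term turns it into \(-\int \lambda\,(p(x) r'(x))'\,\varphi(x)\,dx\), so the stationarity condition is \(p(x)(r(x)-x) = \lambda\,(p(x) r'(x))'\), i.e.
\[
r(x)-x \;=\; \lambda\left[\frac{p'(x)}{p(x)}\,r'(x) + r''(x)\right],
\]
which to first order in \(\lambda\) (where \(r'(x)\to 1\) and \(r''(x)\to 0\)) recovers \(r(x)-x \approx \lambda\,(\log p(x))'\).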
yGgjGkkbeFSbt
Saturating Auto-Encoder
[ "Ross Goroshin", "Yann LeCun" ]
We introduce a simple new regularizer for auto-encoders whose hidden-unit activation functions contain at least one zero-gradient (saturated) region. This regularizer explicitly encourages activations in the saturated region(s) of the corresponding activation function. We call these Saturating Auto-Encoders (SATAE). We show that the saturation regularizer explicitly limits the SATAE's ability to reconstruct inputs which are not near the data manifold. Furthermore, we show that a wide variety of features can be learned when different activation functions are used. Finally, connections are established with the Contractive and Sparse Auto-Encoders.
[ "satae", "simple new regularizer", "activation functions", "least", "region", "regularizer", "activations", "saturated region", "corresponding activation function", "saturation regularizer" ]
https://openreview.net/pdf?id=yGgjGkkbeFSbt
https://openreview.net/forum?id=yGgjGkkbeFSbt
x9pbTj7Nbg9Qs
review
1,361,902,200,000
yGgjGkkbeFSbt
[ "everyone" ]
[ "Yoshua Bengio" ]
ICLR.cc/2013/conference
2013
review: This is a cool investigation in a direction that I find fascinating, and I only have two remarks about minor points made in the paper. * Regarding the energy-based interpretation (that reconstruction error can be thought of as an energy function associated with an estimated probability function), there was a recent result which surprised me and challenges that view. In http://arxiv.org/abs/1211.4246 (What Regularized Auto-Encoders Learn from the Data Generating Distribution), Guillaume Alain and I found that denoising and contractive auto-encoders (where we penalize the Jacobian of the encoder-decoder function r(x)=decode(encode(x))) estimate the *score* of the data generating function in the vector r(x)-x (I should also mention Vincent 2011 Neural Comp. with a similar earlier result for a particular form of denoising auto-encoder where there is a well-defined energy function). So according to these results, the reconstruction error ||r(x)-x||^2 would be the magnitude of the score (derivative of energy wrt input). This is quite different from the energy itself, and it would suggest that the reconstruction error would be near zero both at a *minimum* of the energy (near training examples) AND at a *maximum* of the energy (e.g. near peaks that separate valleys of the energy). We have actually observed that empirically in toy problems where one can visualize the score in 2D. * Regarding the comparison in section 5.1 with the contractive auto-encoder, I believe that there is a correct but somewhat misleading statement. It says that the contractive penalty costs O(d * d_h) to compute whereas the saturating penalty only costs O(d_h) to compute. This is true, but since computing h in the first place also costs O(d * d_h) the overhead of the contractive penalty is small (it basically doubles the computational cost, which is much less problematic than multiplying it by d as the remark could lead a naive reader to believe).
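To illustrate the point above numerically, here is a small sketch (illustrative only, not code from either paper): for a symmetric two-mode density in 1-d, the score d log p(x)/dx vanishes both at the modes and at the low-density point between them, so a quantity proportional to its magnitude cannot by itself tell a density maximum from a density minimum.

import numpy as np

# Assumed toy density: an equal mixture of two unit-variance Gaussians at -2 and +2.
def p(x):
    return 0.5 * (np.exp(-0.5 * (x + 2) ** 2) + np.exp(-0.5 * (x - 2) ** 2)) / np.sqrt(2 * np.pi)

xs = np.linspace(-4.0, 4.0, 8001)
log_p = np.log(p(xs))
score = np.gradient(log_p, xs)  # numerical estimate of d log p(x) / dx

# The score is ~0 near the two modes (x = -2, +2) AND near x = 0,
# even though p(0) is a local *minimum* of the density (a maximum of the energy).
for x0 in (-2.0, 0.0, 2.0):
    i = np.argmin(np.abs(xs - x0))
    print(f"x = {x0:+.1f}   p(x) = {p(xs[i]):.4f}   |score| = {abs(score[i]):.4f}")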
yGgjGkkbeFSbt
Saturating Auto-Encoder
[ "Ross Goroshin", "Yann LeCun" ]
We introduce a simple new regularizer for auto-encoders whose hidden-unit activation functions contain at least one zero-gradient (saturated) region. This regularizer explicitly encourages activations in the saturated region(s) of the corresponding activation function. We call these Saturating Auto-Encoders (SATAE). We show that the saturation regularizer explicitly limits the SATAE's ability to reconstruct inputs which are not near the data manifold. Furthermore, we show that a wide variety of features can be learned when different activation functions are used. Finally, connections are established with the Contractive and Sparse Auto-Encoders.
[ "satae", "simple new regularizer", "activation functions", "least", "region", "regularizer", "activations", "saturated region", "corresponding activation function", "saturation regularizer" ]
https://openreview.net/pdf?id=yGgjGkkbeFSbt
https://openreview.net/forum?id=yGgjGkkbeFSbt
__krPw9SreVyO
review
1,363,749,480,000
yGgjGkkbeFSbt
[ "everyone" ]
[ "Ross Goroshin" ]
ICLR.cc/2013/conference
2013
review: We thank the reviewers for their constructive comments. A revised version of the paper has been submitted to arXiv and should be available shortly. In addition to minor corrections and additions throughout the paper, we have added three new subsections: (1) a potential extension of the SATAE framework to include differentiable functions without a zero-gradient region (2) experiments on the CIFAR-10 dataset (3) future work. We have also expanded the introduction to better motivate our approach.
yGgjGkkbeFSbt
Saturating Auto-Encoder
[ "Ross Goroshin", "Yann LeCun" ]
We introduce a simple new regularizer for auto-encoders whose hidden-unit activation functions contain at least one zero-gradient (saturated) region. This regularizer explicitly encourages activations in the saturated region(s) of the corresponding activation function. We call these Saturating Auto-Encoders (SATAE). We show that the saturation regularizer explicitly limits the SATAE's ability to reconstruct inputs which are not near the data manifold. Furthermore, we show that a wide variety of features can be learned when different activation functions are used. Finally, connections are established with the Contractive and Sparse Auto-Encoders.
[ "satae", "simple new regularizer", "activation functions", "least", "region", "regularizer", "activations", "saturated region", "corresponding activation function", "saturation regularizer" ]
https://openreview.net/pdf?id=yGgjGkkbeFSbt
https://openreview.net/forum?id=yGgjGkkbeFSbt
UNlcNgK7BCN9v
review
1,363,840,020,000
yGgjGkkbeFSbt
[ "everyone" ]
[ "Ross Goroshin" ]
ICLR.cc/2013/conference
2013
review: The revised paper is now available on arXiv.
yGgjGkkbeFSbt
Saturating Auto-Encoder
[ "Ross Goroshin", "Yann LeCun" ]
We introduce a simple new regularizer for auto-encoders whose hidden-unit activation functions contain at least one zero-gradient (saturated) region. This regularizer explicitly encourages activations in the saturated region(s) of the corresponding activation function. We call these Saturating Auto-Encoders (SATAE). We show that the saturation regularizer explicitly limits the SATAE's ability to reconstruct inputs which are not near the data manifold. Furthermore, we show that a wide variety of features can be learned when different activation functions are used. Finally, connections are established with the Contractive and Sparse Auto-Encoders.
[ "satae", "simple new regularizer", "activation functions", "least", "region", "regularizer", "activations", "saturated region", "corresponding activation function", "saturation regularizer" ]
https://openreview.net/pdf?id=yGgjGkkbeFSbt
https://openreview.net/forum?id=yGgjGkkbeFSbt
zOUdY11jd_zJr
review
1,362,593,760,000
yGgjGkkbeFSbt
[ "everyone" ]
[ "anonymous reviewer 5bc2" ]
ICLR.cc/2013/conference
2013
title: review of Saturating Auto-Encoder review: Although this paper proposes an original (yet trivial) approach to regularize auto-encoders, it does not bring sufficient insight as to why saturating the hidden units should yield a better representation. The authors do not elaborate on whether the SATAE is a more general principle than previously proposed regularized auto-encoders (which imply saturation as a collateral effect) or just another auto-encoder in an already well-crowded space of models (i.e. auto-encoders and their variants). In recent years, many different types of auto-encoders have been proposed, and most of them had little or no theory to justify the need for their existence; despite all the efforts by some to create a viable theoretical framework (geometric or probabilistic), it seems that the effectiveness of auto-encoders in building representations has more to do with a lucky parametrisation or yet another regularization trick. I feel the authors should motivate their approach with some intuition about why I should saturate my auto-encoders when I can denoise my input, sparsify my latent variables or do space contraction. It's worrisome that most of the research done on auto-encoders has focused on coming up with the right regularization/parametrisation that would yield the best 'filters'. Following this path will ultimately make the majority of people reluctant to use auto-encoders because of their wide variety and the little knowledge about when to use what. The auto-encoder community should backtrack and clear the intuitive/theoretical noise left behind, rather than racing for the next new model.
yGgjGkkbeFSbt
Saturating Auto-Encoder
[ "Ross Goroshin", "Yann LeCun" ]
We introduce a simple new regularizer for auto-encoders whose hidden-unit activation functions contain at least one zero-gradient (saturated) region. This regularizer explicitly encourages activations in the saturated region(s) of the corresponding activation function. We call these Saturating Auto-Encoders (SATAE). We show that the saturation regularizer explicitly limits the SATAE's ability to reconstruct inputs which are not near the data manifold. Furthermore, we show that a wide variety of features can be learned when different activation functions are used. Finally, connections are established with the Contractive and Sparse Auto-Encoders.
[ "satae", "simple new regularizer", "activation functions", "least", "region", "regularizer", "activations", "saturated region", "corresponding activation function", "saturation regularizer" ]
https://openreview.net/pdf?id=yGgjGkkbeFSbt
https://openreview.net/forum?id=yGgjGkkbeFSbt
NNd3mgfs39NaH
review
1,362,361,200,000
yGgjGkkbeFSbt
[ "everyone" ]
[ "anonymous reviewer 5955" ]
ICLR.cc/2013/conference
2013
title: review of Saturating Auto-Encoder review: This paper proposes a novel kind of penalty for regularizing autoencoder training, one that encourages activations to move towards flat (saturated) regions of the unit's activation function. It is related to sparse autoencoders and contractive autoencoders, which also happen to encourage saturation. But the proposed approach does so more directly and explicitly, through a 'complementary nonlinearity' that depends on the specific activation function chosen. Pros: + a novel and original regularization principle for autoencoders that relates to earlier approaches, but is, from a certain perspective, more general (at least for a specific subclass of activation functions). + the paper yields significant insight into the mechanism at work in such regularized autoencoders, also clearly relating it to sparsity and contractive penalties. + provides a credible path of explanation for the dramatic effect that the choice of different saturating activation functions has on the learned filters, and qualitatively shows it. Cons: - The proposed regularization principle, as currently defined, only seems to make sense for activation functions that are piecewise linear and have some perfectly flat regions (e.g. a sigmoid activation would yield no penalty!). This should be discussed. - There is no quantitative measure of the usefulness of the representation learned with this principle. The usual comparison of classification or denoising performance based on the learned features with those obtained using other autoencoder regularization principles would be a most welcome addition.
yGgjGkkbeFSbt
Saturating Auto-Encoder
[ "Ross Goroshin", "Yann LeCun" ]
We introduce a simple new regularizer for auto-encoders whose hidden-unit activation functions contain at least one zero-gradient (saturated) region. This regularizer explicitly encourages activations in the saturated region(s) of the corresponding activation function. We call these Saturating Auto-Encoders (SATAE). We show that the saturation regularizer explicitly limits the SATAE's ability to reconstruct inputs which are not near the data manifold. Furthermore, we show that a wide variety of features can be learned when different activation functions are used. Finally, connections are established with the Contractive and Sparse Auto-Encoders.
[ "satae", "simple new regularizer", "activation functions", "least", "region", "regularizer", "activations", "saturated region", "corresponding activation function", "saturation regularizer" ]
https://openreview.net/pdf?id=yGgjGkkbeFSbt
https://openreview.net/forum?id=yGgjGkkbeFSbt
MAHULigTUZMSF
comment
1,363,043,520,000
pn6HDOWYfCDYA
[ "everyone" ]
[ "Sixin Zhang" ]
ICLR.cc/2013/conference
2013
reply: The 'complementary nonlinearity' is very interesting; it makes me think of wavelets and the transforming autoencoder. One question I was asking is how to make use of the information that is 'thrown' away (say after applying the nonlinearity, or the low-pass filter), or maybe that information is just noise? In the saturating AE, the complementary nonlinearity is the residue of the projection (formula 1). What is that projective space? Why is the projection defined elementwise (cf. softmax -> simplex)? How general can the nonlinearity be extended for general signal representation (say Scattering Convolution Networks) and classification? I am just curious ~
yGgjGkkbeFSbt
Saturating Auto-Encoder
[ "Ross Goroshin", "Yann LeCun" ]
We introduce a simple new regularizer for auto-encoders whose hidden-unit activation functions contain at least one zero-gradient (saturated) region. This regularizer explicitly encourages activations in the saturated region(s) of the corresponding activation function. We call these Saturating Auto-Encoders (SATAE). We show that the saturation regularizer explicitly limits the SATAE's ability to reconstruct inputs which are not near the data manifold. Furthermore, we show that a wide variety of features can be learned when different activation functions are used. Finally, connections are established with the Contractive and Sparse Auto-Encoders.
[ "satae", "simple new regularizer", "activation functions", "least", "region", "regularizer", "activations", "saturated region", "corresponding activation function", "saturation regularizer" ]
https://openreview.net/pdf?id=yGgjGkkbeFSbt
https://openreview.net/forum?id=yGgjGkkbeFSbt
pn6HDOWYfCDYA
review
1,362,779,100,000
yGgjGkkbeFSbt
[ "everyone" ]
[ "Rostislav Goroshin, Yann LeCun" ]
ICLR.cc/2013/conference
2013
review: In response to 5bc2: the principle behind SATAE is a unification of the principles behind sparse autoencoders (and sparse coding in general) and contracting autoencoders. Basically, the main question with unsupervised learning is how to learn a contrast function (energy function in the energy-based framework, negative log likelihood in the probabilistic framework) that takes low values on the data manifold (or near it) and higher values everywhere else. It's easy to make the energy low near data points. The hard part is making it higher everywhere else. There are basically 5 major classes of methods to do so: 1. bound the volume of stuff that can have low energy (e.g. normalized probabilistic models, K-means, PCA); 2. use a regularizer so that the volume of stuff that has low energy is as small as possible (sparse coding, contracting AE, saturating AE); 3. explicitly push up on the energy of selected points, preferably outside the data manifold, often nearby (MC and MCMC methods, contrastive divergence); 4. build local minima of the energy around data points by making the gradient small and the hessian large (score matching); 5. learn the vector field of the gradient of the energy (instead of the energy itself) so that it points away from the data manifold (denoising autoencoder). SATAE, just like the contracting AE and sparse modeling, falls in category 2. Basically, if your auto-encoding function is G(X,W), X being the input and W the trainable parameters, and if your unregularized energy function is E(X,W) = ||X - G(X,W)||^2, then if G is constant when X varies along a particular direction, the energy will grow quadratically along that direction (technically, G doesn't need to be constant, but merely to have a gradient smaller than one). The more directions along which G(X,W) has a low gradient, the smaller the volume of stuff with low energy. One advantage of SATAE is its extreme simplicity. You could see it as a version of the contracting AE cut down to its bare bones. We can always obfuscate this simple principle with complicated math, but how would that help? At some point it will become necessary to make more precise theoretical statements, but for now we are merely searching for basic principles.
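A tiny numerical sketch of the quadratic-growth argument above (the projection function G below is a toy assumption, not the SATAE architecture): when G is constant along a direction, the energy ||X - G(X)||^2 grows quadratically as X moves along that direction away from the "manifold".

import numpy as np

# Assumed toy auto-encoding function: project onto the x-axis (a 1-d "manifold" in 2-d).
# G is constant along the vertical direction, i.e. its gradient along that direction is zero.
def G(X):
    out = X.copy()
    out[:, 1] = 0.0
    return out

def energy(X):
    return np.sum((X - G(X)) ** 2, axis=1)

# Move a point off the manifold along the direction where G is constant.
t = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
X = np.stack([np.ones_like(t), t], axis=1)  # points (1, t) at distance t from the x-axis
print(energy(X))  # -> [ 0.    0.25  1.    4.   16.  ], i.e. the energy grows as t^2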
yGgjGkkbeFSbt
Saturating Auto-Encoder
[ "Ross Goroshin", "Yann LeCun" ]
We introduce a simple new regularizer for auto-encoders whose hidden-unit activation functions contain at least one zero-gradient (saturated) region. This regularizer explicitly encourages activations in the saturated region(s) of the corresponding activation function. We call these Saturating Auto-Encoders (SATAE). We show that the saturation regularizer explicitly limits the SATAE's ability to reconstruct inputs which are not near the data manifold. Furthermore, we show that a wide variety of features can be learned when different activation functions are used. Finally, connections are established with the Contractive and Sparse Auto-Encoders.
[ "satae", "simple new regularizer", "activation functions", "least", "region", "regularizer", "activations", "saturated region", "corresponding activation function", "saturation regularizer" ]
https://openreview.net/pdf?id=yGgjGkkbeFSbt
https://openreview.net/forum?id=yGgjGkkbeFSbt
BSYbBsx9_5Suw
review
1,361,946,900,000
yGgjGkkbeFSbt
[ "everyone" ]
[ "anonymous reviewer 3942" ]
ICLR.cc/2013/conference
2013
title: review of Saturating Auto-Encoder review: This paper proposes a regularizer for auto-encoders with nonlinearities that have a region with zero gradient. The paper mentions three nonlinearities that fit into that category: shrinkage, saturated linear, and rectified linear. The regularizer basically penalizes how much the activation deviates from saturation. The insight is that at saturation, the unit conveys less information compared to when it is in a non-saturated region. While I generally like the paper, I think it could be made a lot stronger by having more experimental results showing the practical benefits of the nonlinearities and their associated regularizers. I am particularly interested in the case of the saturated linear function. It would be interesting to compare the results of the proposed regularizer and the sparsity penalty. More concretely, f(x) = 1 would incur some loss under the conventional sparsity penalty, whereas the new regularizer incurs none. From the energy conservation point of view, it is not appealing to maintain the neuron at high activation, and the new regularizer does not capture that. But it may be the case that, for a network to generalize, we only need to restrict the neurons to be in the saturation regions. Numerical comparisons on some classification benchmarks would be helpful. It would also be interesting to test the method on a classification dataset to see if it makes a difference to use the new regularizers.
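A small numeric illustration of the reviewer's point about f(x) = 1. The form of the saturation penalty below (the distance from the pre-activation to the nearest flat region of the saturated-linear activation) is an assumption based on the description in this review, not necessarily the paper's exact definition.

import numpy as np

def sat_linear(u):
    # Saturated linear activation: flat at 0 for u <= 0 and flat at 1 for u >= 1.
    return np.clip(u, 0.0, 1.0)

def saturation_penalty(u):
    # Assumed complementary nonlinearity: distance to the nearest zero-gradient region.
    return np.minimum(np.abs(u), np.abs(u - 1.0)) * ((u > 0) & (u < 1))

def l1_penalty(u):
    # Conventional sparsity penalty on the activation value.
    return np.abs(sat_linear(u))

for u in (-0.5, 0.25, 0.5, 1.0, 1.5):
    a = sat_linear(u)
    print(f"u={u:+.2f}  activation={a:.2f}  L1={l1_penalty(u):.2f}  saturation={saturation_penalty(u):.2f}")

# A fully "on" unit (activation 1) pays an L1 cost of 1 but zero saturation cost,
# while an activation of 0.5 pays the maximal saturation cost here (0.5).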
zzM42D6twOztS
Stochastic Gradient Estimate Variance in Contrastive Divergence and Persistent Contrastive Divergence
[ "Mathias Berglund", "Tapani Raiko" ]
Contrastive Divergence (CD) and Persistent Contrastive Divergence (PCD) are popular methods for training the weights of Restricted Boltzmann Machines. However, both methods use an approximate method for sampling from the model distribution. As a side effect, these approximations yield significantly different variances for stochastic gradient estimates of individual samples. In this paper we show empirically that CD has a lower stochastic gradient estimate variance than exact sampling, while the sum of subsequent PCD estimates has a higher variance than exact sampling. The results give one explanation to the finding that CD can be used with smaller minibatches or higher learning rates than PCD.
[ "contrastive divergence", "pcd", "persistent contrastive divergence", "popular methods", "weights", "restricted boltzmann machines" ]
https://openreview.net/pdf?id=zzM42D6twOztS
https://openreview.net/forum?id=zzM42D6twOztS
FFW7YqOZd2FC0
review
1,392,658,680,000
zzM42D6twOztS
[ "everyone" ]
[ "Mathias Berglund" ]
ICLR.cc/2014/workshop
2014
review: The revised version of the paper has now been published. Thank you for all the helpful comments. As an additional comment, please note that we are not measuring the variance of the average of the estimates obtained with M independent chains (i.e. we use a minibatch size of 1), since the variance of estimates obtained with averaging (i.e. using a minibatch size of M>1) is easy to compute from the case of a minibatch size of 1, given that the different estimates are independently sampled.
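For concreteness, the relation the authors appeal to above is the standard identity for independent, identically distributed gradient estimates g_1, ..., g_M (one per data point in a minibatch of size M):

\operatorname{Var}\left( \frac{1}{M} \sum_{m=1}^{M} g_m \right) = \frac{1}{M}\, \operatorname{Var}(g_1),

so reporting the minibatch-size-1 variance determines the variance for any minibatch size M.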
zzM42D6twOztS
Stochastic Gradient Estimate Variance in Contrastive Divergence and Persistent Contrastive Divergence
[ "Mathias Berglund", "Tapani Raiko" ]
Contrastive Divergence (CD) and Persistent Contrastive Divergence (PCD) are popular methods for training the weights of Restricted Boltzmann Machines. However, both methods use an approximate method for sampling from the model distribution. As a side effect, these approximations yield significantly different variances for stochastic gradient estimates of individual samples. In this paper we show empirically that CD has a lower stochastic gradient estimate variance than exact sampling, while the sum of subsequent PCD estimates has a higher variance than exact sampling. The results give one explanation to the finding that CD can be used with smaller minibatches or higher learning rates than PCD.
[ "contrastive divergence", "pcd", "persistent contrastive divergence", "popular methods", "weights", "restricted boltzmann machines" ]
https://openreview.net/pdf?id=zzM42D6twOztS
https://openreview.net/forum?id=zzM42D6twOztS
adiPdjpKvR56T
review
1,391,848,860,000
zzM42D6twOztS
[ "everyone" ]
[ "anonymous reviewer 9c34" ]
ICLR.cc/2014/workshop
2014
title: review of Stochastic Gradient Estimate Variance in Contrastive Divergence and Persistent Contrastive Divergence review: This paper presents an empirical study of the variance in gradient estimates between contrastive divergence (CD) and persistent contrastive divergence (PCD). It is well known that PCD tends to be less stable than CD, requiring a smaller learning rate and larger mini-batch sizes. The paper does a fairly good job of empirically verifying this phenomenon on several image datasets, and most of the results are consistent with expectations. The observation that the variance increases toward the end of learning is an interesting and not entirely obvious finding. One issue though is that the paper seems to miss a crucial part of the story: CD learning enjoys a low variance at the cost of an increase in bias. It is easy to construct a gradient estimate that exhibits zero variance; however, practically speaking, this would not be very useful. What is more interesting is the trade-off between bias and variance. For example, PCD exhibits significant variance on the silhouettes dataset. Does this mean that it requires an impractically small learning rate? It has been shown in the past that the technique of iterate averaging can be used to remove much of the variance in PCD learning, but that it does not work nearly as well when applied to CD [1]. The fact that PCD is asymptotically unbiased but exhibits high variance compared to CD supports these results. [2] should be cited for PCD as well. References: [1] Kevin Swersky, Bo Chen, Benjamin Marlin, and Nando de Freitas, “A Tutorial on Stochastic Approximation Algorithms for Training Restricted Boltzmann Machines and Deep Belief Nets,” Information Theory and Applications Workshop, 2010. [2] Laurent Younes, “Parametric inference for imperfectly observed Gibbsian fields,” Probability Theory and Related Fields, vol. 82, no. 4, pp. 625–645, 1989.
zzM42D6twOztS
Stochastic Gradient Estimate Variance in Contrastive Divergence and Persistent Contrastive Divergence
[ "Mathias Berglund", "Tapani Raiko" ]
Contrastive Divergence (CD) and Persistent Contrastive Divergence (PCD) are popular methods for training the weights of Restricted Boltzmann Machines. However, both methods use an approximate method for sampling from the model distribution. As a side effect, these approximations yield significantly different variances for stochastic gradient estimates of individual samples. In this paper we show empirically that CD has a lower stochastic gradient estimate variance than exact sampling, while the sum of subsequent PCD estimates has a higher variance than exact sampling. The results give one explanation to the finding that CD can be used with smaller minibatches or higher learning rates than PCD.
[ "contrastive divergence", "pcd", "persistent contrastive divergence", "popular methods", "weights", "restricted boltzmann machines" ]
https://openreview.net/pdf?id=zzM42D6twOztS
https://openreview.net/forum?id=zzM42D6twOztS
zzq5dAvF5ndg4
review
1,392,068,280,000
zzM42D6twOztS
[ "everyone" ]
[ "Mathias Berglund" ]
ICLR.cc/2014/workshop
2014
review: Reply to both reviewers: Thank you for the extensive and helpful comments. As the bias of CD is quite well documented while the variance of PCD vs. CD is less so, the paper intentionally does not focus on the bias. We should, however, make this clearer in the introduction. Although it would be interesting to study the bias/variance trade-off in all the training settings in the paper, we still saw value in documenting the variance on its own, in order to give a sense of the magnitude of the differences in variance between PCD and CD. We would therefore still argue that documenting only the variance has value, although we agree that it would be meaningful to explore the bias/variance trade-off in a more extended discussion of the topic. We will submit a revised version of the paper based on the comments as soon as possible. Reply to Anonymous 9c34: Thank you for the references; we should mention iterate averaging as a method to alleviate the high variance of PCD. Thank you also for the second PCD reference. Reply to Anonymous 11c9: Thank you for the comments. We fear that reviewer 'Anonymous 11c9' is dubious about the experimental setting in Figure 2 due to a misunderstanding, but we hope to clear things up in our response (see below), and we hope to make that part much clearer in the next revision. Regarding the request for clarification, Figures 1 and 2 show quite different results, which we realize can be misleading and gives an impression of a large asymmetry between CD and PCD. In Figure 1, the x-axis depicts the number of CD steps for the *same* data point, where only the number of steps in the CD sampling increases. Therefore, following one of the lines in Figure 1 along the x-axis, we are comparing the same update starting from the same data point, but with a longer and longer chain for the negative phase. The figure is hence what would be expected from a typical figure comparing different values of k for CD-k. However, in Figure 2, we are looking at the variance of the *mean* (or sum, see below) of subsequent estimates. Differing from Figure 1, in Figure 2 we are summing up subsequent estimates along the x-axis, which means that the further we go along the x-axis, the more estimates we have included. The reason for doing so is that the high variance of PCD is hypothesized to stem from subsequent negative phase estimates being dependent. Please also note that in Figure 2, the PCD variance is divided by the variance of the sum of k estimates using “exact” sampling – which means that the figure is identical to taking the mean of subsequent gradient estimates and comparing it to the mean of estimates with “exact” sampling. Therefore, for a chain that mixes well, this relative variance should not increase with summing more steps during training. This we can also see in Figure 2, where the variance for MNIST and CIFAR in the beginning of training is very close to “exact” sampling when summing 1-20 subsequent steps (the horizontal lines in Figure 2). However, we realize that the text would be clearer if we used the word mean instead of sum, and we will revise the text accordingly. The pseudocode for Figure 2 is presented below. Regarding the baseline, we ran M >> 1 independent chains for 1000 steps, i.e. we aimed exactly at running a large number of independent chains until convergence. Although we have not tried to validate whether 1000 steps is enough, we have simply assumed that it suffices for approximate convergence.
Regarding evaluating PCD variance on a model trained via CD, we agree that the most reliable results would be obtained if we trained the model with e.g. the enhanced gradient and parallel tempering instead of CD. Regarding the I-CD experiments, the 10 gradient estimates were run by initializing the Markov chain from a random training sample (which was different in all the 10 runs), which was also different from the training sample used for the positive phase. Although we agree that the result of higher I-CD variance compared to CD is trivial, we still found the magnitude of the I-CD variance relevant to display. If, for instance, the variance of I-CD were very similar to that of CD (but much less than the “exact” estimate), the low variance of CD could be explained by the fact that we run the chain very few steps from *any* data point. However, we agree that the text should be changed to state this more clearly. Thank you also for the clarity comments; as you assumed, they were both indeed errors in the text.

Pseudocode for Figure 2:

for each data point in the data set {
    use the data point for the positive phase
    run negative particle sampling for 1000 steps from a random data point
    initialize gradient_sum to zero
    do 20 times {
        calculate the gradient estimate using the current positive and negative particles
        add the gradient estimate to gradient_sum
        store sufficient statistics of gradient_sum
        pick a new random data point for the positive phase
        run the negative particle chain one step forward (independent of the positive phase)
    }
}

for each data point in the data set {
    use the data point for the positive phase
    run negative particle sampling for 1000 steps from a random data point
    initialize gradient_sum_exact to zero
    do 20 times {
        calculate the gradient estimate using the current positive and negative particles
        add the gradient estimate to gradient_sum_exact
        store sufficient statistics of gradient_sum_exact
        pick a new random data point for the positive phase
        run negative particle sampling for 1000 steps from a random data point
    }
}

for each of the 20 steps separately:
    compute the sum of componentwise variances from the statistics of gradient_sum
    compute the sum of componentwise variances from the statistics of gradient_sum_exact
    divide the first sum by the second sum
zzM42D6twOztS
Stochastic Gradient Estimate Variance in Contrastive Divergence and Persistent Contrastive Divergence
[ "Mathias Berglund", "Tapani Raiko" ]
Contrastive Divergence (CD) and Persistent Contrastive Divergence (PCD) are popular methods for training the weights of Restricted Boltzmann Machines. However, both methods use an approximate method for sampling from the model distribution. As a side effect, these approximations yield significantly different variances for stochastic gradient estimates of individual samples. In this paper we show empirically that CD has a lower stochastic gradient estimate variance than exact sampling, while the sum of subsequent PCD estimates has a higher variance than exact sampling. The results give one explanation to the finding that CD can be used with smaller minibatches or higher learning rates than PCD.
[ "contrastive divergence", "pcd", "persistent contrastive divergence", "popular methods", "weights", "restricted boltzmann machines" ]
https://openreview.net/pdf?id=zzM42D6twOztS
https://openreview.net/forum?id=zzM42D6twOztS
FsLVFk86XIY5D
review
1,391,972,400,000
zzM42D6twOztS
[ "everyone" ]
[ "anonymous reviewer 11c9" ]
ICLR.cc/2014/workshop
2014
title: review of Stochastic Gradient Estimate Variance in Contrastive Divergence and Persistent Contrastive Divergence review: This paper provides an empirical evaluation of the variance of maximum likelihood gradient estimators for RBMs, comparing Contrastive Divergence to Persistent CD. The results confirm a well-known belief that PCD suffers from higher variance than the biased CD gradient. While the result may not be surprising, I believe the authors are correct in stating that the issue had not properly been investigated in the literature. It is unfortunate, however, that the authors avoid the much more important question of the trade-off between bias and variance. Before making a final judgement on the paper, however, I would ask that the authors clarify the following potential major issue. Other more general feedback for improving the paper follows. Request for Clarification: Why the asymmetry between the estimation of CD-k vs PCD-k gradients? The use of PCD-k is highly unusual. If the goal was to study variance as a function of the ergodicity of a single Markov chain, then PCD-k gradients should have been computed with a single training example (for the positive phase) and the negative phase gradient computed by *averaging* over the k steps of the negative phase chain. Could the authors clarify (through pseudocode) how the gradients and their variance are computed for the experiments of Figure 2? Due to the loss of ergodicity of the Markov chain, the effective number of samples used to estimate the model expectation should indeed be larger at 10 epochs than at 500 epochs. It is thus predictable that the variance of the gradient estimates would increase during training. However, this is for a fixed value of k. I find it very strange that the variance would *increase* with k (at a fixed point of training). I am left wondering if this is an artefact of the experimental protocol: the authors seem to be computing the variance of the *sum* of k gradient estimates. This quantity will indeed grow with k, and will do so linearly if the estimates at each k are assumed to be independent. The linearity of the curves in Fig. 2 gives some weight to this hypothesis. Other general feedback: * One area of concern is that the paper evaluates PCD in a regime which is not commonly used in practice: i.e. estimating the negative phase expectation via the correlated samples of a single Markov chain. I worry that some readers may conclude that PCD is not viable, due to its excessively large variance. For this reason, I think the paper would benefit from repeating the experiments but averaging over M independent chains. * A perhaps more appropriate baseline would be to run M >> 1 independent Markov chains to convergence and average the resulting gradient estimates. This might not change much, but the above would yield a better estimate of the ML gradient than CD-1000. * Evaluating the variance of PCD gradients on a model trained via CD may be problematic. The mixing issues of PCD can be exacerbated when run on a CD-trained model, where the energy has only been fit locally around training data (Desjardins, 2010). While I do not expect the conclusions to change, I would be interested in seeing the same results on a PCD-k trained model. * RE: I-CD experiments. 'This supports the hypothesis that the low variance of CD [stems from] the negative particle [being] sampled from the positive particle, and not from that the negative particle is sampled only a limited number of steps from an arbitrary data point'.
I am not sure that the experiment allows you to draw this conclusion. When computing the 10 gradient estimates (for each training example), did you initialize the Markov chain from a random (but fixed throughout the 10 gradient evaluations) training example? Otherwise, I believe the conclusion is rather uninteresting and doesn't shed light on the 'importance' of initializing the negative chain from the positive phase training data. In CD training, the only variance stems from the trajectory taken by the (short) Markov chain from a fixed starting point. In I-CD, there are two sources of variance: (1) the trajectory of the chain, and (2) the starting point of the chain. If the chain is initialized randomly for the 10 gradient evaluations, then this will undoubtedly increase the variance of the estimator (but with lower bias). Clarity: * In I-CD, the 'negative particle is sampled from a random positive particle'? I would make explicit that you initialize the chain of I-CD from a random training example. In Section 4, 'arbitrary data point' left me wondering if you were instead initializing the chain from an independent pseudo-sample of the model (using e.g. a uniform distribution or a factorial approximation to p(v)). * 'Conversely, the variance of the mean of subsequent variance estimates using PCD is significantly higher'? Did the authors mean 'the variance of the mean of subsequent gradient estimates'? Otherwise, please consider rephrasing.
eOP7egJ1wveRW
Learning Factored Representations in a Deep Mixture of Experts
[ "David Eigen", "Marc'Aurelio Ranzato", "Ilya Sutskever" ]
Mixtures of Experts combine the outputs of several 'expert' networks, each of which specializes in a different part of the input space. This is achieved by training a 'gating' network that maps each input to a distribution over the experts. Such models show promise for building larger networks that are still cheap to compute at test time, and more parallelizable at training time. In this this work, we extend the Mixture of Experts to a stacked model, the Deep Mixture of Experts, with multiple sets of gating and experts. This exponentially increases the number of effective experts by associating each input with a combination of experts at each layer, yet maintains a modest model size. On a randomly translated version of the MNIST dataset, we find that the Deep Mixture of Experts automatically learns to develop location-dependent ('where') experts at the first layer, and class-specific ('what') experts at the second layer. In addition, we see that the different combinations are in use when the model is applied to a dataset of speech monophones. These demonstrate effective use of all expert combinations.
[ "experts", "deep mixture", "factored representations", "input", "experts mixtures", "outputs", "several", "networks", "different part", "input space" ]
https://openreview.net/pdf?id=eOP7egJ1wveRW
https://openreview.net/forum?id=eOP7egJ1wveRW
--5uYip1KdY1B
review
1,391,843,280,000
eOP7egJ1wveRW
[ "everyone" ]
[ "anonymous reviewer 3af9" ]
ICLR.cc/2014/workshop
2014
title: review of Learning Factored Representations in a Deep Mixture of Experts review: The paper introduces a deep mixture of experts model which contains multiple layers, each of which contains multiple experts and a gating network. The idea is nice and the presentation is clear, but the experiments lack proper, needed comparisons with baseline systems for the Jittered MNIST and the monophone speech datasets. As the authors mention in the conclusion, the experiments use all experts for all data points, which doesn't achieve the main purpose of the paper, i.e. faster training and testing. It is important to show how this system performs against a deep NN baseline with the same number of parameters in terms of accuracy and training time per epoch. Regarding the speech task: what is the error you are presenting in Table 2, the phone or the frame error rate?
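For readers unfamiliar with the model, a minimal sketch of a two-layer mixture-of-experts forward pass as described in the abstract and this review. The layer sizes, the use of linear experts with a ReLU, and the softmax gating parameterization are assumptions for illustration, not the authors' exact configuration.

import numpy as np

rng = np.random.default_rng(0)

def softmax(a):
    a = a - a.max()
    e = np.exp(a)
    return e / e.sum()

def mixture_layer(x, experts, gate_W):
    # The gating network maps the input to a distribution over experts;
    # the layer output is the gate-weighted sum of the expert outputs.
    g = softmax(gate_W @ x)
    return sum(g_i * (W @ x) for g_i, W in zip(g, experts))

d_in, d_hid, n_classes, n_experts = 784, 128, 10, 4

experts1 = [rng.normal(scale=0.01, size=(d_hid, d_in)) for _ in range(n_experts)]
gate1 = rng.normal(scale=0.01, size=(n_experts, d_in))
experts2 = [rng.normal(scale=0.01, size=(d_hid, d_hid)) for _ in range(n_experts)]
gate2 = rng.normal(scale=0.01, size=(n_experts, d_hid))
W_out = rng.normal(scale=0.01, size=(n_classes, d_hid))

x = rng.normal(size=d_in)
z1 = np.maximum(mixture_layer(x, experts1, gate1), 0.0)   # first mixture layer
z2 = np.maximum(mixture_layer(z1, experts2, gate2), 0.0)  # second mixture layer
probs = softmax(W_out @ z2)                               # final softmax over classes
print(probs.shape)  # (10,)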
eOP7egJ1wveRW
Learning Factored Representations in a Deep Mixture of Experts
[ "David Eigen", "Marc'Aurelio Ranzato", "Ilya Sutskever" ]
Mixtures of Experts combine the outputs of several 'expert' networks, each of which specializes in a different part of the input space. This is achieved by training a 'gating' network that maps each input to a distribution over the experts. Such models show promise for building larger networks that are still cheap to compute at test time, and more parallelizable at training time. In this this work, we extend the Mixture of Experts to a stacked model, the Deep Mixture of Experts, with multiple sets of gating and experts. This exponentially increases the number of effective experts by associating each input with a combination of experts at each layer, yet maintains a modest model size. On a randomly translated version of the MNIST dataset, we find that the Deep Mixture of Experts automatically learns to develop location-dependent ('where') experts at the first layer, and class-specific ('what') experts at the second layer. In addition, we see that the different combinations are in use when the model is applied to a dataset of speech monophones. These demonstrate effective use of all expert combinations.
[ "experts", "deep mixture", "factored representations", "input", "experts mixtures", "outputs", "several", "networks", "different part", "input space" ]
https://openreview.net/pdf?id=eOP7egJ1wveRW
https://openreview.net/forum?id=eOP7egJ1wveRW
OOxLKAd6LBO_C
review
1,391,636,100,000
eOP7egJ1wveRW
[ "everyone" ]
[ "Liangliang Cao" ]
ICLR.cc/2014/workshop
2014
review: I am interested in the topic of this paper, but my impression after reading is still that the deep MoE is hard to train and that we need to know a number of tricks, including constrained training and fine tuning. I would expect it to be even harder to train deeper models (say, 4 or 5 layers). Several suggestions: - About the experimental comparison. If I understand correctly, Tables 1 and 2 only compare performances of several configurations of the 2-layer MoE. It would be interesting to see how much better it is compared with the basic (1-layer) MoE. - About Jordan and Jacobs' HMoE. Section 2 reviews the differences between the DMoE and the HMoE. Which model is more scalable? I am curious about the comparison with the HMoE on both accuracy and speed. - Training + testing accuracy. I like that the current submission conveys more information by reporting both training and testing performance. However, it would be even more interesting to report the curves of the two errors during SGD training. Also, I am a little confused: are you using a validation set with SGD? How is the performance on the validation set during training?
eOP7egJ1wveRW
Learning Factored Representations in a Deep Mixture of Experts
[ "David Eigen", "Marc'Aurelio Ranzato", "Ilya Sutskever" ]
Mixtures of Experts combine the outputs of several 'expert' networks, each of which specializes in a different part of the input space. This is achieved by training a 'gating' network that maps each input to a distribution over the experts. Such models show promise for building larger networks that are still cheap to compute at test time, and more parallelizable at training time. In this this work, we extend the Mixture of Experts to a stacked model, the Deep Mixture of Experts, with multiple sets of gating and experts. This exponentially increases the number of effective experts by associating each input with a combination of experts at each layer, yet maintains a modest model size. On a randomly translated version of the MNIST dataset, we find that the Deep Mixture of Experts automatically learns to develop location-dependent ('where') experts at the first layer, and class-specific ('what') experts at the second layer. In addition, we see that the different combinations are in use when the model is applied to a dataset of speech monophones. These demonstrate effective use of all expert combinations.
[ "experts", "deep mixture", "factored representations", "input", "experts mixtures", "outputs", "several", "networks", "different part", "input space" ]
https://openreview.net/pdf?id=eOP7egJ1wveRW
https://openreview.net/forum?id=eOP7egJ1wveRW
ccU6RwPFLaROG
comment
1,392,878,160,000
--5uYip1KdY1B
[ "everyone" ]
[ "David Eigen" ]
ICLR.cc/2014/workshop
2014
reply: Thank you for your comments and suggestions. We now include DNN baselines for Jittered MNIST. In response to your question re: which error rate, it is the phone error. This has been updated in the new version as well.
eOP7egJ1wveRW
Learning Factored Representations in a Deep Mixture of Experts
[ "David Eigen", "Marc'Aurelio Ranzato", "Ilya Sutskever" ]
Mixtures of Experts combine the outputs of several 'expert' networks, each of which specializes in a different part of the input space. This is achieved by training a 'gating' network that maps each input to a distribution over the experts. Such models show promise for building larger networks that are still cheap to compute at test time, and more parallelizable at training time. In this this work, we extend the Mixture of Experts to a stacked model, the Deep Mixture of Experts, with multiple sets of gating and experts. This exponentially increases the number of effective experts by associating each input with a combination of experts at each layer, yet maintains a modest model size. On a randomly translated version of the MNIST dataset, we find that the Deep Mixture of Experts automatically learns to develop location-dependent ('where') experts at the first layer, and class-specific ('what') experts at the second layer. In addition, we see that the different combinations are in use when the model is applied to a dataset of speech monophones. These demonstrate effective use of all expert combinations.
[ "experts", "deep mixture", "factored representations", "input", "experts mixtures", "outputs", "several", "networks", "different part", "input space" ]
https://openreview.net/pdf?id=eOP7egJ1wveRW
https://openreview.net/forum?id=eOP7egJ1wveRW
3fVm9U8jmI9ZW
review
1,389,855,180,000
eOP7egJ1wveRW
[ "everyone" ]
[ "anonymous reviewer 4f75" ]
ICLR.cc/2014/workshop
2014
title: review of Learning Factored Representations in a Deep Mixture of Experts review: This paper extends the mixture-of-experts (MoE) model by stacking several blocks of MoEs to form a deep MoE. In this model, each mixture weight is implemented with a gating network. The mixtures at each block are different. The whole deep MoE is trained jointly using the stochastic gradient descent algorithm. The motivation of the work is to reduce decoding time by exploiting the structure imposed by the MoE model. The model was evaluated on the MNIST and speech monophone classification tasks. The idea of the deep MoE is interesting and, although not difficult to come up with, is novel. I found the fact that the first and second blocks focus on distinguishing different patterns particularly interesting. However, I feel that the effectiveness and the benefit of the model are not supported by the evidence presented in the paper. 1. It's not clear how or whether the proposed deep MoE can beat fully connected normal DNNs if the same number of model parameters is used (or even when the deep MoE uses more parameters). A comparison against the fully connected DNN on the two tasks is needed. In many cases we don't want to sacrifice accuracy for a small speed improvement. 2. It's not clear whether the claimed computation reduction is real. It would be desirable to provide a comparison of the computation cost between the deep MoE and the fully connected conventional DNN, both when the number of classes is small (say 10) and when it is large (say 1K-10K). The comparison should also consider the fact that the sparseness pattern in the deep MoE is random and unknown beforehand, and so may not save computation at all when SIMD instructions are used. 3. It is also unclear whether the deep MoE performs better than the single-block MoE. It appears to me that, according to the results presented, the deep MoE actually performs worse. The concatenation trick improved the result on MNIST. However, from my experience, the gain is more likely from the concatenation of the hidden features rather than from the deep architecture used. There is also a minor presentation issue: the models on rows 2 and 3 in Table 2 are identical, but the results are different. What is the difference between these two models?
eOP7egJ1wveRW
Learning Factored Representations in a Deep Mixture of Experts
[ "David Eigen", "Marc'Aurelio Ranzato", "Ilya Sutskever" ]
Mixtures of Experts combine the outputs of several 'expert' networks, each of which specializes in a different part of the input space. This is achieved by training a 'gating' network that maps each input to a distribution over the experts. Such models show promise for building larger networks that are still cheap to compute at test time, and more parallelizable at training time. In this this work, we extend the Mixture of Experts to a stacked model, the Deep Mixture of Experts, with multiple sets of gating and experts. This exponentially increases the number of effective experts by associating each input with a combination of experts at each layer, yet maintains a modest model size. On a randomly translated version of the MNIST dataset, we find that the Deep Mixture of Experts automatically learns to develop location-dependent ('where') experts at the first layer, and class-specific ('what') experts at the second layer. In addition, we see that the different combinations are in use when the model is applied to a dataset of speech monophones. These demonstrate effective use of all expert combinations.
[ "experts", "deep mixture", "factored representations", "input", "experts mixtures", "outputs", "several", "networks", "different part", "input space" ]
https://openreview.net/pdf?id=eOP7egJ1wveRW
https://openreview.net/forum?id=eOP7egJ1wveRW
__vZdXgmZXdMz
comment
1,392,877,860,000
3fVm9U8jmI9ZW
[ "everyone" ]
[ "David Eigen" ]
ICLR.cc/2014/workshop
2014
reply: Thank you for your review. Responding to your various points: 'A comparison against the fully connected DNN on the two tasks is needed' For Jittered MNIST, we ran these baselines and are including the results. For Monophone Speech, there are unfortunately some IP issues that prevent us from running this now -- however, there are still the second layer single-expert and concatenated-experts baselines. 'It’s not clear whether the claimed computation reduction is true' In this work, we use the all-experts mixture and have no computational reductions yet. We feel the fact that the model factorizes is a promising result in this direction, however. This was explained in the discussion, but we will be more explicit about this in the introduction as well. 'The concatenation trick improved the result on the MNIST.' This concatenation was actually intended as a baseline target that the mixture should not be able to beat, since it concatenates the experts' outputs instead of superimposing them (this also increases the number of parameters in the final softmax layer). We demonstrate that the DMoE falls in between this and the single-expert baseline -- it is best to be as close as possible to the concatenated-experts bound. This is explained at the bottom of page 3. 'The models on row 2 and 3 are identical in Table 2 but the results are different' Thanks for pointing this out; these used two different-sized gating networks at the second mixture layer (50 and 20 hidden units). We now include all the gating network sizes in these tables.
eOP7egJ1wveRW
Learning Factored Representations in a Deep Mixture of Experts
[ "David Eigen", "Marc'Aurelio Ranzato", "Ilya Sutskever" ]
Mixtures of Experts combine the outputs of several 'expert' networks, each of which specializes in a different part of the input space. This is achieved by training a 'gating' network that maps each input to a distribution over the experts. Such models show promise for building larger networks that are still cheap to compute at test time, and more parallelizable at training time. In this this work, we extend the Mixture of Experts to a stacked model, the Deep Mixture of Experts, with multiple sets of gating and experts. This exponentially increases the number of effective experts by associating each input with a combination of experts at each layer, yet maintains a modest model size. On a randomly translated version of the MNIST dataset, we find that the Deep Mixture of Experts automatically learns to develop location-dependent ('where') experts at the first layer, and class-specific ('what') experts at the second layer. In addition, we see that the different combinations are in use when the model is applied to a dataset of speech monophones. These demonstrate effective use of all expert combinations.
[ "experts", "deep mixture", "factored representations", "input", "experts mixtures", "outputs", "several", "networks", "different part", "input space" ]
https://openreview.net/pdf?id=eOP7egJ1wveRW
https://openreview.net/forum?id=eOP7egJ1wveRW
T29y23Xay3UVQ
review
1,391,016,120,000
eOP7egJ1wveRW
[ "everyone" ]
[ "anonymous reviewer c87d" ]
ICLR.cc/2014/workshop
2014
title: review of Learning Factored Representations in a Deep Mixture of Experts review: The paper extends the concept of mixtures of experts to multiple layers of experts. Well, at least in theory - in practice the authors stopped their experiments at only two such layers - which somewhat invalidates the use of the buzzy word 'deep' in the title - do more (than two) layers of mixtures still help? The key idea is to collaboratively optimise different sub-networks representing either experts or gating networks. The authors also propose 'the trick' to effectively learn the mixing networks by preventing too rapid a selection of dominant experts at the beginning of the training stage. It's hard to deduce whether the presented idea gives a real advantage over, for example, usual one- or two-hidden-layer feed-forward networks with the same total number of parameters. Perhaps I am also missing something important here -- but was there any good reason for using Jittered MNIST instead of MNIST itself? In the end both are just toy benchmarks, while the latter gives you the ability to cite and compare your work to many other reported results. If you did that, not doing some basic baselines yourself would be OK. I have similar comments about the monophone voice classification. On top of what I have already written for MNIST, I do not see the need to use a simplified proprietary database. It would be better to do the experiments on the TIMIT benchmark and then cite other works that report frame accuracy (where a single frame is a monophone), so the reader could get a somewhat wider picture of how your work fits into the broader landscape. Anyway, the idea is sufficiently novel and interesting and I am in favour of acceptance. Perhaps the authors could at least improve the MNIST experimental aspect.
eOP7egJ1wveRW
Learning Factored Representations in a Deep Mixture of Experts
[ "David Eigen", "Marc'Aurelio Ranzato", "Ilya Sutskever" ]
Mixtures of Experts combine the outputs of several 'expert' networks, each of which specializes in a different part of the input space. This is achieved by training a 'gating' network that maps each input to a distribution over the experts. Such models show promise for building larger networks that are still cheap to compute at test time, and more parallelizable at training time. In this this work, we extend the Mixture of Experts to a stacked model, the Deep Mixture of Experts, with multiple sets of gating and experts. This exponentially increases the number of effective experts by associating each input with a combination of experts at each layer, yet maintains a modest model size. On a randomly translated version of the MNIST dataset, we find that the Deep Mixture of Experts automatically learns to develop location-dependent ('where') experts at the first layer, and class-specific ('what') experts at the second layer. In addition, we see that the different combinations are in use when the model is applied to a dataset of speech monophones. These demonstrate effective use of all expert combinations.
[ "experts", "deep mixture", "factored representations", "input", "experts mixtures", "outputs", "several", "networks", "different part", "input space" ]
https://openreview.net/pdf?id=eOP7egJ1wveRW
https://openreview.net/forum?id=eOP7egJ1wveRW
xxuVPAmBVc4BE
comment
1,392,877,980,000
T29y23Xay3UVQ
[ "everyone" ]
[ "David Eigen" ]
ICLR.cc/2014/workshop
2014
reply: Thank you for your comments. In response to your questions: 'does more (than two) layers of mixtures still help' We did not try more than two layers yet. 'reason for using the Jittered MNIST instead of the MNIST itself' Jittering places digits at different spatial locations, which the first layer learns to factor out. By jittering the dataset ourselves, we can explicitly measure this effect, as shown in Fig 2.
eOP7egJ1wveRW
Learning Factored Representations in a Deep Mixture of Experts
[ "David Eigen", "Marc'Aurelio Ranzato", "Ilya Sutskever" ]
Mixtures of Experts combine the outputs of several 'expert' networks, each of which specializes in a different part of the input space. This is achieved by training a 'gating' network that maps each input to a distribution over the experts. Such models show promise for building larger networks that are still cheap to compute at test time, and more parallelizable at training time. In this work, we extend the Mixture of Experts to a stacked model, the Deep Mixture of Experts, with multiple sets of gating and experts. This exponentially increases the number of effective experts by associating each input with a combination of experts at each layer, yet maintains a modest model size. On a randomly translated version of the MNIST dataset, we find that the Deep Mixture of Experts automatically learns to develop location-dependent ('where') experts at the first layer, and class-specific ('what') experts at the second layer. In addition, we see that the different combinations are in use when the model is applied to a dataset of speech monophones. These demonstrate effective use of all expert combinations.
[ "experts", "deep mixture", "factored representations", "input", "experts mixtures", "outputs", "several", "networks", "different part", "input space" ]
https://openreview.net/pdf?id=eOP7egJ1wveRW
https://openreview.net/forum?id=eOP7egJ1wveRW
ccXQi_g3QhiMR
comment
1,392,879,180,000
OOxLKAd6LBO_C
[ "everyone" ]
[ "David Eigen" ]
ICLR.cc/2014/workshop
2014
reply: Thanks for your comments. Tables 1 and 2 have comparisons for 1-layer MoE on the last lines of each table. Re: scalability: We are currently using an all-experts mixture in this work, so haven't realized any computational gains yet. However, the fact that the DMoE factorizes is an interesting result that we think is a promising step towards partitioning these networks efficiently. For training curves, the paper is somewhat packed as it is, and already twice the recommended length for workshop submissions, so it seems infeasible to include these. The reported results are on the final train/test split using fixed numbers of epochs validated beforehand.
ssDPnHvkedao6
Learning Semantic Script Knowledge with Event Embeddings
[ "Ashutosh Modi", "ivan titov" ]
Induction of common sense knowledge about prototypical sequences of events has recently received much attention. Instead of inducing this knowledge in the form of graphs, as in much of the previous work, in our method, distributed representations of event realizations are computed based on distributed representations of predicates and their arguments, and then these representations are used to predict prototypical event orderings. The parameters of the compositional process for computing the event representations and the ranking component of the model are jointly estimated from unlabeled texts. We show that this approach results in a substantial boost in ordering performance with respect to previous methods.
[ "semantic script knowledge", "representations", "event embeddings", "event embeddings induction", "common sense knowledge", "prototypical sequences", "events", "much attention", "knowledge", "form" ]
https://openreview.net/pdf?id=ssDPnHvkedao6
https://openreview.net/forum?id=ssDPnHvkedao6
CdfIWqqXIkdT-
comment
1,392,063,900,000
vvxMv-uZQWDNr
[ "everyone" ]
[ "Ashutosh Modi" ]
ICLR.cc/2014/workshop
2014
reply: -- Why call it 'unlabeled'?: We used unlabeled in the sense that we used texts without any kind of semantic annotation on top of them. But we agree that this is confusing, especially given that the texts were written by Amazon turkers specifically for this task. We will edit the paper accordingly. -- Representing more complex sentences?: We will clarify this point in the paper. The example 'fill water in coffee maker' contains 2 phrases as arguments ('water' and 'in coffee maker'); we use only their 'lexical' heads (i.e. 'water' and 'maker'). The embeddings of these two words and of the predicate ('fill') are then used as the input to the hidden layer (see Figure 2). The same procedure is used for predicates with more than 2 arguments (just more arguments are used as inputs to the hidden layer). In other words, we use a bag-of-arguments model. -- Too much related work: Given that there has been much work on this and related tasks in NLP, we believe that we should explain how our approach (and the general representation learning framework) is different. We would prefer not to shorten this section.
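To make the bag-of-arguments composition above concrete, here is a minimal NumPy sketch in which the predicate embedding and the embeddings of the argument heads feed a single hidden layer that produces the event embedding. The dimensions, the tanh non-linearity, the separate projection matrices, and the summing of argument embeddings are illustrative assumptions, not the exact parameterisation from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
emb_dim, hid_dim = 50, 30
vocab = {"fill": 0, "water": 1, "maker": 2}
E = rng.normal(scale=0.1, size=(len(vocab), emb_dim))   # word embeddings (e.g. initialised from SENNA)

# Separate projections for the predicate and for the bag of argument heads;
# summing the argument embeddings is one possible "bag" choice, assumed here.
W_pred = rng.normal(scale=0.1, size=(emb_dim, hid_dim))
W_arg = rng.normal(scale=0.1, size=(emb_dim, hid_dim))
b = np.zeros(hid_dim)

def event_embedding(predicate, argument_heads):
    pred_vec = E[vocab[predicate]]
    arg_vec = sum(E[vocab[a]] for a in argument_heads)   # order-insensitive bag of arguments
    return np.tanh(pred_vec @ W_pred + arg_vec @ W_arg + b)

# "fill water in coffee maker" -> predicate "fill", argument heads "water", "maker"
e = event_embedding("fill", ["water", "maker"])
print(e.shape)   # (30,)
```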
ssDPnHvkedao6
Learning Semantic Script Knowledge with Event Embeddings
[ "Ashutosh Modi", "ivan titov" ]
Induction of common sense knowledge about prototypical sequences of events has recently received much attention. Instead of inducing this knowledge in the form of graphs, as in much of the previous work, in our method, distributed representations of event realizations are computed based on distributed representations of predicates and their arguments, and then these representations are used to predict prototypical event orderings. The parameters of the compositional process for computing the event representations and the ranking component of the model are jointly estimated from unlabeled texts. We show that this approach results in a substantial boost in ordering performance with respect to previous methods.
[ "semantic script knowledge", "representations", "event embeddings", "event embeddings induction", "common sense knowledge", "prototypical sequences", "events", "much attention", "knowledge", "form" ]
https://openreview.net/pdf?id=ssDPnHvkedao6
https://openreview.net/forum?id=ssDPnHvkedao6
vvxMv-uZQWDNr
review
1,391,516,760,000
ssDPnHvkedao6
[ "everyone" ]
[ "anonymous reviewer b099" ]
ICLR.cc/2014/workshop
2014
title: review of Learning Semantic Script Knowledge with Event Embeddings review: The authors propose a model that takes a set of events (written as English text) as input, and outputs the temporal ordering of those events. As opposed to a previous DAG-based method (also used as a baseline here), in this work words are represented as vectors (initialized with Collobert's SENNA embeddings) and are input into a two-layer neural net whose output is also a vector embedding. The output is then taken as input to an online ranking model (PRank) and the whole thing (including the word vectors) is trained using backprop. A dataset containing short sequences of events (e.g. the process of making coffee) gathered for previous work using MTurk is used for training and testing. The proposed embedding method shows a substantial improvement over the DAG baseline. This is interesting work, I thought the execution was good, and the results are impressive. I only have a few suggestions/questions: first, why, in the abstract and elsewhere, do you claim to be using unlabeled data? The data is labeled by order of events (by MTurkers), is it not? I suspect that you mean that no further labeling was done, but this is confusing. Second, your model (Fig. 1) shows one predicate (i.e. verb) and two arguments (i.e. nouns), but some of the examples from the ESD data are more complex (e.g. 'fill water in coffee maker'). How are these more complex phrases mapped to your model? Finally, you use a lot of space on previous work; I think that the paper would be improved by adding more details on your method, and shortening the previous work sections (1 and 2.1) by better focusing them. A minor issue: at the end of Section 1, some unnecessary extra space has been inserted.
ssDPnHvkedao6
Learning Semantic Script Knowledge with Event Embeddings
[ "Ashutosh Modi", "ivan titov" ]
Induction of common sense knowledge about prototypical sequences of events has recently received much attention. Instead of inducing this knowledge in the form of graphs, as in much of the previous work, in our method, distributed representations of event realizations are computed based on distributed representations of predicates and their arguments, and then these representations are used to predict prototypical event orderings. The parameters of the compositional process for computing the event representations and the ranking component of the model are jointly estimated from unlabeled texts. We show that this approach results in a substantial boost in ordering performance with respect to previous methods.
[ "semantic script knowledge", "representations", "event embeddings", "event embeddings induction", "common sense knowledge", "prototypical sequences", "events", "much attention", "knowledge", "form" ]
https://openreview.net/pdf?id=ssDPnHvkedao6
https://openreview.net/forum?id=ssDPnHvkedao6
IhlvUSBbaVUbQ
review
1,391,787,960,000
ssDPnHvkedao6
[ "everyone" ]
[ "anonymous reviewer 60ec" ]
ICLR.cc/2014/workshop
2014
title: review of Learning Semantic Script Knowledge with Event Embeddings review: This paper investigates a model which aims at predicting the order of events; each event is an English sentence. While previous methods relied on a graph representation to infer the right order, the proposed model is made of two stages. The first stage uses a continuous representation of a verb frame, where the predicate and its arguments are represented by their word embeddings. A neural network is used to derive this continuous representation in order to capture the compositionality within the verb frame. The second stage uses a large-margin extension of PRank. The learning scheme is very interesting: the error made by the ranker is used to update the ranker parameters, but is also back-propagated to update the NN parameters. This paper is well written and describes a nice idea to solve a difficult problem. The experimental setup is convincing (including the description of the task and how the learning resources were built). I only have a few suggestions/questions. For a conference that is focused on representation learning, it could be interesting to discuss whether the word embeddings provided by SENNA need to be updated. For instance, the authors could compare their performance to a system where the initial word embeddings are fixed. Moreover, the evaluation metric is F1, but how is the objective function related to this metric? Maybe a footnote could say a few words about that, and I'm curious to see how the objective function evolves during training. The ranking error function is quite similar to metrics used in MT for reordering evaluation (see for instance the work of Alexandra Birch in 2009).
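As a rough illustration of the ranking stage discussed in this review, the sketch below scores each event embedding with a linear ranker and applies a generic pairwise large-margin ordering penalty. This conveys only the intuition; the paper's large-margin extension of PRank (with learned thresholds and its specific update rule) is not reproduced here, and all numbers are toy values.

```python
import numpy as np

def pairwise_margin_loss(scores, margin=1.0):
    """Generic large-margin ordering objective on a sequence of event scores:
    an event that should come earlier must score lower than every later event
    by at least `margin` (a simplification of a PRank-style ranker)."""
    loss = 0.0
    n = len(scores)
    for i in range(n):
        for j in range(i + 1, n):
            loss += max(0.0, margin - (scores[j] - scores[i]))
    return loss

# Hypothetical scores produced by a linear ranker on top of event embeddings.
w = np.array([0.2, -0.5, 1.0])
events = np.array([[0.1, 0.3, -0.2],      # e.g. "add water"
                   [0.4, 0.1, 0.5],       # e.g. "turn on machine"
                   [0.0, -0.3, 1.2]])     # e.g. "pour coffee"
scores = events @ w
print(pairwise_margin_loss(scores))
```

In a joint training setup, the gradient of such a loss would flow both into the ranker weights w and back through the event embeddings into the composition network.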
ssDPnHvkedao6
Learning Semantic Script Knowledge with Event Embeddings
[ "Ashutosh Modi", "ivan titov" ]
Induction of common sense knowledge about prototypical sequences of events has recently received much attention. Instead of inducing this knowledge in the form of graphs, as in much of the previous work, in our method, distributed representations of event realizations are computed based on distributed representations of predicates and their arguments, and then these representations are used to predict prototypical event orderings. The parameters of the compositional process for computing the event representations and the ranking component of the model are jointly estimated from unlabeled texts. We show that this approach results in a substantial boost in ordering performance with respect to previous methods.
[ "semantic script knowledge", "representations", "event embeddings", "event embeddings induction", "common sense knowledge", "prototypical sequences", "events", "much attention", "knowledge", "form" ]
https://openreview.net/pdf?id=ssDPnHvkedao6
https://openreview.net/forum?id=ssDPnHvkedao6
OV0_ZIkXpHOXu
review
1,392,734,280,000
ssDPnHvkedao6
[ "everyone" ]
[ "Ashutosh Modi" ]
ICLR.cc/2014/workshop
2014
review: We have submitted a new version of the paper. We made the changes we promised above.
ssDPnHvkedao6
Learning Semantic Script Knowledge with Event Embeddings
[ "Ashutosh Modi", "ivan titov" ]
Induction of common sense knowledge about prototypical sequences of events has recently received much attention. Instead of inducing this knowledge in the form of graphs, as in much of the previous work, in our method, distributed representations of event realizations are computed based on distributed representations of predicates and their arguments, and then these representations are used to predict prototypical event orderings. The parameters of the compositional process for computing the event representations and the ranking component of the model are jointly estimated from unlabeled texts. We show that this approach results in a substantial boost in ordering performance with respect to previous methods.
[ "semantic script knowledge", "representations", "event embeddings", "event embeddings induction", "common sense knowledge", "prototypical sequences", "events", "much attention", "knowledge", "form" ]
https://openreview.net/pdf?id=ssDPnHvkedao6
https://openreview.net/forum?id=ssDPnHvkedao6
UiBqvVVsPEv5t
comment
1,392,063,900,000
IhlvUSBbaVUbQ
[ "everyone" ]
[ "Ashutosh Modi" ]
ICLR.cc/2014/workshop
2014
reply: -- Keeping embeddings fixed to the ones produced by SENNA: We have just tried doing this, and obtained about the same results (slightly better: 84.3 average F1 vs. 84.1 F1 in the paper). In theory, keeping them fixed may not be a good idea, as learning in the (mostly) language modeling context (as in SENNA) tends to assign similar representations to antonyms/opposites (e.g., open and close). And the opposites tend to appear at different positions in event sequences. However, the fact that the results are similar may suggest that our dataset is not large enough to learn meaningful refinements. Perhaps using SENNA embeddings to define an informative prior on the representation would be a better idea, but we will leave this for future work. We will add a footnote mentioning the above experiment. -- F1 vs. accuracy: The binary classification problem is fairly balanced, so we would not expect much of a difference between accuracy (which we essentially optimize) and F1; we chose to use the same metric in evaluation as considered in the previous work. -- We are not familiar with the reordering metric used in Birch et al. (2009), thanks for the pointer.
ssDPnHvkedao6
Learning Semantic Script Knowledge with Event Embeddings
[ "Ashutosh Modi", "ivan titov" ]
Induction of common sense knowledge about prototypical sequences of events has recently received much attention. Instead of inducing this knowledge in the form of graphs, as in much of the previous work, in our method, distributed representations of event realizations are computed based on distributed representations of predicates and their arguments, and then these representations are used to predict prototypical event orderings. The parameters of the compositional process for computing the event representations and the ranking component of the model are jointly estimated from unlabeled texts. We show that this approach results in a substantial boost in ordering performance with respect to previous methods.
[ "semantic script knowledge", "representations", "event embeddings", "event embeddings induction", "common sense knowledge", "prototypical sequences", "events", "much attention", "knowledge", "form" ]
https://openreview.net/pdf?id=ssDPnHvkedao6
https://openreview.net/forum?id=ssDPnHvkedao6
27T0Aaudf37di
review
1,392,069,840,000
ssDPnHvkedao6
[ "everyone" ]
[ "Ashutosh Modi" ]
ICLR.cc/2014/workshop
2014
review: We thank both reviewers for the comments. See our feedback above. We will upload the revised version by the end of the week.
ssDPnHvkedao6
Learning Semantic Script Knowledge with Event Embeddings
[ "Ashutosh Modi", "ivan titov" ]
Induction of common sense knowledge about prototypical sequences of events has recently received much attention. Instead of inducing this knowledge in the form of graphs, as in much of the previous work, in our method, distributed representations of event realizations are computed based on distributed representations of predicates and their arguments, and then these representations are used to predict prototypical event orderings. The parameters of the compositional process for computing the event representations and the ranking component of the model are jointly estimated from unlabeled texts. We show that this approach results in a substantial boost in ordering performance with respect to previous methods.
[ "semantic script knowledge", "representations", "event embeddings", "event embeddings induction", "common sense knowledge", "prototypical sequences", "events", "much attention", "knowledge", "form" ]
https://openreview.net/pdf?id=ssDPnHvkedao6
https://openreview.net/forum?id=ssDPnHvkedao6
_9Q-_PbJ6d95N
review
1,390,323,900,000
ssDPnHvkedao6
[ "everyone" ]
[ "Ashutosh Modi" ]
ICLR.cc/2014/workshop
2014
review: Corrected a typo in the paper and updated version is available at : http://arxiv.org/abs/1312.5198v2
6dukdvBcxn6cR
Learning States Representations in POMDP
[ "Gabriella Contardo", "Ludovic Denoyer", "Thierry Artieres", "patrick gallinari" ]
We propose to deal with sequential processes where only partial observations are available by learning a latent representation space on which policies may be accurately learned.
[ "states representations", "pomdp", "sequential processes", "partial observations", "available", "latent representation space", "policies" ]
https://openreview.net/pdf?id=6dukdvBcxn6cR
https://openreview.net/forum?id=6dukdvBcxn6cR
GGBm_ztp7nyT5
review
1,391,443,260,000
6dukdvBcxn6cR
[ "everyone" ]
[ "anonymous reviewer 2349" ]
ICLR.cc/2014/workshop
2014
title: review of Learning States Representations in POMDP review: Learning States Representations in POMDP Gabriella Contardo, Ludovic Denoyer, Thierry Artieres, Patrick Gallinari Summary: The authors present a model that learns representations of sequential inputs on random trajectories through the state space, then feeds those into a reinforcement learner, to deal with partially observable environments. They apply this to a POMDP mountain car problem, where the velocity of the car is not visible but has to be inferred from successive observations. Comments: Previous work has solved more difficult versions of the POMDP mountain car problem, where the input was raw vision as opposed to the very low-dimensional state space of the authors. Please discuss in the context of the present approach: G. Cuccu, M. Luciw, J. Schmidhuber, F. Gomez. Intrinsically Motivated Evolutionary Search for Vision-Based Reinforcement Learning. In Proc. Joint IEEE International Conference on Development and Learning (ICDL) and on Epigenetic Robotics (ICDL-EpiRob 2011), Frankfurt, 2011. From the abstract: 'The method is successfully demonstrated on a vision-based version of the well-known mountain car benchmark, where controllers receive only single high-dimensional visual images of the environment, from a third-person perspective, instead of the standard two-dimensional state vector which includes information about velocity.' Sec 4: 'For example (Gisslen et al., 2011) proposed to learn representations with an auto-associative model with a fixed-size history' This is not accurate - the representation of (Gisslen et al., 2011) had fixed size, but in principle the history could have arbitrary depth, because they used a RAAM like Pollack's (NIPS 1989) as unsupervised sequence compressor: J. B. Pollack. Implications of Recursive Distributed Representations. Advances in Neural Information Processing Systems I, NIPS, 527-536, 1989. Of course, RNNs for POMDP RL have been around since 1990 - please discuss differences to the approach of the authors: J. Schmidhuber. An on-line algorithm for dynamic reinforcement learning and planning in reactive environments. In Proc. IEEE/INNS International Joint Conference on Neural Networks, San Diego, volume 2, pages 253-258, 1990. One should probably also discuss recent results with huge RNNs for vision-based POMDP RL: J. Koutnik, G. Cuccu, J. Schmidhuber, F. Gomez. Evolving Large-Scale Neural Networks for Vision-Based Reinforcement Learning. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), Amsterdam, 2013. General recommendation: It is not quite clear to this reviewer how this work goes beyond the previous work mentioned above. At the very least, the authors should make the differences very clear.
6dukdvBcxn6cR
Learning States Representations in POMDP
[ "Gabriella Contardo", "Ludovic Denoyer", "Thierry Artieres", "patrick gallinari" ]
We propose to deal with sequential processes where only partial observations are available by learning a latent representation space on which policies may be accurately learned.
[ "states representations", "pomdp", "sequential processes", "partial observations", "available", "latent representation space", "policies" ]
https://openreview.net/pdf?id=6dukdvBcxn6cR
https://openreview.net/forum?id=6dukdvBcxn6cR
xheJhouLQlYLp
review
1,391,638,440,000
6dukdvBcxn6cR
[ "everyone" ]
[ "Ludovic Denoyer" ]
ICLR.cc/2014/workshop
2014
review: We first thank the reviewer for the comment. Concerning the fact that more difficult versions of mountain car have already been solved, we fully agree. In our paper, we are using a very simple version of mountain car in order to demonstrate the ability of our approach to extract hidden information from observations and from the dynamics of the system. More difficult versions of mountain car and other complex reinforcement learning tasks are under investigation, and we plan to present these experiments in a full paper in the coming months. Since the paper size was restricted to 3 pages in the call for papers, we focused on the relation of our model to the closest/latest models in the literature, and thus we agree that this submission lacks some important references. We will quickly submit a longer version of the paper discussing the differences between our approach and other existing methods that are not described in the current version. The three papers cited by the reviewer tackle a control problem where the reward function is known: the recurrent neural networks are used as controllers for the task to solve, and are thus able to extract a hidden representation that depends on the task. Our approach is unsupervised and learns representations using randomly chosen trajectories, without using the reward function. In this regard, our work is closer to the approaches of (Gisslen et al., 2011) and (Duell et al., 2012), which are also based on unsupervised learning. In comparison to these approaches, the originality is to propose a transductive model that directly learns the model of the world in the representation space, allowing us to compute simulations of the future of the system even if no information is observed (we note that (Schmidhuber, 1990) could be adapted to do so in RL applications). This also allows different ways to infer representations for new observations. We'd like to stress that the representations learned with our model could also be used for tasks other than RL. == 'This is not accurate - the representation of (Gisslen et al., 2011) had fixed size, but in principle the history could have arbitrary depth, because they used a RAAM like Pollack's (NIPS 1989) as unsupervised sequence compressor: J. B. Pollack. Implications of Recursive Distributed Representations. Advances in Neural Information Processing Systems I, NIPS, 527-536, 1989.' Yes, thank you for pointing out the lack of clarity of this sentence; it will be modified in the next version.
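To illustrate what "learning the model of the world directly in the representation space" can mean in practice, here is a generic latent-dynamics sketch: observations are encoded into a latent state, and a learned transition lets the future be rolled out without further observations. None of the sizes, the linear/tanh forms, or a training objective are taken from the paper; this is a hypothetical illustration of the general idea only.

```python
import numpy as np

rng = np.random.default_rng(2)
obs_dim, act_dim, z_dim = 1, 1, 8     # e.g. mountain car: only position is observed, velocity is hidden

# Generic latent-dynamics parameters (illustrative, untrained random values).
W_enc = rng.normal(scale=0.1, size=(obs_dim, z_dim))
W_z = rng.normal(scale=0.1, size=(z_dim, z_dim))
W_a = rng.normal(scale=0.1, size=(act_dim, z_dim))

def encode(obs):
    """Map an observation to a point in the latent representation space."""
    return np.tanh(obs @ W_enc)

def transition(z, action):
    """Predict the next latent state from the current one and an action."""
    return np.tanh(z @ W_z + action @ W_a)

def rollout(obs0, actions):
    """Simulate the latent future from one observation and a plan of actions,
    without requiring any further observations."""
    z = encode(obs0)
    trajectory = [z]
    for a in actions:
        z = transition(z, a)
        trajectory.append(z)
    return np.stack(trajectory)

traj = rollout(np.array([0.3]), [np.array([1.0]), np.array([-1.0])])
print(traj.shape)   # (3, 8)
```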
Fav_FXoOhRFOQ
Do Deep Nets Really Need to be Deep?
[ "Jimmy Lei Ba", "Rich Caurana" ]
Currently, deep neural networks are the state of the art on problems such as speech recognition and computer vision. In this extended abstract, we show that shallow feed-forward networks can learn the complex functions previously learned by deep nets and achieve accuracies previously only achievable with deep models. Moreover, the shallow neural nets can learn these deep functions using a total number of parameters similar to the original deep model. We evaluate our method on TIMIT phoneme recognition task and are able to train shallow fully-connected nets that perform similarly to complex, well-engineered, deep convolutional architectures. Our success in training shallow neural nets to mimic deeper models suggests that there probably exist better algorithms for training shallow feed-forward nets than those currently available.
[ "deep nets", "shallow", "deep", "shallow neural nets", "nets", "deep neural networks", "state", "art", "problems", "speech recognition" ]
https://openreview.net/pdf?id=Fav_FXoOhRFOQ
https://openreview.net/forum?id=Fav_FXoOhRFOQ
zf2o4C95ov4cF
review
1,392,772,020,000
Fav_FXoOhRFOQ
[ "everyone" ]
[ "Jimmy Ba" ]
ICLR.cc/2014/workshop
2014
review: The reviewer says: “They conclude that current learning algorithms are a better fit for deeper architectures and that shallow models can benefit from improved optimization techniques.” We are not really sure of this, but it is a possibility and we are trying to do the experiments necessary to answer this question. Thanks for pointing us to related work on re-parameterizing the weight matrices. We added these to the extended abstract. What we propose is somewhat different from this prior work. Specifically, we apply weight factorization during training (as opposed to after training) to speed convergence of the mimic model --- the weights of the linear layer and the weights in the non-linear hidden layer are trained at the same time with backprop. The SNN-MIMIC models in Table 1 use 250 linear units in the first layer. We updated the paper to include this information. On page 2, the features are logmel: Fourier-based filter banks with 40 coefficients distributed on a mel-scale. We have modified the paper to clarify this. The ECNN on page 2 is an ensemble of multiple CNNs. Both SNN-MIMIC models (8k and 400k) are trained to mimic the ECNN. We mimic an ensemble of CNNs because we don’t have any unlabeled data for TIMIT and thus must use the modest-sized train set for compression. With only 1.1M points available for compression, we observe that the student MIMIC model is usually 2-3% less accurate than the teacher model. We also observe, however, that whenever we make the teacher model more accurate, the student MIMIC model gains a similar amount of accuracy as well (suggesting that the fixed gap between the deep teacher and shallow MIMIC models is due to a lack of unlabeled data, not a limited representational power in the shallow models). Because our goal is to train a shallow model of high accuracy, we needed to use a teacher model of maximum accuracy to help overcome this gap between the teacher and mimic net. If we had a large unlabeled data set for TIMIT this would not be necessary. The ensemble of CNNs is significantly more accurate than a single CNN, but we have not yet published that result. We modified the paper to make all of this clearer.
Fav_FXoOhRFOQ
Do Deep Nets Really Need to be Deep?
[ "Jimmy Lei Ba", "Rich Caurana" ]
Currently, deep neural networks are the state of the art on problems such as speech recognition and computer vision. In this extended abstract, we show that shallow feed-forward networks can learn the complex functions previously learned by deep nets and achieve accuracies previously only achievable with deep models. Moreover, the shallow neural nets can learn these deep functions using a total number of parameters similar to the original deep model. We evaluate our method on TIMIT phoneme recognition task and are able to train shallow fully-connected nets that perform similarly to complex, well-engineered, deep convolutional architectures. Our success in training shallow neural nets to mimic deeper models suggests that there probably exist better algorithms for training shallow feed-forward nets than those currently available.
[ "deep nets", "shallow", "deep", "shallow neural nets", "nets", "deep neural networks", "state", "art", "problems", "speech recognition" ]
https://openreview.net/pdf?id=Fav_FXoOhRFOQ
https://openreview.net/forum?id=Fav_FXoOhRFOQ
o39CmeYuHU3uB
comment
1,392,772,140,000
__fQe8rzQg_zM
[ "everyone" ]
[ "Jimmy Ba" ]
ICLR.cc/2014/workshop
2014
reply: The reviewer says: “They conclude that current learning algorithms are a better fit for deeper architectures and that shallow models can benefit from improved optimization techniques.” We are not really sure of this, but it is a possibility and we are trying to do the experiments necessary to answer this question. Thanks for pointing us to related work on re-parameterizing the weight matrices. We added these to the extended abstract. What we propose is somewhat different from this prior work. Specifically, we apply weight factorization during training (as opposed to after training) to speed convergence of the mimic model --- the weights of the linear layer and the weights in the non-linear hidden layer are trained at the same time with backprop. The SNN-MIMIC models in Table 1 use 250 linear units in the first layer. We updated the paper to include this information. On page 2, the features are logmel: Fourier-based filter banks with 40 coefficients distributed on a mel-scale. We have modified the paper to clarify this. The ECNN on page 2 is an ensemble of multiple CNNs. Both SNN-MIMIC models (8k and 400k) are trained to mimic the ECNN. We mimic an ensemble of CNNs because we don’t have any unlabeled data for TIMIT and thus must use the modest-sized train set for compression. With only 1.1M points available for compression, we observe that the student MIMIC model is usually 2-3% less accurate than the teacher model. We also observe, however, that whenever we make the teacher model more accurate, the student MIMIC model gains a similar amount of accuracy as well (suggesting that the fixed gap between the deep teacher and shallow MIMIC models is due to a lack of unlabeled data, not a limited representational power in the shallow models). Because our goal is to train a shallow model of high accuracy, we needed to use a teacher model of maximum accuracy to help overcome this gap between the teacher and mimic net. If we had a large unlabeled data set for TIMIT this would not be necessary. The ensemble of CNNs is significantly more accurate than a single CNN, but we have not yet published that result. We modified the paper to make all of this clearer.
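A minimal sketch of the factorised shallow mimic net described in this reply is given below: a linear bottleneck of k units feeds the wide non-linear hidden layer, and both factors would be trained jointly with backprop. Only k = 250 is taken from the reply; the other layer sizes, the ReLU non-linearity, and the initialisation are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(3)
d_in, k, d_hid, d_out = 1200, 250, 8000, 183   # hypothetical TIMIT-like sizes; only k = 250 is from the reply

# Factorised input weights W ~ U V: a linear layer of k units followed by the
# non-linear hidden layer.  Both factors are trained together with backprop;
# the factorisation replaces d_in*d_hid parameters with k*(d_in + d_hid).
U = rng.normal(scale=0.01, size=(d_in, k))
V = rng.normal(scale=0.01, size=(k, d_hid))
W_out = rng.normal(scale=0.01, size=(d_hid, d_out))

def shallow_mimic_forward(x):
    h = np.maximum(0.0, (x @ U) @ V)    # ReLU is an assumption; any non-linearity could be used
    return h @ W_out                    # unnormalised logits to be matched against the teacher

x = rng.normal(size=(16, d_in))
logits = shallow_mimic_forward(x)
print(logits.shape, "factorised params:", U.size + V.size, "vs unfactorised:", d_in * d_hid)
```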
Fav_FXoOhRFOQ
Do Deep Nets Really Need to be Deep?
[ "Jimmy Lei Ba", "Rich Caurana" ]
Currently, deep neural networks are the state of the art on problems such as speech recognition and computer vision. In this extended abstract, we show that shallow feed-forward networks can learn the complex functions previously learned by deep nets and achieve accuracies previously only achievable with deep models. Moreover, the shallow neural nets can learn these deep functions using a total number of parameters similar to the original deep model. We evaluate our method on TIMIT phoneme recognition task and are able to train shallow fully-connected nets that perform similarly to complex, well-engineered, deep convolutional architectures. Our success in training shallow neural nets to mimic deeper models suggests that there probably exist better algorithms for training shallow feed-forward nets than those currently available.
[ "deep nets", "shallow", "deep", "shallow neural nets", "nets", "deep neural networks", "state", "art", "problems", "speech recognition" ]
https://openreview.net/pdf?id=Fav_FXoOhRFOQ
https://openreview.net/forum?id=Fav_FXoOhRFOQ
__fQe8rzQg_zM
review
1,391,654,520,000
Fav_FXoOhRFOQ
[ "everyone" ]
[ "anonymous reviewer d691" ]
ICLR.cc/2014/workshop
2014
title: review of Do Deep Nets Really Need to be Deep? review: The authors show that a shallow neural net trained to mimic a deep net (regular or convolutional) can achieve the same performance as the deeper, more complex models on the TIMIT speech recognition task. They conclude that current learning algorithms are a better fit for deeper architectures and that shallow models can benefit from improved optimization techniques. The experimental results also show that shallow models are able to represent the same function as DNNs/CNNs. To my knowledge, training an SNN to mimic a DNN/CNN through model compression has not been explored before and the authors seem to be getting good results at least on the simple TIMIT task. It remains to be seen if their technique scales up to large vocabulary tasks such as Switchboard and Broadcast News transcription. This being said, a few critiques come to mind: - The authors discuss factoring the weight matrix between input and hidden units and present it as being a novel idea. They should be aware of the following papers: T. N. Sainath, B. Kingsbury, V. Sindhwani, E. Arisoy and B. Ramabhadran, 'Low-Rank Matrix Factorization for Deep Neural Network Training with High-Dimensional Output Targets,' in Proc. ICASSP, May 2013. Jian Xue, Jinyu Li, Yifan Gong, 'Restructuring of Deep Neural Network Acoustic Models with Singular Value Decomposition', in Proc. Interspeech 2013. - It is unclear whether the SNN-MIMIC models from Table 1 use any factoring of the weight matrix. If yes, what is k? - It is unclear what targets were used to train the SNN-MIMIC models: DNN or CNN? I assume CNN but it would be good to specify. - On page 2 the feature extraction for speech appears to be incomplete. Are the features logmel or MFCCs? In either case, the log operation appears to be missing. - On page 2 you claim that Table 1 shows results for 'ECNN' which is undefined.
Fav_FXoOhRFOQ
Do Deep Nets Really Need to be Deep?
[ "Jimmy Lei Ba", "Rich Caurana" ]
Currently, deep neural networks are the state of the art on problems such as speech recognition and computer vision. In this extended abstract, we show that shallow feed-forward networks can learn the complex functions previously learned by deep nets and achieve accuracies previously only achievable with deep models. Moreover, the shallow neural nets can learn these deep functions using a total number of parameters similar to the original deep model. We evaluate our method on TIMIT phoneme recognition task and are able to train shallow fully-connected nets that perform similarly to complex, well-engineered, deep convolutional architectures. Our success in training shallow neural nets to mimic deeper models suggests that there probably exist better algorithms for training shallow feed-forward nets than those currently available.
[ "deep nets", "shallow", "deep", "shallow neural nets", "nets", "deep neural networks", "state", "art", "problems", "speech recognition" ]
https://openreview.net/pdf?id=Fav_FXoOhRFOQ
https://openreview.net/forum?id=Fav_FXoOhRFOQ
siDqiQjb6wigv
review
1,393,335,480,000
Fav_FXoOhRFOQ
[ "everyone" ]
[ "Jost Tobias Springenberg" ]
ICLR.cc/2014/workshop
2014
review: Hey, cool paper! After reading through it carefully I do, however, have one issue with it. The way you present your results in Table 1 seems a bit misleading to me. At first sight I presumed that the mimic network containing 12M parameters was trained to mimic the DNN of the same size, while the large network with 140M connections was trained to mimic the CNN with 13M parameters (as is somewhat suggested by your comparison, i.e. by them achieving similar performance). However, as you state in your paper, both networks are actually trained to mimic an ensemble of networks whose size and performance are unknown to the reader. In your response to the reviewers you mention that the mimic network always performs 2-3% worse than the ensemble. This, to me, suggests that the ensemble performs considerably better than the best CNN you trained. Assuming my interpretation is correct, the performance of the ensemble should be mentioned in the text and it should be clarified in the table that the mimic networks are trained to mimic this ensemble. Furthermore, assuming a 2 percent gap between the ensemble and the mimic network, it is possible that training e.g. a three-layer network containing the same number of parameters could shorten this gap. That is, one could imagine a deeper mimic network actually performing better than the shallow mimic network (as the latter is not even close to perfectly mimicking the ensemble). I think this should be tested and reported alongside your results (if I read your comments to the reviewers correctly, you have tried, and succeeded, to train deep networks with fewer parameters to mimic larger ones, strongly hinting that this might be a viable strategy for mimicking the ensemble).
Fav_FXoOhRFOQ
Do Deep Nets Really Need to be Deep?
[ "Jimmy Lei Ba", "Rich Caurana" ]
Currently, deep neural networks are the state of the art on problems such as speech recognition and computer vision. In this extended abstract, we show that shallow feed-forward networks can learn the complex functions previously learned by deep nets and achieve accuracies previously only achievable with deep models. Moreover, the shallow neural nets can learn these deep functions using a total number of parameters similar to the original deep model. We evaluate our method on TIMIT phoneme recognition task and are able to train shallow fully-connected nets that perform similarly to complex, well-engineered, deep convolutional architectures. Our success in training shallow neural nets to mimic deeper models suggests that there probably exist better algorithms for training shallow feed-forward nets than those currently available.
[ "deep nets", "shallow", "deep", "shallow neural nets", "nets", "deep neural networks", "state", "art", "problems", "speech recognition" ]
https://openreview.net/pdf?id=Fav_FXoOhRFOQ
https://openreview.net/forum?id=Fav_FXoOhRFOQ
88XABsMFS5BZk
comment
1,389,354,660,000
AH9vZzWqgrHV-
[ "everyone" ]
[ "Jimmy Ba" ]
ICLR.cc/2014/workshop
2014
reply: David, thank you for your comments. We submitted a revised draft on Jan 3 that addressed some of your concerns. We’re sorry you read the earlier, rougher draft. You are correct that we are not able to train a shallow net to mimic the CNN model using a similar number of parameters as the CNN model, and the text has been edited to reflect this. We believe that if we had a large (> 100M) unlabelled data set drawn from the same distribution as TIMIT that we would be able to train a shallow model with less than ~15X as many parameters to mimic the CNN with high fidelity, but are unable to test that hypothesis on TIMIT and are now starting experiments on another problem where we will have access to virtually unlimited unlabelled data. But we agree that the number of parameters in the shallow model will not be as small as the number of parameters in the CNN because the weight sharing of the local receptive fields in the CNN allows it to accomplish more with a small number of weights than can be accomplished with one fully-connected hidden layer. Note that the primary argument in the paper, that it is possible to train a shallow neural net (SNN) to be as accurate as a deeper, fully-connected feedforward net (DNN), does not depend on being able to train an SNN to mimic a CNN with the same number of parameters as the CNN. We view the fact that a large SNN can mimic the CNN without the benefit of the convolutional architecture as an interesting, but secondary issue. Thank you again for your comments. We agree with everything you said.
Fav_FXoOhRFOQ
Do Deep Nets Really Need to be Deep?
[ "Jimmy Lei Ba", "Rich Caurana" ]
Currently, deep neural networks are the state of the art on problems such as speech recognition and computer vision. In this extended abstract, we show that shallow feed-forward networks can learn the complex functions previously learned by deep nets and achieve accuracies previously only achievable with deep models. Moreover, the shallow neural nets can learn these deep functions using a total number of parameters similar to the original deep model. We evaluate our method on TIMIT phoneme recognition task and are able to train shallow fully-connected nets that perform similarly to complex, well-engineered, deep convolutional architectures. Our success in training shallow neural nets to mimic deeper models suggests that there probably exist better algorithms for training shallow feed-forward nets than those currently available.
[ "deep nets", "shallow", "deep", "shallow neural nets", "nets", "deep neural networks", "state", "art", "problems", "speech recognition" ]
https://openreview.net/pdf?id=Fav_FXoOhRFOQ
https://openreview.net/forum?id=Fav_FXoOhRFOQ
XXxADkylAkDl2
comment
1,392,771,960,000
yxJGyrO9Y1LFo
[ "everyone" ]
[ "Jimmy Ba" ]
ICLR.cc/2014/workshop
2014
reply: Thank you for the comments. We completely agree that more results are needed to support the conclusions, and this is why we submitted an extended abstract instead of a full paper. More experiments are underway, but we don't yet have final results to add to the abstract. Preliminary results suggest that on TIMIT the MIMIC models are not as accurate as the teacher models mainly because we do not have enough unlabeled TIMIT data to capture the function of the teacher models, rather than because the MIMIC models have too little capacity or cannot learn a complex function in one layer. Preliminary results also suggest that: 1) the key to making the shallow MIMIC model more accurate is to train it to be more similar to the deep teacher net, and 2) the MIMIC model is better able to learn to mimic the teacher model when trained on logits (the unnormalized log probabilities) than on the softmax outputs from the teacher net. The only reason for including the linear layer between the input and non-linear hidden layer is to make training of the shallow model faster, not to increase accuracy. Experiments suggest that for TIMIT there is little benefit from using more than 250 linear units. We agree with papers such as Seide, Li, and Yu, that shallow nets perform worse than deep nets given the same # of parameters when trained with the current training algorithms. It is possible that, as Yoshua Bengio suggests, deep models provide a better prior than shallow models for complex learning problems. It is also possible that other training algorithms and regularization methods would allow shallow models to work as well. Or it may be a mix of the two. We believe the question of whether models must be deep to achieve extra accuracy is as yet open, and our experiments on TIMIT provide one data point that suggests it *might* be possible to train shallow models that are as accurate as deeper models on these problems. We have tried using some of the MIMIC techniques to improve the accuracy of deep models. With the MIMIC techniques we have been able to train deep models with fewer parameters that are as accurate as deep models with more parameters (i.e., reduce the number of weights and number of layers needed in the deep models), but we have not been able to achieve significant increases in accuracy for the deep models. If compression is done well, the mimic model will be as accurate as the teacher model, but usually not more accurate, because the MIMIC process tries to duplicate the function (I/O behavior) learned by the teacher model in the smaller student model.
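For reference, one common way to set up the "train on logits" objective mentioned in point 2 above is squared-error regression on the teacher's unnormalised logits, sketched below. The exact loss used by the authors is not specified in the abstract, so this should be read as an assumed, illustrative choice.

```python
import numpy as np

def mimic_loss_and_grad(student_logits, teacher_logits):
    """Squared-error regression on the teacher's unnormalised logits
    (one common choice for model compression; assumed here, not confirmed
    by the paper beyond 'trained on logits')."""
    diff = student_logits - teacher_logits
    loss = 0.5 * np.mean(np.sum(diff ** 2, axis=1))
    grad = diff / diff.shape[0]          # gradient of the loss w.r.t. the student logits
    return loss, grad

# Toy check with random "teacher" targets of a hypothetical 183-class output.
rng = np.random.default_rng(4)
student = rng.normal(size=(8, 183))
teacher = rng.normal(size=(8, 183))
loss, grad = mimic_loss_and_grad(student, teacher)
print(loss, grad.shape)
```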