Dataset Viewer

forum_id (string, 9-20 chars) | forum_title (string, 3-179 chars) | forum_authors (sequence, 0-82 items) | forum_abstract (string, 1-3.52k chars) | forum_keywords (sequence, 1-29 items) | forum_decision (string, 22 classes) | forum_pdf_url (string, 39-50 chars) | forum_url (string, 41-52 chars) | venue (string, 46 classes) | year (date, 2013-01-01 to 2025-01-01) | reviews (sequence) |
---|---|---|---|---|---|---|---|---|---|---|
msGKsXQXNiCBk | Learning New Facts From Knowledge Bases With Neural Tensor Networks and
Semantic Word Vectors | [
"Danqi Chen",
"Richard Socher",
"Christopher Manning",
"Andrew Y. Ng"
] | Knowledge bases provide applications with the benefit of easily accessible, systematic relational knowledge but often suffer in practice from their incompleteness and lack of knowledge of new entities and relations. Much work has focused on building or extending them by finding patterns in large unannotated text corpora. In contrast, here we mainly aim to complete a knowledge base by predicting additional true relationships between entities, based on generalizations that can be discerned in the given knowledgebase. We introduce a neural tensor network (NTN) model which predicts new relationship entries that can be added to the database. This model can be improved by initializing entity representations with word vectors learned in an unsupervised fashion from text, and when doing this, existing relations can even be queried for entities that were not present in the database. Our model generalizes and outperforms existing models for this problem, and can classify unseen relationships in WordNet with an accuracy of 82.8%. | [
"new facts",
"knowledge bases",
"neural tensor networks",
"semantic word vectors",
"relations",
"entities",
"model",
"database",
"bases",
"applications"
] | conferencePoster-iclr2013-workshop | https://openreview.net/pdf?id=msGKsXQXNiCBk | https://openreview.net/forum?id=msGKsXQXNiCBk | ICLR.cc/2013/conference | 2013 | {
"note_id": [
"OgesTW8qZ5TWn",
"PnfD3BSBKbnZh",
"yA-tyFEFr2A5u",
"7jyp7wrwSzagb"
],
"note_type": [
"review",
"review",
"review",
"review"
],
"note_created": [
1363419120000,
1362079260000,
1362246000000,
1363419120000
],
"note_signatures": [
[
"Danqi Chen, Richard Socher, Christopher D. Manning, Andrew Y. Ng"
],
[
"anonymous reviewer 75b8"
],
[
"anonymous reviewer 7e51"
],
[
"Danqi Chen, Richard Socher, Christopher D. Manning, Andrew Y. Ng"
]
],
"structured_content_str": [
"{\"review\": \"We thank the reviewers for their comments and agree with most of them.\\n\\n- We've updated our paper on arxiv, and added the important experimental comparison to the model in 'Joint Learning of Words and Meaning Representations for Open-Text Semantic Parsing' (AISTATS 2012). \\n Experimental results show that our model also outperforms this model in terms of ranking & classification.\\n\\n- We didn't report the results on the original data because of the issues of overlap between training and testing set. \\n 80.23% of the examples in the testing set appear exactly in the training set.\\n 99.23% of the examples have e1 and e2 'connected' via some relation in the training set. Some relationships such as 'is similar to' are symmetric. \\n Furthermore, we can reach 92.8% of top 10 accuracy (instead of 76.7% in the original paper) using their model.\\n\\n- The classification task can help us predict whether a relationship is correct or not, thus we report both the results of classification and ranking. \\n\\n- To use the pre-trained word vectors, we ignore the senses of the entities in Wordnet in this paper. \\n\\n- The experiments section is short because we tried to keep the paper's length close to the recommended length. From the ICLR website: 'Papers submitted to this track are ideally 2-3 pages long'.\"}",
"{\"title\": \"review of Learning New Facts From Knowledge Bases With Neural Tensor Networks and\\n Semantic Word Vectors\", \"review\": \"- A brief summary of the paper's contributions, in the context of prior work.\\n\\nThis paper proposes a new energy function (or scoring function) for ranking pairs of entities and their relationship type. The energy function is based on a so-called Neural Tensor Network, which essentially introduces a bilinear term in the computation of the hidden layer input activations of a single hidden layer neural network. A favorable comparison with the energy-function proposed in Bordes et al. 2011 is presented.\\n\\n- An assessment of novelty and quality.\\n\\nThis work follows fairly closely the work of Border et al. 2011, with the main difference being the choice of the energy/scoring function. This is an advantage in terms of the interpretability of the results: this paper clearly demonstrates that the proposed energy function is better, since everything else (the training objective, the evaluation procedure) is the same. This is however a disadvantage in terms of novelty as this makes this work somewhat incremental.\\n\\nBordes et al. 2011 also proposed an improved version of their model, using kernel density estimation, which is not used here. However, I suppose that the proposed model in this paper could also be similarly improved.\\n\\nMore importantly, Bordes and collaborators have more recently looked at another type of energy function, in 'Joint Learning of Words and Meaning Representations for Open-Text Semantic Parsing' (AISTATS 2012), which also involves bilinear terms and is thus similar (but not the same) as the proposed energy function here. In fact, the Bordes et al. 2012 energy function seems to outperform the 2011 one (without KDE), hence I would argue that the former would have been a better baseline for comparisons.\\n\\n- A list of pros and cons (reasons to accept/reject).\", \"pros\": \"Clear demonstration of the superiority of the proposed energy function over that of Bordes et al. 2011.\", \"cons\": \"No comparison with the more recent energy function of Bordes et al. 2012, which has some similarities to the proposed Neural Tensor Networks.\\n\\nSince this was submitted to the workshop track, I would be inclined to have this paper accepted still. This is clearly work in progress (the submitted paper is only 4 pages long), and I think this line of work should be encouraged. However, I would suggest the authors also perform a comparison with the scoring function of Bordes et al. 2012 in future work, using their current protocol (which is nicely setup so as to thoroughly compare energy functions).\"}",
"{\"title\": \"review of Learning New Facts From Knowledge Bases With Neural Tensor Networks and\\n Semantic Word Vectors\", \"review\": \"This paper proposes a new model for modeling data of multi-relational knowledge bases such as Wordnet or YAGO. Inspired by the work of (Bordes et al., AAAI11), they propose a neural network-based scoring function, which is trained to assign high score to plausible relations. Evaluation is performed on Wordnet.\\n\\nThe main differences w.r.t. (Bordes et al., AAAI11) is the scoring function, which now involves a tensor product to encode for the relation type and the use of a non-linearity. It would be interesting if the authors could comment the motivations of their architecture. For instance, what does the tanh could model here?\", \"the_experiments_raise_some_questions\": \"- why do not also report the results on the original data set of (Bordes et al., AAAI11)? Even, is the data set contains duplicates, this stills makes a reference point.\\n- the classification task is hard to motivate. Link prediction is a problem of detection: very few positive to find in huge set of negative examples. Transform that into a balanced classification problem is a non-sense to me.\\n\\nThere have been several follow-up works to (Bordes et al., AAAI11) such as (Bordes et al., AISTATS12) or (Jenatton et al., NIPS12), that should be cited and discussed (some of those involve tensor for coding the relation type as well). Besides, they would also make the experimental comparison stronger.\\n\\nIt should be explained how the pre-trained word vectors trained by the model of Collobert & Weston are use in the model. Wordnet entities are senses and not words and, of course, there is no direct mapping from words to senses. Which heuristic has been used?\", \"pros\": [\"better experimental results\"], \"cons\": [\"skinny experimental section\", \"lack of recent references\"]}",
"{\"review\": \"We thank the reviewers for their comments and agree with most of them.\\n\\n- We've updated our paper on arxiv, and added the important experimental comparison to the model in 'Joint Learning of Words and Meaning Representations for Open-Text Semantic Parsing' (AISTATS 2012). \\n Experimental results show that our model also outperforms this model in terms of ranking & classification.\\n\\n- We didn't report the results on the original data because of the issues of overlap between training and testing set. \\n 80.23% of the examples in the testing set appear exactly in the training set.\\n 99.23% of the examples have e1 and e2 'connected' via some relation in the training set. Some relationships such as 'is similar to' are symmetric. \\n Furthermore, we can reach 92.8% of top 10 accuracy (instead of 76.7% in the original paper) using their model.\\n\\n- The classification task can help us predict whether a relationship is correct or not, thus we report both the results of classification and ranking. \\n\\n- To use the pre-trained word vectors, we ignore the senses of the entities in Wordnet in this paper. \\n\\n- The experiments section is short because we tried to keep the paper's length close to the recommended length. From the ICLR website: 'Papers submitted to this track are ideally 2-3 pages long'.\"}"
]
} |
IpmfpAGoH2KbX | Deep learning and the renormalization group | [
"Cédric Bény"
] | Renormalization group methods, which analyze the way in which the effective behavior of a system depends on the scale at which it is observed, are key to modern condensed-matter theory and particle physics. The aim of this paper is to compare and contrast the ideas behind the renormalization group (RG) on the one hand and deep machine learning on the other, where depth and scale play a similar role. In order to illustrate this connection, we review a recent numerical method based on the RG---the multiscale entanglement renormalization ansatz (MERA)---and show how it can be converted into a learning algorithm based on a generative hierarchical Bayesian network model. Under the assumption---common in physics---that the distribution to be learned is fully characterized by local correlations, this algorithm involves only explicit evaluation of probabilities, hence doing away with sampling. | [
"algorithm",
"deep learning",
"way",
"effective behavior",
"system",
"scale",
"key"
] | reject | https://openreview.net/pdf?id=IpmfpAGoH2KbX | https://openreview.net/forum?id=IpmfpAGoH2KbX | ICLR.cc/2013/conference | 2013 | {
"note_id": [
"rGZJRE7IJwrK3",
"4Uh8Uuvz86SFd",
"7to37S6Q3_7Qe",
"tb0cgaJXQfgX6",
"7Kq-KFuY-y7S_",
"Qj1vSox-vpQ-U"
],
"note_type": [
"review",
"comment",
"review",
"review",
"review",
"review"
],
"note_created": [
1392852360000,
1363212060000,
1362321600000,
1363477320000,
1365121080000,
1362219360000
],
"note_signatures": [
[
"Charles Martin"
],
[
"Cédric Bény"
],
[
"anonymous reviewer 441c"
],
[
"Aaron Courville"
],
[
"Yann LeCun"
],
[
"anonymous reviewer acf4"
]
],
"structured_content_str": [
"{\"review\": \"It is noted that the connection between RG and multi-scale modeling has been pointed out by Candes in\\n\\nE. J. Cand\\u00e8s, P. Charlton and H. Helgason. Detecting highly oscillatory signals by chirplet path pursuit. Appl. Comput. Harmon. Anal. 24 14-40.\\n\\nwhere it was noted that the multi-scale basis suggested in this convex optimization approach is equivalent to the Wilson basis from his original work on RG theory in the 1970s\"}",
"{\"reply\": \"I have submitted a replacement to the arXiv on March 13, which should be available the same day at 8pm EST/EDT as version 4.\\n\\nIn order to address the first issue, I rewrote section 2 to make it less confusing, specifically by not trying to be overly general. I also rewrote the caption of figure 1 to make it a nearly self-contained explanation of what the model is for a specific one-dimensional example. The content of section 2 essentially explains what features must be kept for any generalization, and section 3 clarifies why these features are important. \\n\\nConcerning the second issue, I agree that this work is preliminary, and implementation is the next step.\"}",
"{\"title\": \"review of Deep learning and the renormalization group\", \"review\": \"The model tries to relate renormalization group and deep learning, specifically hierarchical Bayesian network. The primary problems are that 1) the paper is only descriptive - it does not explain models clearly and precisely, and 2) it has no numerical experiments showing that it works.\", \"what_it_needs_is_something_like\": \"1) Define the DMRG (or whatever verion of RG you need) and Define the machine learning model. Do these with explicit formulas so reader can know what exactly they are. Things like 'Instead, we only allow for maps \\u03c0j which are local in two important ways: firstly, each input vertex can only causally influence the values associated with the m output vertices that it represents plus all kth degree neighbors of these, where k would typically be small' are very hard to follow.\\n\\n2) Show the mapping between the two models. \\n\\n3) Show what it does on real data and that it does something interesting and/or useful. (Real data e.g. sound signals, images, text,...)\"}",
"{\"review\": \"Reviewer 441c,\\n\\nHave you taken a look at the new version of the paper? Does it go some way to addressing your concerns?\"}",
"{\"review\": \"It seems to me like there could be an interesting connection between approximate inference in graphical models and the renormalization methods.\\n\\nThere is in fact a long history of interactions between condensed matter physics and graphical models. For example, it is well known that the loopy belief propagation algorithm for inference minimizes the Bethe free energy (an approximation of the free energy in which only pairwise interactions are taken into account and high-order interactions are ignored). More generally, variational methods inspired by statistical physics have been a very popular topic in graphical model inference.\\n\\nThe renormalization methods could be relevant to deep architectures in the sense that the grouping of random variable resulting from a change of scale could be be made analogous with the pooling and subsampling operations often used in deep models. \\n\\nIt's an interesting idea, but it will probably take more work (and more tutorial expositions of RG) to catch the attention of this community.\"}",
"{\"title\": \"review of Deep learning and the renormalization group\", \"review\": \"This paper discusses deep learning from the perspective of renormalization groups in theoretical physics. Both concepts are naturally related; however, this relation has not been formalized adequately thus far and advancing this is a novelty of the paper. The paper contains a non-technical and insightful exposition of concepts and discusses a learning algorithm for stochastic networks based on the `multiscale entanglement renormalization ansatz' (MERA). This contribution will potentially evoke the interest of many readers.\"}"
]
} |
SqNvxV9FQoSk2 | Switched linear encoding with rectified linear autoencoders | [
"Leif Johnson",
"Craig Corcoran"
] | Several recent results in machine learning have established formal connections between autoencoders---artificial neural network models that attempt to reproduce their inputs---and other coding models like sparse coding and K-means. This paper explores in depth an autoencoder model that is constructed using rectified linear activations on its hidden units. Our analysis builds on recent results to further unify the world of sparse linear coding models. We provide an intuitive interpretation of the behavior of these coding models and demonstrate this intuition using small, artificial datasets with known distributions. | [
"linear",
"models",
"rectified linear autoencoders",
"machine learning",
"formal connections",
"autoencoders",
"neural network models",
"inputs",
"sparse coding"
] | reject | https://openreview.net/pdf?id=SqNvxV9FQoSk2 | https://openreview.net/forum?id=SqNvxV9FQoSk2 | ICLR.cc/2013/conference | 2013 | {
"note_id": [
"ff2dqJ6VEpR8u",
"kH1XHWcuGjDuU",
"oozAQe0eAnQ1w"
],
"note_type": [
"review",
"review",
"review"
],
"note_created": [
1362252900000,
1361946600000,
1362360840000
],
"note_signatures": [
[
"anonymous reviewer 5a78"
],
[
"anonymous reviewer 9c3f"
],
[
"anonymous reviewer ab3b"
]
],
"structured_content_str": [
"{\"title\": \"review of Switched linear encoding with rectified linear autoencoders\", \"review\": \"In the deep learning community there has been a recent trend in\\nmoving away from the traditional sigmoid/tanh activation function to \\ninject non-linearity into the model. One activation function that has \\nbeen shown to work well in a number of cases is called Rectified \\nLinear Unit (ReLU). \\nBuilding on the prior research, this paper aims to provide an \\nanalysis of what is going on while training networks using these \\nactivation functions, and why do they work. In particular the authors \\nprovide their analysis from the context of training a linear auto-encoder \\nwith rectified linear units on a whitened data. They use a toy dataset in \\n3 dimensions (gaussian and mixture of gaussian) to conduct the analysis. \\nThey loosely test the hypothesis obtained from the toy datasets on the \\nMNIST data.\\n\\nThough the paper starts with a lot of promise, unfortunately it fails to \\ndeliver on what was promised. There is nothing in the paper (no new \\nidea or insight) that is either not already known, or fairly straightforward \\nto see in the case of linear auto-encoders trained using a rectified \\nlinear thresholding unit. Furthermore there are a number of flaws in \\nthe paper. For instance, the analysis of section 3.1 seems to be a bit \\nmis-leading. By definition if one fixes the weight vector w to [1,0] there \\nis no way that the sigmoid can distinguish between x's which are \\ngreater than S for some S. However with the weight vector taking \\narbitrary continuous values, that may not be the case. Besides, the \\npurpose of the encoder is to learn a representation, which can best \\nrepresent the input, and coupled with the decoder can reconstruct it. \\nThe encoder learning an identity function (as is argued in the paper) is not \\nof much use. Finally, the whole analysis of section 3 was based on a \\nlinear auto-encoder, whose encoder-decoder weights were tied. However \\nin the case of MNIST the authors show the filters learnt from an untied \\nweight auto-encoder. There seems to be some disconnect there. \\n\\nIn short the paper does not offer any novel insight or idea with respect \\n to learning representation using auto-encoders with rectified linear \\nthresholding function. Various gaps in the analysis also makes it a not \\nvery high quality work.\"}",
"{\"title\": \"review of Switched linear encoding with rectified linear autoencoders\", \"review\": \"This paper analyzes properties of rectified linear autoencoder\\nnetworks. \\n\\nIn particular, the paper shows that rectified linear networks are\\nsimilar to linear networks (ICA). The major difference is the\\nnolinearity ('switching') that allows the decoder to select a subset\\nof features. Such selection can be viewed as a mixture of ICA models.\\n\\nThe paper visualizes the hyperplanes learned for a 3D dataset and\\nshows that the results are sensible (i.e., the learned hyperplanes\\ncapture the components that allow the reconstruction of the data).\", \"some_comments\": \"- On the positive side, I think that the paper makes a interesting attempt to understand properties of nonlinear networks, which is typically hard because of the nonlinearities. The choice of the activation function (rectified linear) makes such analysis possible. \\n\\n- I understand that the paper is mainly an analysis paper. But I feel\\n that it seems to miss a strong key thesis. It would be more interesting that the analysis reveals surprising/unexpected results.\\n\\n- The analyses do not seem particularly deep nor surprising. And I do\\n not find that they can advance our field in some way. I wonder if it's possible to make the analysis more constructive so that we can improve our algorithms. Or at least the analyses can reveal certain surprising properties of unsupervised algorithms.\\n\\n- It's unclear the motivation behind the use of rectified linear\\n activation function for analysis. \\n\\n- The paper touches a little bit on whitening. I find the section on\\n this topic is unsatisfying. It would be good to analyse the role of whitening in greater details here too (as claimed by abstract and introduction).\\n\\n- The experiments show that it's possible to learn penstrokes and\\n Gabor filters from natural images. But I think this is no longer\\n novel. And that there are very few practical implications of\\n this work.\"}",
"{\"title\": \"review of Switched linear encoding with rectified linear autoencoders\", \"review\": \"The paper draws links between autoencoders with tied weights and rectified linear units (similar to Glorot et al AISTATS 2011), the triangle k-means and soft-thresholding of Coates et al. (AISTATS 2011 and ICML 2011), and the linear-autoencoder-like ICA learning criterion of Le et al (NIPS 2011).\\nThe first 3 have in common that, for each example, they yield a subset of non-zero (active) hidden units, that result from a simple thresholding. And it is argued that the training objective thus restricted to that subset corresponds to that of Le et al's ICA. Many 2D and 3D graphics with Gaussian data try to convey a geometric intuition of what is going on. \\n\\nI find rather obvious that these methods switch on a different linear basis for each example. The specific conection highlighted with Le et al's ICA work is more interesting, but it only applies if L1 feature sparsity regularization is employed in addition to the rectified linear activation function.\\n\\nAt the present stage, my impression is that this paper mainly reflect on the authors' maturing perception of links between the various methods, together with their building of an intuitive geometric understanding of how they work. But it is not yet ripe and its take home message not clear.\\nWhile its reflections are not without basis or potential interest they are not currently sufficiently formally exposed and read like a set of loosely bundled observations. I think the paper could greatly benefit from a more streamlined central thesis and message with supporting arguments.\\n\\nThe main empirical finding from the small experiments in this paper seems to be that the training criterion tends to yield pairs of opposed (negated) feature vectors. What we should conclude from this is however unclear.\\n \\nThe graphics are too many. Several seem redundant and are not particularly enlightening for our understanding. Also the use of many Gaussian data examples seems a poor choice to highlight or analyse the switching behavior of these 'switched linear coding' techniques (what does switching buy us if a PCA can capture about all there is about the structure?).\"}"
]
} |
DD2gbWiOgJDmY | Why Size Matters: Feature Coding as Nystrom Sampling | [
"Oriol Vinyals",
"Yangqing Jia",
"Trevor Darrell"
] | Recently, the computer vision and machine learning community has been in favor of feature extraction pipelines that rely on a coding step followed by a linear classifier, due to their overall simplicity, well understood properties of linear classifiers, and their computational efficiency. In this paper we propose a novel view of this pipeline based on kernel methods and Nystrom sampling. In particular, we focus on the coding of a data point with a local representation based on a dictionary with fewer elements than the number of data points, and view it as an approximation to the actual function that would compute pair-wise similarity to all data points (often too many to compute in practice), followed by a Nystrom sampling step to select a subset of all data points. Furthermore, since bounds are known on the approximation power of Nystrom sampling as a function of how many samples (i.e. dictionary size) we consider, we can derive bounds on the approximation of the exact (but expensive to compute) kernel matrix, and use it as a proxy to predict accuracy as a function of the dictionary size, which has been observed to increase but also to saturate as we increase its size. This model may help explaining the positive effect of the codebook size and justifying the need to stack more layers (often referred to as deep learning), as flat models empirically saturate as we add more complexity. | [
"nystrom",
"data points",
"size matters",
"feature",
"approximation",
"bounds",
"function",
"dictionary size",
"computer vision",
"machine learning community"
] | conferenceOral-iclr2013-workshop | https://openreview.net/pdf?id=DD2gbWiOgJDmY | https://openreview.net/forum?id=DD2gbWiOgJDmY | ICLR.cc/2013/conference | 2013 | {
"note_id": [
"EW9REhyYQcESw",
"oxSZoe2BGRoB6",
"8sJwMe5ZwE8uz"
],
"note_type": [
"review",
"review",
"review"
],
"note_created": [
1362202140000,
1362196320000,
1363264440000
],
"note_signatures": [
[
"anonymous reviewer 1024"
],
[
"anonymous reviewer 998c"
],
[
"Oriol Vinyals, Yangqing Jia, Trevor Darrell"
]
],
"structured_content_str": [
"{\"title\": \"review of Why Size Matters: Feature Coding as Nystrom Sampling\", \"review\": \"The authors provide an analysis of the accuracy bounds of feature coding + linear classifier pipelines. They predict an approximate accuracy bound given the dictionary size and correctly estimate the phenomenon observed in the literature where accuracy increases with dictionary size but also saturates.\", \"pros\": [\"Demonstrates limitations of shallow models and analytically justifies the use of deeper models.\"]}",
"{\"title\": \"review of Why Size Matters: Feature Coding as Nystrom Sampling\", \"review\": \"This paper presents a theoretical analysis and empirical validation of a novel view of feature extraction systems based on the idea of Nystrom sampling for kernel methods. The main idea is to analyze the kernel matrix for a feature space defined by an off-the-shelf feature extraction system. In such a system, a bound is identified for the error in representing the 'full' dictionary composed of all data points by a Nystrom approximated version (i.e., represented by subsampling the data points randomly). The bound is then extended to show that the approximate kernel matrix obtained using the Nystrom-sampled dictionary is close to the true kernel matrix, and it is argued that the quality of the approximation is a reasonable proxy for the classification error we can expect after training. It is shown that this approximation model qualitatively predicts the monotonic rise in accuracy of feature extraction with larger dictionaries and saturation of performance in experiments.\\n\\nThis is a short paper, but the main idea and analysis are interesting. It is nice to have some theoretical machinery to talk about the empirical finding of rising, saturating performance. In some places I think more detail could have been useful.\\n\\nOne undiscussed point is the fact that many dictionary-learning methods do more than populate the dictionary with exemplars so it's possible that a 'learning' method might do substantially better (perhaps reaching top performance much sooner). This doesn't appear to be terribly important in low-dimensional spaces where sampling strategies work about as well as learning, but could be critical for high-dimensional spaces (where sampling might asymptote much more slowly than learning). It seems worth explaining the limitations of this analysis and how it relates to learning. \\n\\nA few other questions / comments:\\n\\nThe calibration of constants for the bound in the experiments was not clear to me. How is the mapping from the bound (Eq. 2) to classification accuracy actually done?\\n\\nThe empirical validation of the lower bound relies on a calibration procedure that, as I understand it, effectively ends up rescaling a fixed-shape curve to fit observed trend in accuracy on the real problem. As a result, it seems like we could come up with a 'nonsense' bound that happened to have such a shape and then make a similar empirical claim. Is there a way to extend the analysis to rule this out? Or perhaps I misunderstand the origin of the shape of this curve.\", \"pros\": \"(1) A novel view of feature extraction that appears to yield a reasonable explanation for the widely observed performance curves of these methods is presented. I don't know how much profit this view might yield, but perhaps that will be made clear by the 'overshooting' method foreshadowed in the conclusion.\\n(2) A pleasingly short read adequate to cover the main idea. (Though a few more details might be nice.)\", \"cons\": \"(1) How this bound relates to the more common case of 'trained' dictionaries is unclear.\\n(2) The empirical validation shows the basic relationship qualitatively, but it is possible that this does not adequately validate the theoretical ideas and their connection to the observed phenomenon.\"}",
"{\"review\": \"We agree with the reviewer regarding the existence of better dictionary learning methods, and note that many of these are also related to corresponding advanced Nystrom sampling methods, such as [Zhang et al. Improved Nystrom low-rank approximation and error analysis. ICML 08]. These methods could improve performance in absolute terms, but that is an orthogonal issue to our main results. Nonetheless, we think this is a valuable observation, and will include a discussion of these points in the final version of this paper.\\n\\nThe relationship between a kernel error bound and classification accuracy is discussed in more detail in [Cortes et al. On the Impact of Kernel Approximation on Learning Accuracy. AISTATS 2010]. The main result is that the bounds are proportional, verifying our empirical claims. We will add this reference to the paper.\\n\\nRegarding the comment on fitting the shape of the curve, we are only using the first two points to fit the 'constants' given in the bound, so the fact that it extrapolates well in many tasks gives us confidence that the bound is accurate.\"}"
]
} |
i87JIQTAnB8AQ | The Diagonalized Newton Algorithm for Nonnegative Matrix Factorization | [
"Hugo Van hamme"
] | Non-negative matrix factorization (NMF) has become a popular machine learning approach to many problems in text mining, speech and image processing, bio-informatics and seismic data analysis to name a few. In NMF, a matrix of non-negative data is approximated by the low-rank product of two matrices with non-negative entries. In this paper, the approximation quality is measured by the Kullback-Leibler divergence between the data and its low-rank reconstruction. The existence of the simple multiplicative update (MU) algorithm for computing the matrix factors has contributed to the success of NMF. Despite the availability of algorithms showing faster convergence, MU remains popular due to its simplicity. In this paper, a diagonalized Newton algorithm (DNA) is proposed showing faster convergence while the implementation remains simple and suitable for high-rank problems. The DNA algorithm is applied to various publicly available data sets, showing a substantial speed-up on modern hardware. | [
"diagonalized newton algorithm",
"nmf",
"nonnegative matrix factorization",
"data",
"convergence",
"matrix factorization",
"popular machine",
"many problems",
"text mining"
] | conferencePoster-iclr2013-conference | https://openreview.net/pdf?id=i87JIQTAnB8AQ | https://openreview.net/forum?id=i87JIQTAnB8AQ | ICLR.cc/2013/conference | 2013 | {
"note_id": [
"RzSh7m1KhlzKg",
"FFkZF49pZx-pS",
"MqwZf2jPZCJ-n",
"oo1KoBhzu3CGs",
"aplzZcXNokptc",
"EW5mE9upmnWp1"
],
"note_type": [
"review",
"review",
"review",
"review",
"review",
"review"
],
"note_created": [
1363574460000,
1362210360000,
1363744920000,
1362192540000,
1363615980000,
1362382860000
],
"note_signatures": [
[
"Hugo Van hamme"
],
[
"anonymous reviewer 4322"
],
[
"Hugo Van hamme"
],
[
"anonymous reviewer 57f3"
],
[
"Hugo Van hamme"
],
[
"anonymous reviewer 482c"
]
],
"structured_content_str": [
"{\"review\": \"I would like to thank the reviewers for their investment of time and effort to formulate their valued comments. The paper was updated according to your comments. Below I address your concerns:\\n\\nA common remark is the lack of comparison with state-of-the-art NMF solvers for Kullback-Leibler divergence (KLD). I compared the performance of the diagonalized Newton algorithm (DNA) with the wide-spread multiplicative updates (MU) exactly because it is the most common baseline and almost every algorithm has been compared against it. As you suggested, I did run comparison tests and I will present the results here. I need to find a method to post some figures to make the point clear. First, I compared against the Cyclic Coordinate Descent (CCD) by Hsieh & Dhillon using the software they provide on their website. I ran the synthetic 1000x500 example (rank 10). The KLD as a function of iteration number for DNA and CCD are very close (I did not find a way to post a plot on this forum). However, in terms of CPU (ran on the machine I mention in the paper) DNA is a lot faster with about 200ms per iteration for CCD and about 50ms for DNA. Note that CCD is completely implemented in C++ (embedded in a mex-file) while DNA is implemented in matlab (with one routine in mex - see the download page mentioned in the paper). As for the comparison with SBCD (scalar block coordinate descent), I also ran their code on the same example, but unfortunately, one of the matrix factors is projected to an all-zero matrix in the first iteration. I have not found the cause yet.\\nWhat definitely needs investigation is that I observe CCD to be 4 times slower than DNA. Using my implementation for MU, 1200 MU iterations are actually as fast as the 100 CCD iteration. (My matlab MU implementation is 10 times faster than the one provided by Hsieh&Dhillon). For these reasons, I am not too keen on quickly including a comparison in terms of CPU time (which is really the bottom line), as implementation issues seem not so trivial. Even more so for a comparison on a GPU, where the picture could be different from the CPU for the cyclic updates in CCD. A thorough comparison on these two architectures seems like a substantial amount of future work. But I hope the data above data convince you the present paper and public code are significant work. \\n\\nReply to Anonymous 57f3\\n' it's not clear that matrix factorization is a problem for which optimization speed is a primary concern (all of the experiments in the paper terminate after only a few minutes)'\\n\\n>> There are practical problems where NMF takes hours, e.g. the problems of [6], which is essentially learning a speech recognizer model from data. We are now applying NMF-based speech recognition in learning paradigms that learn from user interaction examples. In such cases, you want to wait seconds, not minutes. Also, there is an increased interest in 'large-sccale NMF problems'.\\n\\n'Using a KL-divergence objective seems strange to me since there aren't any distributions involved, just matrices, whose entries, while positive, need not sum to 1 along any row or column. Are the entries of the matrices supposed to represent probabilities? '\\n\\n>> Notice that the second and third term in the expression for KLD (Eq. 1) are normalization terms such that we don't require V or Z to sum to unity. This very common in the NMF literature, and was motivated in a.o. [1]. KLD is appropriate if the data follow a (mixture of) Poisson distribution. 
While this is realistic for counts data (like in the Newsgroup corpus), the KLD is also applied on Fourier spectra, e.g. for speaker separation or speech enhancement, with success. Imho, the relevance of KLD does not need to be motivated in a paper on algorithms, see also [18] and [20] ( numbering in the new paper).\\n\\n'I understand that this is a formulation used in previous work ([1]), but it should be briefly explained. '\\n>> Added a sentence about the Poisson hypothesis after Eq. 1.\\n\\n'You should explain the connection between your work and [17] more carefully. Exactly how is it similar/different? '\\n>> Reformulated. [17] (now [18]) uses a totally different motivation, but also involves the second order derivatives, like a Newton method.\\n\\n'Has a diagonal Newton-type approach ever been used for the squared error objective? '\\n>> A reference is given now. Note however that KLD behaves substantially different.\\n'the smallest cost' -> 'leading to the greatest reduction in d_{KL}(V,Z)'\\n 'the variables required to compute' -> 'the quantities required to compute' \\n>> corrected\\n\\nYou should avoid using two meanings of the word 'regularized' as this can lead to confusion. Maybe 'damped' would work better to refer to the modifications made to the Newton updates that prevent divergence? \\n>> Yes. A lot better. Corrected.\\n\\n'Have you compared to using damped/'regularized' Newton updates instead of your method of selecting the best between the Newton and MU updates? In my experience, damping, along the lines of the LM algorithm or something similar, can help a great deal. '\\n>> yes. I initially tried to control the damping by adding lambda*I to the Hessian, where lambda is decreased on success and increased if the KLD increases. I found it difficult to find a setting that worked well on a variety of problems. \\n\\nI would recommend using '\\top' to denote matrix transposition instead of what you are doing. Section 2 needs to be reorganized. It's hard for me to follow what you are trying to say here. First, you introduce some regularization terms. Then, you derive a particular fixed-point update scheme. When you say 'Minimizing [...] is achieved by alternative updates...' surely you mean that this is just one particular way it might be done. \\n>> That's indeed what I meant to say. 'is' => 'can be'\\n\\nYou say you are applying the KKT conditions, but your derivation is strange and you seem to skip a bunch of steps and neglect to use explicit KKT multipliers (although the result seems correct based on my independent derivation). But when you say: 'If h_r = 0, the partial derivative is positive. Hence the product of h_r and the partial derivative is always zero', I don't see how this is a correct logical implication. Rather, the product is zero for any solution satisfying complementary slackness. \\n>> I meant this holds for any solution of (5). This is corrected.\\n\\nAnd I don't understand why it is particularly important that the sum over equation (6) is zero (which is how the normalization in eqn 10 is justified). Surely this is only a (weak) necessary condition, but not a sufficient one, for a valid optimal solution. Or is there some reason why this is sufficient (if so, please state it in the paper!). \\n>> A Newton update may yield a guess that does not satisfy this (weak) necessary condition. We can satisfy this condition easily with the renormalization (10), which is reflected in steps 16 and 29.\\n\\nI don't understand how the sentence on line 122 'Therefor...' 
is not a valid logical implication. Did you actually mean to use the word 'therefor' here? The lower bound is, however, correct. 'floor resp. ceiling'??\\n>> 'Therefore' => 'To respect the nonnegativity and to avoid the singularity\\u201d\\n\\nReply to Anonymous 4322\\nSee comparison described above.\\nI added more about the differences with the prior work you mention.\\n\\nReply to Anonymous 482c\\nSee also comparison data detailed above.\\nYou are right there is a lot of generic work on Hessian preconditioning. I refer to papers that work on damping and line search in the context of NMF ([10], [11], [12], [14] ...). Diagonalization is only related in the sense that it ensures the Hessian to be positive definite (not in general, but here is does).\"}",
"{\"title\": \"review of The Diagonalized Newton Algorithm for Nonnegative Matrix Factorization\", \"review\": \"Summary:\\n\\nThe paper presents a new algorithm for solving L1 regularized NMF problems in which the fitting term is the Kullback-Leiber divergence. The strategy combines the classic multiplicative updates with a diagonal approximation of Newton's method for solving the KKT conditions of the NMF optimization problem. This approximation results in a multiplicative update that is computationally light. Since the objective function might increase under the Newton updates, the author proposes to simultaneously compute both multiplicative and Newton updates and choose the one that produces the largest descent. The algorithm is tested on several datasets, generally producing improvements in both number of iterations and computational time with respect to the standard multiplicative updates.\\n\\nI believe that the paper is well written. It proposes an efficient optimization algorithm for solving a problem that is not novel but very important in many applications. The author should highlight the strengths of the proposed approach and the differences with recent works presented in the literature.\\n\\nPros.:\\n\\n- the paper addresses an important problem in matrix factorization,\\nextensively used in audio processing applications\\n- the experimental results show that the method is more efficient than the multiplicative algorithm (which is the most widely used optimization tool), without significantly increasing the algorithmic complexity\", \"cons\": [\"experimental comparisons against related approaches is missing\", \"this approach seems limited to only work for the Kullback-Leiber\", \"divergence as fitting cost.\"], \"general_comments\": \"I believe that the paper lacks of experimental comparisons with other accelerated optimization schemes for solving the same problem. In particular, I believe that the author should include comparisons with [17] and the work,\\n\\nC.-J. Hsieh and I. S. Dhillon. Fast coordinate descent methods with variable selection for non-negative matrix factorization. In Proceedings of the 17th ACM SIGKDD, pages 1064\\u20131072, 2011.\\n\\nwhich should also be cited.\\n\\nAs the author points out, the approach in [17] is very similar to the one proposed in this paper (they have code available online). The work by Hsieh and Dhillon is also very related to this paper. They propose a coordinate descent method using Newton's method to solve the individual one-variable sub-problems. More details on the differences with these two works should be provided in Section 1.\\n\\nThe experimental setting itself seems convincing. Figures 2 and 3 are never cited in the paper.\"}",
"{\"review\": \"First: sorry for the multiple postings. Browser acting weird. Can't remove them ...\", \"update\": \"I was able to get the sbcd code to work. Two mods required (refer to Algorithm 1 in the Li, Lebanon & Park paper - ref [18] in v2 paper on arxiv):\\n1) you have to be careful with initialization. If the estimates for W or H are too large, E = A - WH could potentially contain too many zeros in line 3 and the update maps H to all zeros. Solution: I first perform a multiplicative update on W and H so you have reasonably scaled estimates.\\n2) line 16 is wrongly implemented in the publicly available ffhals5.m \\n\\nI reran the comparison (different machine though - the one I used before was fully loaded):\\n1) CCD (ref [17]) - the c++ code compiled to a matlab mex file as downloaded from the author's website and following their instructions. \\n2) DNA - fully implemented in matlab as available from http://www.esat.kuleuven.be/psi/spraak/downloads/\\n3) SBCD (ref [18]) - code fully in matlab with mods above\\n4) MU (multiplicative updates) - implementation fully in matlab as available from http://www.esat.kuleuven.be/psi/spraak/downloads/\", \"the_kld_as_a_function_of_the_iteration_for_the_rank_10_random_1000x500_matrix_is_shown_in__https\": \"//dl.dropbox.com/u/915791/iteration.pdf.\\nWe observe that SBCD takes a good start but then slows down. DNA is best after the 5th iteration.\", \"the_kld_as_a_function_of_cpu_time_is_shown_in_https\": \"//dl.dropbox.com/u/915791/time.pdf\\nDNA is the clear winner, followed by MU which beats both SBCD and CCD. This may be surprising, but as I mentioned earlier, there are some implementation issues. CCD is a single-thread implementation, while matlab is multi-threaded and works in parrallel. However, the cyclic updates in CCD are not very suitable for parallelization. The SBCD needs reimplementation, honestly.\\n\\nIn summary, DNA does compare favourably to the state-of-the-art, but I don't really feel comfortable about including such a comparison in a scientific paper if there is such a dominant effect of programming style/skills on the result.\"}",
"{\"title\": \"review of The Diagonalized Newton Algorithm for Nonnegative Matrix Factorization\", \"review\": \"This paper develops a new iterative optimization algorithm for performing non-negative matrix factorization, assuming a standard 'KL-divergence' objective function. The method proposed combines the use of a traditional updating scheme ('multiplicative updates' from [1]) in the initial phase of optimization, with a diagonal Newton approach which is automatically switched to when it will help. This switching is accomplished by always computing both updates and taking whichever is best, which will typically be MU at the start and the more rapidly converging (but less stable) Newton method towards the end. Additionally, the diagonal Newton updates are made more stable using a few tricks, some of which are standard and some of which may not be. It is found that this can provide speed-ups which may be mild or significant, depending on the application, versus a standard approach which only uses multiplicative updates. As pointed out by the authors, Newton-type methods have been explored for non-negative matrix factorization before, but not for this particularly objective with a diagonal approximation (except perhaps [17]?).\\n\\nThe writing is rough in a few places but okay overall. The experimental results seem satisfactory compared to the classical algorithm from [1], although comparisons to other potentially more recent approaches is conspicuously absent. I'm not an experiment on matrix factorization or these particular datasets so it's hard for me to independently judge if these results are competitive with state of the art methods.\\n\\nThe paper doesn't seem particularly novel to me, but matrix factorization isn't a topic I find particularly interesting, so this probably biases me against the paper somewhat.\", \"pros\": [\"reasonably well presented\", \"empirical results seem okay\"], \"cons\": [\"comparisons to more recent approaches is lacking\", \"it's not clear that matrix factorization is a problem for which optimization speed is a primary concern (all of the experiments in the paper terminate after only a few minutes)\", \"writing is rough in a few places\"], \"detailed_comments\": \"Using a KL-divergence objective seems strange to me since there aren't any distributions involved, just matrices, whose entries, while positive, need not sum to 1 along any row or column. Are the entries of the matrices supposed to represent probabilities? I understand that this is a formulation used in previous work ([1]), but it should be briefly explained.\\n\\nYou should explain the connection between your work and [17] more carefully. Exactly how is it similar/different?\\n\\nHas a diagonal Newton-type approach ever been used for the squared error objective?\\n\\n'the smallest cost' -> 'leading to the greatest reduction in d_{KL}(V,Z)'\\n\\n'the variables required to compute' -> 'the quantities required to compute'\\n\\nYou should avoid using two meanings of the word 'regularized' as this can lead to confusion. Maybe 'damped' would work better to refer to the modifications made to the Newton updates that prevent divergence?\\n\\nHave you compared to using damped/'regularized' Newton updates instead of your method of selecting the best between the Newton and MU updates? 
In my experience, damping, along the lines of the LM algorithm or something similar, can help a great deal.\\n\\nI would recommend using '\\top' to denote matrix transposition instead of what you are doing.\\n\\nSection 2 needs to be reorganized. It's hard for me to follow what you are trying to say here. First, you introduce some regularization terms. Then, you derive a particular fixed-point update scheme. When you say 'Minimizing [...] is achieved by alternative updates...' surely you mean that this is just one particular way it might be done. Also, are these derivation prior work (e.g. from [1])? If so, it should be stated.\\n\\nIt's hard to follow the derivations in this section. You say you are applying the KKT conditions, but your derivation is strange and you seem to skip a bunch of steps and neglect to use explicit KKT multipliers (although the result seems correct based on my independent derivation). But when you say: 'If h_r = 0, the partial derivative is positive. Hence the product of h_r and the partial derivative is always zero', I don't see how this is a correct logical implication. Rather, the product is zero for any solution satisfying complementary slackness. And I don't understand why it is particularly important that the sum over equation (6) is zero (which is how the normalization in eqn 10 is justified). Surely this is only a (weak) necessary condition, but not a sufficient one, for a valid optimal solution. Or is there some reason why this is sufficient (if so, please state it in the paper!).\\n\\nI don't understand how the sentence on line 122 'Therefor...' is not a valid logical implication. Did you actually mean to use the word 'therefor' here? The lower bound is, however, correct.\\n\\n'floor resp. ceiling'??\"}",
"{\"review\": \"About the comparison with Cyclic Coordinate Descent (as described in C.-J. Hsieh and I. S. Dhillon, \\u201cFast Coordinate Descent Methods with Variable Selection for Non-negative Matrix Factorization,\\u201d in proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD), San Diego, CA, USA, August 2011) using their software:\", \"the_plots_of_the_kld_as_a_function_of_iteration_number_and_cpu_time_are_located_at_https\": \"//dl.dropbox.com/u/915791/iteration.pdf and https://dl.dropbox.com/u/915791/time.pdf\\nThe data is the synthetic 1000x500 random matrix of rank 10. They show DNA has comparable convergence behaviour and the implementation is faster, despite it's matlab (DNA) vs. c++ (CCD).\"}",
"{\"title\": \"review of The Diagonalized Newton Algorithm for Nonnegative Matrix Factorization\", \"review\": \"Overview:\\n\\nThis paper proposes an element-wise (diagonal Hessian) Newton method to speed up convergence of the multiplicative update algorithm (MU) for NMF problems. Monotonic progress is guaranteed by an element-wise fall-back mechanism to MU. At a minimal computational overhead, this is shown to be effective in a number of experiments. \\n\\nThe paper is well-written, the experimental validation is convincing, and the author provides detailed pseudocode and a matlab implementation.\", \"comments\": \"There is a large body of related work outside of the NMF field that considers diagonal Hessian preconditioning of updates, going back (at least) as early as Becker & LeCun in 1988.\\n\\nSwitching between EM and Newton update (using whichever is best, element-wise) is an interesting alternative to more classical forms of line search: it may be worth doing a more detailed comparison to such established techniques.\\n\\nI would appreciate a discussion of the potential of extending the idea to non KL-divergence costs.\"}"
]
} |
qEV_E7oCrKqWT | Zero-Shot Learning Through Cross-Modal Transfer | [
"Richard Socher",
"Milind Ganjoo",
"Hamsa Sridhar",
"Osbert Bastani",
"Christopher Manning",
"Andrew Y. Ng"
] | This work introduces a model that can recognize objects in images even if no training data is available for the objects. The only necessary knowledge about the unseen categories comes from unsupervised large text corpora. In our zero-shot framework distributional information in language can be seen as spanning a semantic basis for understanding what objects look like. Most previous zero-shot learning models can only differentiate between unseen classes. In contrast, our model can both obtain state of the art performance on classes that have thousands of training images and obtain reasonable performance on unseen classes. This is achieved by first using outlier detection in the semantic space and then two separate recognition models. Furthermore, our model does not require any manually defined semantic features for either words or images. | [
"model",
"transfer",
"objects",
"images",
"unseen classes",
"work",
"training data",
"available",
"necessary knowledge",
"unseen categories"
] | conferenceOral-iclr2013-workshop | https://openreview.net/pdf?id=qEV_E7oCrKqWT | https://openreview.net/forum?id=qEV_E7oCrKqWT | ICLR.cc/2013/conference | 2013 | {
"note_id": [
"UgMKgxnHDugHr",
"88s34zXWw20My",
"ddIxYp60xFd0m",
"SSiPd5Rr9bdXm"
],
"note_type": [
"review",
"review",
"review",
"review"
],
"note_created": [
1362080640000,
1362001800000,
1363754820000,
1363754760000
],
"note_signatures": [
[
"anonymous reviewer cfb0"
],
[
"anonymous reviewer 310e"
],
[
"Richard Socher"
],
[
"Richard Socher"
]
],
"structured_content_str": [
"{\"title\": \"review of Zero-Shot Learning Through Cross-Modal Transfer\", \"review\": \"*A brief summary of the paper's contributions, in the context of prior work*\\nThis paper introduces a zero-shot learning approach to image classification. The model first tries to detect whether an image contains an object from a so-far unseen category. If not, the model relies on a regular, state-of-the art supervised classifier to assign the image to known classes. Otherwise, it attempts to identify what this object is, based on a comparison between the image and each unseen class, in a learned joint image/class representation space. The method relies on pre-trained word representations, extracted from unlabelled text, to represent the classes. Experiments evaluate the compromise between classification accuracy on the seen classes and the unseen classes, as a threshold for identifying an unseen class is varied. \\n\\n*An assessment of novelty and quality*\\nThis paper goes beyond the current work on zero-shot learning in 2 ways. First, it shows that very good classification of certain pairs of unseen classes can be achieved based on learned (as opposed to hand designed) representations for these classes. I find this pretty impressive.\\n\\nThe second contribution is in a method for dealing with seen and unseen classes, based on the idea that unseen classes are outliers. I've seen little work attacking directly this issue. Unfortunately, I'm not super impressed with the results: having to drop from 80% to 70% to obtain between 15% and 30% accuracy on unseen classes (and only for certain pairs) is a bit disappointing. But it's a decent first step. Plus, the proposed model is overall fairly simple, and zero-shot learning is quite challenging, so in fact it's perhaps surprising that a simple approach doesn't do worse.\\n\\nFinally, I find the paper reads well and is quite clear in its methodology.\\n\\nI do wonder why the authors claim that they 'further extend [the] theoretical analysis [of Palatucci et a.] ... and weaken their strong assumptions'. This sentence suggests there is a theoretical contribution to this work, which I don't see. So I would remove that sentence.\\n\\nAlso, the second paragraph of section 6 is incomplete.\\n\\n*A list of pros and cons (reasons to accept/reject)*\", \"the_pros_are\": [\"attacks an important, very hard problem\", \"goes significantly beyond the current literature on zero-shot learning\", \"some of the results are pretty impressive\"], \"the_cons_are\": [\"model is a bit simple and builds quite a bit on previous work on image classification [6] and unsupervised learning of word representation [15] (but frankly, that's really not such a big deal)\"]}",
"{\"title\": \"review of Zero-Shot Learning Through Cross-Modal Transfer\", \"review\": \"- The idea of learning a joint embedding of images and classes is not new, but is nicely explained\\nin the paper.\\n- the authors relate to other works on zero-shot learning. I have not seen references to similarity learning,\\n which can be used to say if two images are of the same class. These can obviously be used to determine\\n if an image is of a known class or not, without having seen any image of the class.\\n- The proposed approach to estimate the probability that an image is of a known class or not is based\\n on a mixture of Gaussians, where one Gaussian is estimated for each known class where the mean is\\n the embedding vector of the class and the standard deviation is estimated on the training samples of\\n that class. I have a few concerns with this:\\n * I wonder if the standard deviation will not be biased (small) since it is estimated on the training\\n samples. How important is that?\\n * I wonder if the threshold does not depend on things like the complexity of the class and the number\\n of training examples of the class. In general, I am not convinced that a single threshold can be used\\n to estimate if a new image is of a new class. I agree it might work for a small number of well\\n separate classes (like CIFAR-10), but I doubt it would work for problems with thousands of classes\\n which obviously are more interconnected to each other.\\n- I did not understand what to do when one decides that an image is of an unknown class. How should it\\n be labeled in that case?\\n- I did not understand why one needs to learn a separate classifier for the known classes, instead of\\n just using the distance to the known classes in the embedding space.\"}",
"{\"review\": [\"We thank the reviewers for their feedback.\", \"I have not seen references to similarity learning, which can be used to say if two images are of the same class. These can obviously be used to determine if an image is of a known class or not, without having seen any image of the class.\", \"Thanks for the reference. Would you use the images of other classes to train classification similarity learning? These would have a different distribution than the completely unseen images from the zero shot classes? In other words, what would the non-similar objects be?\", \"I wonder if the standard deviation will not be biased (small) since it is estimated on the training samples. How important is that?\", \"We tried fitting a general covariance matrix and it decreases performance.\", \"I wonder if the threshold does not depend on things like the complexity of the class and the number of training examples of the class.\", \"It might be and we notice that different thresholds should be selected via cross validation.\", \"In general, I am not convinced that a single threshold can be used to estimate if a new image is of a new class.\", \"Right, we found a better performance by fitting different thresholds for each class. We will include this in follow-up paper submissions.\", \"I did not understand what to do when one decides that an image is of an unknown class. How should it be labeled in that case?\", \"Using the distances to the word vectors of the unknown classes.\", \"I did not understand why one needs to learn a separate classifier for the known classes, instead of just using the distance to the known classes in the embedding space.\", \"reply.\", \"The discriminative classifiers have much higher accuracy than the simple distances for known classes.\", \"I do wonder why the authors claim that they 'further extend [the] theoretical analysis [of Palatucci et a.] ... and weaken their strong assumptions'.\", \"Thanks, we will take this and the other typo out and uploaded a new version to arxiv (which should be available soon).\"]}",
"{\"review\": [\"We thank the reviewers for their feedback.\", \"I have not seen references to similarity learning, which can be used to say if two images are of the same class. These can obviously be used to determine if an image is of a known class or not, without having seen any image of the class.\", \"Thanks for the reference. Would you use the images of other classes to train classification similarity learning? These would have a different distribution than the completely unseen images from the zero shot classes? In other words, what would the non-similar objects be?\", \"I wonder if the standard deviation will not be biased (small) since it is estimated on the training samples. How important is that?\", \"We tried fitting a general covariance matrix and it decreases performance.\", \"I wonder if the threshold does not depend on things like the complexity of the class and the number of training examples of the class.\", \"It might be and we notice that different thresholds should be selected via cross validation.\", \"In general, I am not convinced that a single threshold can be used to estimate if a new image is of a new class.\", \"Right, we found a better performance by fitting different thresholds for each class. We will include this in follow-up paper submissions.\", \"I did not understand what to do when one decides that an image is of an unknown class. How should it be labeled in that case?\", \"Using the distances to the word vectors of the unknown classes.\", \"I did not understand why one needs to learn a separate classifier for the known classes, instead of just using the distance to the known classes in the embedding space.\", \"reply.\", \"The discriminative classifiers have much higher accuracy than the simple distances for known classes.\", \"I do wonder why the authors claim that they 'further extend [the] theoretical analysis [of Palatucci et a.] ... and weaken their strong assumptions'.\", \"Thanks, we will take this and the other typo out and uploaded a new version to arxiv (which should be available soon).\"]}"
]
} |
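
The reviews in the record above describe the unseen-class detection step as fitting one Gaussian per seen class in the joint semantic space (mean at the class word vector, standard deviation fit on training embeddings), thresholding the resulting density, and then routing the image either to the supervised classifier or to the nearest unseen-class word vector. The sketch below only illustrates that description and is not the authors' code; the isotropic-Gaussian form, the array names, and the single global threshold are assumptions.

```python
import numpy as np

def route_embedded_image(z, class_means, class_stds, unseen_vectors, threshold, seen_classifier):
    """Decide seen vs. unseen for an embedded image z, then label it.

    z               -- (d,) image embedding in the joint semantic space
    class_means     -- (K, d) word vectors of the K seen classes (Gaussian means)
    class_stds      -- (K,) per-class standard deviations fit on training embeddings
    unseen_vectors  -- (U, d) word vectors of the unseen classes
    threshold       -- cutoff on the best class log-density (illustrative choice)
    seen_classifier -- callable z -> label for the seen classes
    """
    d = z.shape[0]
    sq_dist = np.sum((class_means - z) ** 2, axis=1)
    # Log-density of z under each isotropic class Gaussian.
    log_dens = -0.5 * sq_dist / class_stds ** 2 - d * np.log(class_stds) - 0.5 * d * np.log(2.0 * np.pi)
    if log_dens.max() >= threshold:
        return "seen", seen_classifier(z)          # defer to the discriminative model
    # Otherwise treat z as an outlier and label it with the nearest unseen word vector.
    return "unseen", int(np.argmin(np.sum((unseen_vectors - z) ** 2, axis=1)))
```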
ZhGJ9KQlXi9jk | Complexity of Representation and Inference in Compositional Models with Part Sharing | [
"Alan Yuille",
"Roozbeh Mottaghi"
] | This paper describes serial and parallel compositional models of multiple objects with part sharing. Objects are built by part-subpart compositions and expressed in terms of a hierarchical dictionary of object parts. These parts are represented on lattices of decreasing sizes which yield an executive summary description. We describe inference and learning algorithms for these models. We analyze the complexity of this model in terms of computation time (for serial computers) and numbers of nodes (e.g., 'neurons') for parallel computers. In particular, we compute the complexity gains by part sharing and its dependence on how the dictionary scales with the level of the hierarchy. We explore three regimes of scaling behavior where the dictionary size (i) increases exponentially with the level, (ii) is determined by an unsupervised compositional learning algorithm applied to real data, (iii) decreases exponentially with scale. This analysis shows that in some regimes the use of shared parts enables algorithms which can perform inference in time linear in the number of levels for an exponential number of objects. In other regimes part sharing has little advantage for serial computers but can give linear processing on parallel computers. | [
"inference",
"complexity",
"part",
"representation",
"compositional models",
"objects",
"terms",
"serial computers",
"parallel computers",
"level"
] | conferenceOral-iclr2013-conference | https://openreview.net/pdf?id=ZhGJ9KQlXi9jk | https://openreview.net/forum?id=ZhGJ9KQlXi9jk | ICLR.cc/2013/conference | 2013 | {
"note_id": [
"eG1mGYviVwE-r",
"EHF-pZ3qwbnAT",
"sPw_squDz1sCV",
"Rny5iXEwhGnYN",
"O3uWBm_J8IOlG",
"Av10rQ9sBlhsf",
"oCzZPts6ZYo6d",
"p7BE8U1NHl8Tr",
"zV1YApahdwAIu"
],
"note_type": [
"comment",
"review",
"review",
"comment",
"comment",
"comment",
"review",
"review",
"comment"
],
"note_created": [
1363730760000,
1362609900000,
1363536060000,
1362095760000,
1363731300000,
1363643940000,
1362211680000,
1361997540000,
1362352080000
],
"note_signatures": [
[
"Alan L. Yuille, Roozbeh Mottaghi"
],
[
"anonymous reviewer a9e8"
],
[
"Aaron Courville"
],
[
"Alan L. Yuille, Roozbeh Mottaghi"
],
[
"Alan L. Yuille, Roozbeh Mottaghi"
],
[
"anonymous reviewer c1e8"
],
[
"anonymous reviewer 915e"
],
[
"anonymous reviewer c1e8"
],
[
"Alan L. Yuille, Roozbeh Mottaghi"
]
],
"structured_content_str": [
"{\"reply\": \"Okay, thanks. We understand your viewpoint.\"}",
"{\"title\": \"review of Complexity of Representation and Inference in Compositional Models with\\n Part Sharing\", \"review\": \"This paper explores how inference can be done in a part-sharing model and the computational cost of doing so. It relies on 'executive summaries' where each layer only holds approximate information about the layer below. The authors also study the computational complexity of this inference in various settings.\\n\\nI must say I very much like this paper. It proposes a model which combines fast and approximate inference (approximate in the sense that the global description of the scene lacks details) with a slower and exact inference (in the sense that it allows exact inference of the parts of the model). Since I am not familiar with the literature, I cannot however judge the novelty of the work.\", \"pros\": [\"model which attractively combines inference at the top level with inference at the lower levels\", \"the analysis of the computational complexity for varying number of parts and objects is interesting\", \"the work is very conjectural but I'd rather see it acknowledged than hidden under toy experiments.\"], \"cons\": \"\"}",
"{\"review\": \"Reviewer c1e8,\\n\\nPlease read the authors' responses to your review. Do they change your evaluation of the paper?\"}",
"{\"reply\": \"The unsupervised learning will also appear at ICLR. So we didn't describe it in this paper and concentrated instead on the advantages of compositional models for search after the learning has been done.\\n\\nThe reviewer says that this result is not very novel and mentions analogies to complexity gain of large convolutional networks. This is an interesting direction to explore, but we are unaware of any mathematical analysis of convolutional networks that addresses these issues (please refer us to any papers that we may have missed). Since our analysis draws heavily on properties of compositional models -- explicit parts, executive summary, etc -- we are not sure how our analysis can be applied directly to convolutional networks. Certain aspects of our analysis also are novel to us -- e.g., the sharing of parts, the parallelization. \\n\\nIn summary, although it is plausible that compositional models and convolutional nets have good scaling properties, we are unaware of any other mathematical results demonstrating this.\"}",
"{\"reply\": \"Thanks for your comments. The paper is indeed conjectural which is why we are submitting it to this new type of conference. But we have some proof of content from some of our earlier work -- and we are working on developing real world models using these types of ideas.\"}",
"{\"reply\": \"Sorry: I should have written 'although I do not see it as very surprising' instead of 'novel'.\\n\\nThe analogy with convolutional networks is that quantities computed by low-level nodes can be shared by several high level nodes. This is trivial in the case of conv. nets, and not trivial in your case because you have to organize the search algorithm in a manner that leverages this sharing.\\n\\nBut I still like your paper because it gives 'a self-contained description of a sophisticated and conceptually sound object recognition system'. Although my personal vantage point makes the complexity result less surprising, the overall achievement is non trivial and absolutely worth publishing.\"}",
"{\"title\": \"review of Complexity of Representation and Inference in Compositional Models with\\n Part Sharing\", \"review\": \"This paper presents a complexity analysis of certain inference algorithms for compositional models of images based on part sharing.\\nThe intuition behind these models is that objects are composed of parts and that each of these parts can appear in many different objects; \\nwith sensible parallels (not mentioned explicitly by the authors) to typical sampling sets in image compression and to renormalization concepts in physics via model high-level executive summaries. \\nThe construction of hierarchical part dictionaries is an important and in my appreciation challenging prerequisite, but this is not the subject of the paper. \\n\\nThe authors discuss an approach for object detection and object-position inference exploiting part sharing and dynamic programming, \\nand evaluate its serial and parallel complexity. The paper gathers interesting concepts and presents intuitively-sound theoretical results that could be of interest to the ICLR community.\"}",
"{\"title\": \"review of Complexity of Representation and Inference in Compositional Models with\\n Part Sharing\", \"review\": \"The paper describe a compositional object models that take the form of a hierarchical generative models. Both object and part models provide (1) a set of part models, and (2) a generative model essentially describing how parts are composed. A distinctive feature of this model is the ability to support 'part sharing' because the same part model can be used by multiple objects and/or in various points of the object hierarchical description. Recognition is then achieved with a Viterbi search. The central point of the paper is to show how part sharing provides opportunities to reduce the computational complexity of the search because computations can be reused.\\n\\nThis is analogous to the complexity gain of a large convolutional network over a sliding window recognizer of similar architecture. Although I am not surprised by this result, and although I do not see it as very novel, this paper gives a self-contained description of a sophisticated and conceptually sound object recognition system. Stressing the complexity reduction associated with part sharing is smart because the search complexity became a central issue in computer vision. On the other hand, the unsupervised learning of the part decomposition is not described in this paper (reference [19]) and could have been relevant to ICLR.\"}",
"{\"reply\": \"We hadn't thought of renormalization or image compression. But renormalization does deal with scale (I think B. Gidas had some papers on this in the 90's). There probably is a relation to image compression which we should explore.\"}"
]
} |
ttnAE7vaATtaK | Indoor Semantic Segmentation using depth information | [
"Camille Couprie",
"Clement Farabet",
"Laurent Najman",
"Yann LeCun"
] | This work addresses multi-class segmentation of indoor scenes with RGB-D inputs. While this area of research has gained much attention recently, most works still rely on hand-crafted features. In contrast, we apply a multiscale convolutional network to learn features directly from the images and the depth information. We obtain state-of-the-art performance on the NYU-v2 depth dataset with an accuracy of 64.5%. We illustrate the labeling of indoor scenes in video sequences that could be processed in real time using appropriate hardware such as an FPGA. | [
"depth information",
"indoor scenes",
"features",
"indoor semantic segmentation",
"work",
"segmentation",
"inputs",
"area",
"research"
] | conferenceOral-iclr2013-conference | https://openreview.net/pdf?id=ttnAE7vaATtaK | https://openreview.net/forum?id=ttnAE7vaATtaK | ICLR.cc/2013/conference | 2013 | {
"note_id": [
"qO9gWZZ1gfqhl",
"tG4Zt9xaZ8G5D",
"OOB_F66xrPKGA",
"Ub0AUfEOKkRO1",
"VVbCVyTLqczWn",
"2-VeRGGdvD-58"
],
"note_type": [
"review",
"comment",
"comment",
"review",
"comment",
"review"
],
"note_created": [
1362163380000,
1363298100000,
1363297980000,
1362368040000,
1363297440000,
1362213660000
],
"note_signatures": [
[
"anonymous reviewer 777f"
],
[
"Camille Couprie"
],
[
"Camille Couprie"
],
[
"anonymous reviewer 5193"
],
[
"Camille Couprie"
],
[
"anonymous reviewer 03ba"
]
],
"structured_content_str": [
"{\"title\": \"review of Indoor Semantic Segmentation using depth information\", \"review\": \"Segmentation with multi-scale max pooling CNN, applied to indoor vision, using depth information. Interesting paper! Fine results.\", \"question\": \"how does that compare to multi-scale max pooling CNN for a previous award-winning application, namely, segmentation of neuronal membranes (Ciresan et al, NIPS 2012)?\"}",
"{\"reply\": \"Thank you for your review and helpful comments. We computed and added error bars as suggested in Table 1. However, computing standard deviation for the individual means per class of objects does not apply here: the per class accuracies are not computed image per image. Each number corresponds to a ratio of the total number of correctly classified pixels as a particular class, on the number of pixels belonging to this class in the dataset.\\nFor the pixel-wise accuracy, we now give the standard deviation in Table 1, as well as the median. As the two variances are equal using depth or not, we computed the statistical significance using a two sample t-test, that results in a t statistic equal to 1.54, which is far from the mean performance of 52.2 and thus we can consider that the two reported means are statistically significant. \\n\\nAbout the class-by class improvements displayed in Table 1, we discuss the fact that objects having a constant appearance of depth are in general more inclined to take benefit from depth information. As the major part of the scenes contains categories that respect this property, the improvements achieved using depth involve a smaller number of categories, but a larger volume of data. \\n\\nTo strengthen our comparison of the two networks using or not depth information, we now display the results obtained using only the multiscale network without depth information in Figure 2. \\n\\nWe hope that the changes that we made in the paper (which should be updated within the next 24 hours) answer your concerns.\"}",
"{\"reply\": \"Thank you for your review and helpful comments.\\nThe missing values in the depth acquisition were pre-processed using inpainting code available online on Nathan Siberman\\u2019s web page. We added the reference to the paper.\\n In the paper, we made the observation that the classes for which depth fails to outperform the RGB model are the classes of object for which the depth map does not vary too much. We now stress out better this observation with the addition of some depth maps at Figure 2. \\n\\nThe question you are raising about whether or not the depth is always useful, or if there could be better ways to leverage depth data is a very good question, and at the moment is still un-answered. The current RGBD multiscale network is the best way we found to learn features using depth, now maybe we could improve the system by introducing an appropriate contrast normalization of the depth map, or maybe we could combine the learned features using RGB and the learned features using RGBD\\u2026\"}",
"{\"title\": \"review of Indoor Semantic Segmentation using depth information\", \"review\": \"This work builds on recent object-segmentation work by Farabet et al., by augmenting the pixel-processing pathways with ones that processes a depth map from a Kinect RGBD camera. This work seems to me a well-motivated and natural extension now that RGBD sensors are readily available.\\n\\nThe incremental value of the depth channel is not entirely clear from this paper. In principle, the depth information should be valuable. However, Table 1 shows that for the majority of object types, the network that ignores depth is actually more accurate. Although the averages at the bottom of Table 1 show that depth-enhanced segmentation is slightly better, I suspect that if those averages included error bars (and they should), the difference would be insignificant. In fact, all the accuracies in Table 1 should have error bars on them. The comparisons with the work of Silberman et al. are more favorable to the proposed model, but again, the comparison would be strengthened by discussion of statistical confidence.\\n\\nQualitatively, I would have liked to see the ouput from the convolutional network of Farabet et al. without the depth channel, as a point of comparison in Figures 2 and 3. Without that point of comparison, Figures 2 and 3 are difficult to interpret as supporting evidence for the model using depth.\\n\\nPro(s)\\n- establishes baseline RGBD results with convolutional networks\\n \\nCon(s)\\n- quantitative results lack confidence intervals\\n- qualitative results missing important comparison to non-rgbd network\"}",
"{\"reply\": \"Thank you for your review and pointing out the paper of Ciresan et al., that we added to our list of references. Similarly to us, they apply the idea of using a kind of multi-scale network. However, Ciseran's approach to foveation differs from ours: where we use a multiscale pyramid to provide a foveated input to the network, they artificially blur the input's content, radially, and use non-uniform sampling to connect the network to it. The major advantage of using a pyramid is that the whole pyramid can be applied convolutionally, to larger input sizes. Once the model is trained, it must be applied as a sliding window to classify each pixel in the input. Using their method, which requires a radial blur centered on each pixel, the model cannot be applied convolutionally. This is a major difference, which dramatically impacts test time.\", \"note\": \"Ciseran's 2012 NIPS paper appeared after our first paper (ICML 2012) on the subject.\"}",
"{\"title\": \"review of Indoor Semantic Segmentation using depth information\", \"review\": \"This work applies convolutional neural networks to the task of RGB-D indoor scene segmentation. The authors previously evaulated the same multi-scale conv net architecture on the data using only RGB information, this work demonstrates that for most segmentation classes providing depth information to the conv net increases performance.\\n\\nThe model simply adds depth as a separate channel to the existing RGB channels in a conv net. Depth has some unique properties e.g. infinity / missing values depending on the sensor. It would be nice to see some consideration or experiments on how to properly integrate depth data into the existing model. \\n\\nThe experiments demonstrate that a conv net using depth information is competitive on the datasets evaluated. However, it is surprising that the model leveraging depth is not better in all cases. Discussion on where the RGB-D model fails to outperform the RGB only model would be a great contribution to add. This is especially apparent in table 1. Does this suggest that depth isn't always useful, or that there could be better ways to leverage depth data?\", \"minor_notes\": \"'modalityies' misspelled on page 1\", \"overall\": [\"A straightforward application of conv nets to RGB-D data, yielding fairly good results\", \"More discussion on why depth fails to improve performance compared to an RGB only model would strengthen the experimental findings\"]}"
]
} |
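
The record above and its reviews describe the model as a multiscale convolutional network whose input is simply RGB plus depth as a fourth channel, with missing depth values inpainted beforehand. The snippet below is a minimal sketch of that input preparation only; the per-image depth normalization is an assumption (the authors themselves list contrast normalization of the depth map as an open option), and the function name is illustrative.

```python
import numpy as np

def make_rgbd_input(rgb, depth):
    """Stack an RGB image and a depth map into one 4-channel array.

    rgb   -- (H, W, 3) float image
    depth -- (H, W) depth map, assumed already inpainted for missing values
    """
    d = (depth - depth.mean()) / (depth.std() + 1e-8)      # illustrative normalization
    return np.concatenate([rgb, d[..., None]], axis=-1)    # (H, W, 4) network input
```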
OpvgONa-3WODz | Metric-Free Natural Gradient for Joint-Training of Boltzmann Machines | [
"Guillaume Desjardins",
"Razvan Pascanu",
"Aaron Courville",
"Yoshua Bengio"
] | This paper introduces the Metric-Free Natural Gradient (MFNG) algorithm for training Boltzmann Machines. Similar in spirit to the Hessian-Free method of Martens [8], our algorithm belongs to the family of truncated Newton methods and exploits an efficient matrix-vector product to avoid explicitly storing the natural gradient metric $L$. This metric is shown to be the expected second derivative of the log-partition function (under the model distribution), or equivalently, the variance of the vector of partial derivatives of the energy function. We evaluate our method on the task of joint-training a 3-layer Deep Boltzmann Machine and show that MFNG does indeed have faster per-epoch convergence compared to Stochastic Maximum Likelihood with centering, though wall-clock performance is currently not competitive. | [
"natural gradient",
"boltzmann machines",
"mfng",
"algorithm",
"similar",
"spirit",
"martens",
"algorithm belongs",
"family",
"truncated newton methods"
] | conferencePoster-iclr2013-conference | https://openreview.net/pdf?id=OpvgONa-3WODz | https://openreview.net/forum?id=OpvgONa-3WODz | ICLR.cc/2013/conference | 2013 | {
"note_id": [
"LkyqLtotdQLG4",
"o5qvoxIkjTokQ",
"dt6KtywBaEvBC",
"pC-4pGPkfMnuQ"
],
"note_type": [
"review",
"review",
"review",
"review"
],
"note_created": [
1362012600000,
1362294960000,
1362379800000,
1363459200000
],
"note_signatures": [
[
"anonymous reviewer 9212"
],
[
"anonymous reviewer 7e2e"
],
[
"anonymous reviewer 77a7"
],
[
"Guillaume Desjardins, Razvan Pascanu, Aaron Courville, Yoshua Bengio"
]
],
"structured_content_str": [
"{\"title\": \"review of Metric-Free Natural Gradient for Joint-Training of Boltzmann Machines\", \"review\": \"The paper describes a Natural Gradient technique to train Boltzman machines. This is essentially the approach of Amari et al (1992) where the Fisher information matrix is expressed in which the authors estimate the Fisher information matrix L with examples sampled from the model distribution using a MCMC approach with multiple chains. The gradient g is estimated from minibatches, and the weight update x is obtained by solving Lx=g with an efficient truncated algorithm. Doing so naively would be very costly because the matrix L is large. The trick is to express L as the covariance of the Jacobian S with respect to the model distribution and take advantage of the linear nature of the sample average to estimate the product Lw in a manner than only requires the storage of the Jacobien for each sample.\\n\\nThis is a neat idea. The empirical results are preliminary but show promise. The proposed algorithm requires less iterations but more wall-clock time than SML. Whether this is due to intrinsic properties of the algorithm or to deficiencies of the current implementation is not clear.\"}",
"{\"title\": \"review of Metric-Free Natural Gradient for Joint-Training of Boltzmann Machines\", \"review\": \"This paper presents a natural gradient algorithm for deep Boltzmann machines. The authors must be commended for their extremely clear and succinct description of the natural gradient method in Section 2. This presentation is particularly useful because, indeed, many of the papers on information geometry are hard to follow. The derivations are also correct and sound. The derivations in the appendix are classical statistics results, but their addition is likely to improve readability of the paper.\\n\\nThe experiments show that the natural gradient approach does better than stochastic maximum likelihood when plotting estimated likelihood against epochs. However, per unit computation, the stochastic maximum likelihood method still does better. \\n\\nI was not able to understand remark 4 about mini-batches. Why are more parallel chains needed? Why not simply use a single chain but have longer memory. I strongly think this part of the paper could be improved if the authors write down the pseudo-code for their algorithm. Another suggestion is to use automatic algorithm configuration to find the optimal hyper-parameters for each method, given that they are so close.\\n\\nThe trade-offs of second order versus first order optimization methods are well known in the deterministic case. There is is also some theoretical guidance for the stochastic case. I encourage the authors to look at the following papers for this:\\n\\nA Stochastic Gradient Method with an Exponential Convergence Rate for Finite Training Sets. N. Le Roux, M. Schmidt, F. Bach. NIPS, 2012. \\n\\nHybrid Deterministic-Stochastic Methods for Data Fitting.\\nM. Friedlander, M. Schmidt. SISC, 2012. \\n\\n'On the Use of Stochastic Hessian Information in Optimization Methods for Machine Learning' R. Byrd, G. Chin and W. Neveitt, J. Nocedal.\\nSIAM J. on Optimization, vol 21, issue 3, pages 977-995 (2011).\\n\\n'Sample Size Selection in Optimization Methods for Machine Learning'\\nR. Byrd, G. Chin, J. Nocedal and Y. Wu. to appear in Mathematical Programming B (2012).\\n\\nIn practical terms, given that the methods are so close, how does the choice of implementation (GPUs, multi-cores, single machine) affect the comparison? Also, how data dependent are the results. I would be nice to gain a deeper understanding of the conditions under which the natural gradient might or might not work better than stochastic maximum likelihood when training Boltzmann machines.\\n\\nFinally, I would like to point out a few typos to assist in improving the paper:\", \"page_1\": \"litterature should be literature\\nSection 2.2 cte should be const for consistency.\", \"section_3\": \"Avoid using x instead of grad_N in the linear equation for Lx=E(.) This causes overloading. For consistency with the previous section, please use grad_N instead.\", \"section_4\": \"Add a space between MNIST and [7].\\nAppendix 5.1: State that the expectation is with respect to p_{\\theta}(x).\\nAppendix 5.2: The expectation with respect to q_\\theta should be with respect to p_{\\theta}(x) to ensure consistency of notation, and correctness in this case.\", \"references\": \"References [8] and [9] appear to be duplicates of the same paper by J. Martens.\"}",
"{\"title\": \"review of Metric-Free Natural Gradient for Joint-Training of Boltzmann Machines\", \"review\": \"This paper introduces a new gradient descent algorithm that combines is based on Hessian-free optimization, but replaces the approximate Hessian-vector product by an approximate Fisher information matrix-vector product. It is used to train a DBM, faster than the baseline algorithm in terms of epochs needed, but at the cost of a computational slowdown (about a factor 30). The paper is well-written, the algorithm is novel, although not fundamentally so.\\n\\nIn terms of motivation, the new algorithm aims to attenuate the effect of ill-conditioned Hessians, however that claim is weakened by the fact that the experiments seem to still require the centering trick. Also, reproducibility would be improved with pseudocode (including all tricks used) was provided in the appendix (or a link to an open-source implementation, even better).\", \"other_comments\": [\"Remove the phrase 'first principles', it is not applicable here.\", \"Is there a good reason to limit section 2.1 to a discrete and bounded domain X?\", \"I'm not a big fan of the naming a method whose essential ingredient is a metric 'Metric-free' (I know Martens did the same, but it's even less appropriate here).\", \"I doubt the derivation in appendix 5.1 is a new result, could be omitted.\", \"Hyper-parameter tuning is over a small ad-hoc set, and finally chosen values are not reported.\", \"Results should be averaged over multiple runs, and error-bars given.\", \"The authors could clarify how the algorithm complexity scales with problem dimension, and where the computational bottleneck lies, to help the reader judge its promise beyond the current results.\", \"A pity that it took longer than 6 weeks for the promised 'next revision', I had hoped the authors might resolve some of the self-identified weaknesses in the meanwhile.\"]}",
"{\"review\": \"Thank you to the reviewers for the helpful feedback. The provided references will no doubt come in handy for future work.\", \"to_all_reviewers\": \"In an effort to speedup run time, we have re-implemented a significant portion of the MFNG algorithm. This resulted in large speedups for the diagonal approximation of MFNG, and all around lower memory consumption. Unfortunately, this has delayed the submission of a new manuscript, which is still under preparation. The focus of this new revision will be on:\\n(1) reporting mean and standard deviations of Fig.1 across multiple seeds.\\n(2) a more careful use of damping and the use of annealed learning rates.\\n(3) results on a second dataset, and hopefully a second model family (Gaussian RBMs).\\n\\nIn the meantime, we have uploaded a new version which aims to clarify and provide additional technical details, where the reviewers had found it necessary. The main modifications are:\\n* a new algorithmic description of MFNG\\n* a new graph which analyzes runtime performance of the algorithm, breaking down the run-time performance between the various steps of the algorithm (sampling, gradient computation, matrix-vector product, and MinRes iterations).\\nThe paper should appear shortly on arXiv, and can be accessed here in the meantime:\", \"http\": \"//brainlogging.files.wordpress.com/2013/03/iclr2013_submission1.pdf\\n\\nAn open-source implementation of MFNG can be accessed at the following URL.\", \"https\": \"//github.com/gdesjardins/MFNG.git\", \"to_anonymous_7e2e\": \"There are numerous advantages to sampling from parallel chains (with fewer Gibbs steps between samples), compared to using consecutive (or sub-sampled) samples generated by a single Markov chain. First, running multiple chains guarantees that the samples are independent. Running a single chain will no doubt result in correlated samples which will negatively impact our estimates of the gradient and the metric. Second, simulating multiple chains is an implicitly parallel process, which can be implemented efficiently on both CPU and GPU (especially so on GPU). The downside however is in increase in memory consumption.\", \"to_anonymous_77a7\": \">> In terms of motivation, the new algorithm aims to attenuate the effect of ill-conditioned Hessians, however that claim is weakened by the fact that the experiments seem to still require the centering trick.\\n\\nSince ours is a natural gradient method, it attenuates the effect of ill-conditioned probability manifolds (expected hessian of log Z, under the model distribution), not ill-conditioning of the expected hessian (under the empirical distribution). It is thus possible that centering addresses the latter form of ill-conditioning. Another hypothesis is that centering provides a better initialization point, around which the natural gradient metric is better-conditioned and thus easier to invert. More experiments are required to answer these questions.\\n\\n>> Also, reproducibility would be improved with pseudocode (including all tricks used) was provided in the appendix (or a link to an open-source implementation, even better).\\n\\nOur source code and algorithmic description should shed some light on this issue. The only 'trick' we currently use is a fixed damping coefficient along the diagonal, to improve conditioning and speed up convergence of our solver. 
Alternative forms of initialization and preconditioning were not used in the experiments.\\n\\n>> Is there a good reason to limit section 2.1 to a discrete and bounded domain chi?\\n\\nThese limitations mostly reflect our interest with Boltzmann Machines. Generalizing these results to unbounded domains (or continuous variables) remains to be investigated.\\n\\n>> Hyper-parameter tuning is over a small ad-hoc set, and finally chosen values are not reported.\\n\\nThe results of our grid-search have been added to the caption of Figure 1.\"}"
]
} |
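
The MFNG record above hinges on one computational point: the metric L is the covariance of the per-sample energy gradients under the model distribution, so the product L v can be formed from two matrix-vector products without ever storing L. The sketch below illustrates just that product in plain NumPy; the names are illustrative and this is not the authors' implementation (their code is linked in the thread at github.com/gdesjardins/MFNG).

```python
import numpy as np

def metric_vector_product(S, v):
    """Return L @ v where L = cov(rows of S), without forming the P x P matrix L.

    S -- (N, P) array whose n-th row is dE(x_n; theta)/dtheta for a model sample x_n
    v -- (P,) vector to be multiplied by the natural-gradient metric
    """
    S_centered = S - S.mean(axis=0, keepdims=True)       # subtract the mean gradient
    return S_centered.T @ (S_centered @ v) / S.shape[0]  # (1/N) S_c^T (S_c v)
```

A truncated solver such as MinRes (mentioned in the runtime breakdown above) can then call this product repeatedly to solve L x = g for the update direction.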
yyC_7RZTkUD5- | Deep Predictive Coding Networks | [
"Rakesh Chalasani",
"Jose C. Principe"
] | The quality of data representation in deep learning methods is directly related to the prior model imposed on the representations; however, generally used fixed priors are not capable of adjusting to the context in the data. To address this issue, we propose deep predictive coding networks, a hierarchical generative model that empirically alters priors on the latent representations in a dynamic and context-sensitive manner. This model captures the temporal dependencies in time-varying signals and uses top-down information to modulate the representation in lower layers. The centerpiece of our model is a novel procedure to infer sparse states of a dynamic model, which is used for feature extraction. We also extend this feature extraction block to introduce a pooling function that captures locally invariant representations. When applied to natural video data, we show that our method is able to learn high-level visual features. We also demonstrate the role of the top-down connections by showing the robustness of the proposed model to structured noise. | [
"model",
"networks",
"priors",
"deep predictive",
"predictive",
"quality",
"data representation",
"deep learning methods",
"prior model",
"representations"
] | conferencePoster-iclr2013-workshop | https://openreview.net/pdf?id=yyC_7RZTkUD5- | https://openreview.net/forum?id=yyC_7RZTkUD5- | ICLR.cc/2013/conference | 2013 | {
"note_id": [
"d6u7vbCNJV6Q8",
"Xu4KaWxqIDurf",
"00ZvUXp_e10_E",
"iiUe8HAsepist",
"EEhwkCLtAuko7",
"o1YP1AMjPx1jv",
"XTZrXGh8rENYB",
"Za8LX-xwgqXw5",
"3vEUvBbCrO8cu"
],
"note_type": [
"review",
"review",
"comment",
"comment",
"review",
"comment",
"comment",
"review",
"review"
],
"note_created": [
1361968020000,
1363393200000,
1363392660000,
1363392180000,
1362405300000,
1363393020000,
1363393320000,
1362498780000,
1363392960000
],
"note_signatures": [
[
"anonymous reviewer ac47"
],
[
"Rakesh Chalasani, Jose C. Principe"
],
[
"Rakesh Chalasani, Jose C. Principe"
],
[
"Rakesh Chalasani, Jose C. Principe"
],
[
"anonymous reviewer 62ac"
],
[
"Rakesh Chalasani, Jose C. Principe"
],
[
"Rakesh Chalasani"
],
[
"anonymous reviewer 1829"
],
[
"Rakesh Chalasani, Jose C. Principe"
]
],
"structured_content_str": [
"{\"title\": \"review of Deep Predictive Coding Networks\", \"review\": \"Deep predictive coding networks\\n\\nThis paper introduces a new model which combines bottom-up, top-down, and temporal information to learning a generative model in an unsupervised fashion on videos. The model is formulated in terms of states, which carry temporal consistency information between time steps, and causes which are the latent variables inferred from the input image that attempt to explain what is in the image.\", \"pros\": \"Somewhat interesting filters are learned in the second layer of the model, though these have been shown in prior work.\\n\\nNoise reduction on the toy images seems reasonable.\", \"cons\": \"The explanation of the model was overly complicated. After reading the the entire explanation it appears the model is simply doing sparse coding with ISTA alternating on the states and causes. The gradient for ISTA simply has the gradients for the overall cost function, just as in sparse coding but this cost function has some extra temporal terms.\\n\\nThe noise reduction is only on toy images and it is not obvious if this is what you would also get with sparse coding using larger patch sizes and high amounts of sparsity. The explanation of points between clusters coming from change in sequences should also appear in the clean video as well because as the text mentions the video changes as well. This is likely due to multiple objects overlapping instead and confusing the model.\\n\\nFigure 1 should include the variable names because reading the text and consulting the figure is not very helpful currently.\\n\\nIt is hard to reason what each of the A,B, and C is doing without a picture of what they learn on typical data. The layer 1 features seem fairly complex and noisy for the first layer of an image model which typically learns gabor-like features.\\n\\nWhere did z come from in equation 11?\\n\\nIt is not at all obvious why the states should be temporally consistent and not the causes. The causes are pooled versions of the states and this should be more invariant to changes at the input between frames.\", \"novelty_and_quality\": \"The paper introduces a novel extension to hierarchical sparse coding method by incorporating temporal information at each layer of the model. The poor explanation of this relatively simple idea holds the paper back slightly.\"}",
"{\"review\": \"The revised paper is uploaded onto arXiv. It will be announced on 18th March.\\n\\nIn the mean time, the paper is also made available at\", \"https\": \"//www.dropbox.com/s/klmpu482q6nt1ws/DPCN.pdf\"}",
"{\"reply\": \"Thank you for you review and comments, particularly for pointing out some mistakes in the paper. Following is our response to some concerns you have raised.\\n\\n>>> 'You should state the functional form for F and G!! Working backwards from the energy function, it looks as if these are just linear functions?'\\n\\nWe use the generalized state-space equations in Eq.1 and Eq.2 to motivate the relation between the proposed model and dynamic networks. However, please note that it is difficult to state the explicit form of F and G, since sparsity constraint even on a linear dynamical system leads to a non-linear mapping between the observations and the states.\\n\\n>>> 'In Eq. 1 should F( x_t, u_t ) instead just be F( x_t )? Eqs. 3 and 4 suggest it should just be F( x_t ), and this would resolve points which I found confusing later in the paper.'\\n\\nAgreed. We made appropriate changes in the revised paper.\\n\\n>>> The relationship between the energy functions in eqs. 3 and 4 is confusing to me. (this may have to do with the (non?)-dependence of F on u_t)\\n\\nWe made this explicit in the revised paper. Eq.3 represents the energy function for inferring the x_t with fixed u_t and Eq.4 represents the energy function for inferring the u_t with fixed x_t. In order to be more clear, we now wrote a unified energy function (Eq. 5) from which we jointly infer both x_t and u_t. \\n\\n>>> 'Section 2.3.1, 'It is easy to show that this is equivalent to finding the mode of the distribution...': You probably mean MAP not mode. Additionally this is non-obvious. It seems like this would especially not be true after marginalizing out u_t. You've never written the joint distributions over p(x_t, y_t, x_t-1), and the role of the different energy functions was unclear.'\\n\\nAgreed, this statement is incorrect and is removed.\\n\\n>>> 'Section 3.1: In a linear mapping, how are 4 overlapping patches different from a single larger patch?'\\n\\nPlease note that the states from the 4 overlapping patches are pooled using a non-linear function (sum of the absolute value of the state vectors). Hence, the output is no longer a linear mapping.\\n\\n>>> 'Section 3.2: Do you do anything about the discontinuities which would occur between the 100-frame sequences?'\\n\\nNo, we simply consider the concatenated sequence as a single video. This is made more clear in the paper.\"}",
"{\"reply\": \"Thank you for your review and comments. We revised the paper to address most of your concerns. Following is our response to some specific point you have raised.\\n\\n>>> 'The explanation of the model was overly complicated. After reading the the entire explanation it appears the model is simply doing sparse coding with ISTA alternating on the states and causes. The gradient for ISTA simply has the gradients for the overall cost function, just as in sparse coding but this cost function has some extra temporal terms.'\\n\\nWe have made major changes to the paper to improve the presentation of the model. Hopefully the newer version will make the explanation more clear.\", \"we_would_also_like_to_emphasis_that__the_paper_makes_two_important_contributions\": \"(1) as you have pointed out, introduces sparse coding in dynamical models and solves it using a novel inference procedure similar to ISTA. (2) considers top-down information while performing inference in the hierarchical model.\\n\\n>>> 'The noise reduction is only on toy images and it is not obvious if this is what you would also get with sparse coding using larger patch sizes and high amounts of sparsity.'\\n\\nWe agree with you that it would strengthen our arguments by showing denoising on large images or videos. However, to scale this model to large images require convolutional network like model. This is an on going work and we are presently developing a convolutional model for DPCN.\\n\\n>>> 'The explanation of points between clusters coming from change in sequences should also appear in the clean video as well because as the text mentions the video changes as well. This is likely due to multiple objects overlapping instead and confusing the model.'\\n\\nCorrected. The points between the clusters appear because we enforce temporal coherence on the causes belonging two consecutive frames at the top layer (see Section 2.4). It is not due to gradual change in the sequences, as said previously.\\n\\n>>> 'Figure 1 should include the variable names because reading the text and consulting the figure is not very helpful currently.'\\n\\nCorrected. Also, a new figure is added to bring more clarity.\\n\\n>>> 'It is hard to reason what each of the A,B, and C is doing without a picture of what they learn on typical data. The layer 1 features seem fairly complex and noisy for the first layer of an image model which typically learns gabor-like features.'\\n\\nPlease see the supplementary material, section A.4 for visualization of the first layer parameters A, B and C. Also, please note that the Figure. 2 shows the visualization of the invariant matrices, B, in a two-layered network. These are obtained by taking the linear combination of Gabor like filters in C^(1) (see Figure .6) and hence, represent more complex structures. This is made more clear in the paper.\\n\\n>>> 'Where did z come from in equation 11?'\\n\\nCorrected. It is the Gaussian transition noise over the parameters.\\n\\n>>> 'It is not at all obvious why the states should be temporally consistent and not the causes. The causes are pooled versions of the states and this should be more invariant to changes at the input between frames.'\\n\\nWe say the states are more temporally 'consistent' to indicate that they are more stable than sparse coding, particularly in high sparsity conditions, because they have to maintain the temporal dependencies. 
On the other hand, we agree with you that the causes are more invariant to changes in the input and hence, are temporally 'coherent'.\"}",
"{\"title\": \"review of Deep Predictive Coding Networks\", \"review\": \"This paper attempts to capture both the temporal dynamics of signals and the contribution of top down connections for inference using a deep model. The experimental results are qualitatively encouraging, and the model structure seems like a sensible direction to pursue. I like the connection to dynamical systems. The mathematical presentation is disorganized though, and it would have been nice to see some sort of benchmark or externally meaningful quantitative comparison in the experimental results.\", \"more_specific_comments\": \"You should state the functional form for F and G!! Working backwards from the energy function, it looks as if these are just linear functions?\\n\\nIn Eq. 1 should F( x_t, u_t ) instead just be F( x_t )? Eqs. 3 and 4 suggest it should just be F( x_t ), and this would resolve points which I found confusing later in the paper.\\n\\nThe relationship between the energy functions in eqs. 3 and 4 is confusing to me. (this may have to do with the (non?)-dependence of F on u_t)\\n\\nSection 2.3.1, 'It is easy to show that this is equivalent to finding the mode of the distribution...': You probably mean MAP not mode. Additionally this is non-obvious. It seems like this would especially not be true after marginalizing out u_t. You've never written the joint distributions over p(x_t, y_t, x_t-1), and the role of the different energy functions was unclear.\\n\\nSection 3.1: In a linear mapping, how are 4 overlapping patches different from a single larger patch?\\n\\nSection 3.2: Do you do anything about the discontinuities which would occur between the 100-frame sequences?\"}",
"{\"reply\": \"Thank you for review and comments. We revised the paper to address most of your concerns. Following is our response to some specific point you have raised.\\n\\n>>> ' The clarity of the paper needs to be improved. For example, it will be helpful to motivate more clearly about the specific formulation of the model' \\n\\nWe made some major changes to improve the presentation of the model, with more emphasis on explaining the formulation. Hopefully the revised version will improve the clarity of the paper. \\n\\n>>> ' The empirical evaluation of the model could be strengthened by directly comparing the DPCN to related works on non-synthetic datasets.'\\n\\nWe agree that the empirical evaluation could be strengthened by comparing DPCN with other models in tasks like denoising, classification etc., on large image and video datasets. However, to scale this model to larger inputs we require convolutional network like models, similar to many other methods. This is an on going work and we are presently working on a convolutional model for DPCN. \\n\\n>>>'In the beginning of the section 2.1, please define P, D, K to improve clarity. \\n>>> In section 2.2, little explanation about the pooling matrix B is given. Also, more explanations about equation 4 would be desirable. \\n>>> What is z_{t} in Equation 11?' \\n\\nCorrected. These are explained more clearly in the revised paper. z_{t} is the Gaussian transition noise over the parameters. \\n\\n>>> 'In Section 2.2, its not clear how u_{hat} is computed. ' \\n\\nThis is moved into section. 2.4 in the revised paper, where more explanation is provided about u_{hat}.\"}",
"{\"reply\": \"This is in reply to reviewer 1829, mistakenly pasted here. Please ignore.\"}",
"{\"title\": \"review of Deep Predictive Coding Networks\", \"review\": \"A brief summary of the paper's contributions, in the context of prior work.\\nThe paper proposes a hierarchical sparse generative model in the context of a dynamical system. The model can capture temporal dependencies in time-varying data, and top-down information (from high-level contextual/causal units) can modulate the states and observations in lower layers. \\n\\nExperiments were conducted on a natural video dataset, and on a synthetic video dataset with moving geometric shapes. On the natural video dataset, the learned receptive fields represent edge detectors in the first layer, and higher-level concepts such as corners and junctions in the second layer. In the synthetic sequence dataset, hierarchical top-down inference is used to robustly infer about \\u201ccausal\\u201d units associated with object shapes.\\n\\n\\nAn assessment of novelty and quality.\\nThis work can be viewed as a novel extension of hierarchical sparse coding to temporal data. Specifically, it is interesting to see how to incorporate dynamical systems into sparse hierarchical models (that alternate between state units and causal units), and how the model can perform bottom-up/top-down inference. The use of Nestrov\\u2019s method to approximate the non-smooth state transition terms in equation 5 is interesting.\\n\\nThe clarity of the paper needs to be improved. For example, it will be helpful to motivate more clearly about the specific formulation of the model (also, see comments below). \\n\\nThe experimental results (identifying high-level causes from corrupted temporal data) seem quite reasonable on the synthetic dataset. However, the results are all too qualitative. The empirical evaluation of the model could be strengthened by directly comparing the DPCN to related works on non-synthetic datasets.\", \"other_questions_and_comments\": [\"In the beginning of the section 2.1, please define P, D, K to improve clarity.\", \"In section 2.2, little explanation about the pooling matrix B is given. Also, more explanations about equation 4 would be desirable.\", \"What is z_{t} in Equation 11?\", \"In Section 2.2, it\\u2019s not clear how u_hat is computed.\", \"A list of pros and cons (reasons to accept/reject).\"], \"pros\": [\"The formulation and the proposed solution are technically interesting.\", \"Experimental results on a synthetic video data set provide a proof-of-concept demonstration.\"], \"cons\": [\"The significance of the experiments is quite limited. There is no empirical comparison to other models on real tasks.\", \"Inference seems to be complicated and computationally expensive.\", \"Unclear presentation\"]}",
"{\"review\": \"Thank you for review and comments. We revised the paper to address most of your concerns. Following is our response to some specific point you have raised.\\n\\n>>> ' The clarity of the paper needs to be improved. For example, it will be helpful to motivate more clearly about the specific formulation of the model'\\n\\nWe made some major changes to improve the presentation of the model, with more emphasis on explaining the formulation. Hopefully the revised version will improve the clarity of the paper.\\n\\n>>> ' The empirical evaluation of the model could be strengthened by directly comparing the DPCN to related works on non-synthetic datasets.'\\n\\nWe agree that the empirical evaluation could be strengthened by comparing DPCN with other models in tasks like denoising, classification etc., on large image and video datasets. However, to scale this model to larger inputs we require convolutional network like models, similar to many other methods. This is an on going work and we are presently working on a convolutional model for DPCN. \\n\\n>>>'In the beginning of the section 2.1, please define P, D, K to improve clarity. \\n>>> In section 2.2, little explanation about the pooling matrix B is given. Also, more explanations about equation 4 would be desirable.\\n>>> What is z_{t} in Equation 11?'\\n\\nCorrected. These are explained more clearly in the revised paper. z_{t} is the Gaussian transition noise over the parameters. \\n\\n>>> 'In Section 2.2, its not clear how u_{hat} is computed. '\\n\\nThis is moved into section. 2.4 in the revised paper, where more explanation is provided about u_{hat}.\"}"
]
} |
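
One reviewer in the record above summarizes DPCN inference as sparse coding solved with ISTA, alternating between states and causes, with extra temporal and top-down terms in the cost. For reference, the sketch below is the generic ISTA iteration for the plain sparse-coding subproblem only; it omits the paper's additional terms, and all names are illustrative rather than taken from the authors' code.

```python
import numpy as np

def ista(y, C, lam, n_iter=100):
    """Minimize 0.5 * ||y - C x||^2 + lam * ||x||_1 by iterative soft-thresholding.

    y -- (p,) observation, C -- (p, k) dictionary, lam -- sparsity weight.
    """
    x = np.zeros(C.shape[1])
    step = 1.0 / np.linalg.norm(C, 2) ** 2                        # 1 / Lipschitz constant of C^T C
    for _ in range(n_iter):
        grad = C.T @ (C @ x - y)                                  # gradient of the smooth term
        z = x - step * grad                                       # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold
    return x
```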
zzEf5eKLmAG0o | Learning Features with Structure-Adapting Multi-view Exponential Family Harmoniums | [
"YoonSeop Kang",
"Seungjin Choi"
] | We propose a graphical model for multi-view feature extraction that automatically adapts its structure to achieve a better representation of the data distribution. The proposed model, the structure-adapting multi-view harmonium (SA-MVH), has switch parameters that control the connections between hidden nodes and input views, and learns the switch parameters during training. Numerical experiments on synthetic data and a real-world dataset demonstrate the useful behavior of SA-MVH compared to existing multi-view feature extraction methods. | [
"features",
"exponential family harmoniums",
"graphical model",
"feature extraction",
"structure",
"better representation",
"data distribution",
"model",
"harmonium",
"parameters"
] | conferencePoster-iclr2013-workshop | https://openreview.net/pdf?id=zzEf5eKLmAG0o | https://openreview.net/forum?id=zzEf5eKLmAG0o | ICLR.cc/2013/conference | 2013 | {
"note_id": [
"UUlHmZjBOIUBb",
"tt7CtuzeCYt5H",
"qqdsq7GUspqD2",
"DNKnDqeVJmgPF"
],
"note_type": [
"review",
"comment",
"comment",
"review"
],
"note_created": [
1362353160000,
1363857240000,
1363857540000,
1360866060000
],
"note_signatures": [
[
"anonymous reviewer d966"
],
[
"YoonSeop Kang"
],
[
"YoonSeop Kang"
],
[
"anonymous reviewer 0e7e"
]
],
"structured_content_str": [
"{\"title\": \"review of Learning Features with Structure-Adapting Multi-view Exponential Family\\n Harmoniums\", \"review\": \"The paper introduces an new algorithm for simultaneously learning a hidden layer (latent representation) for multiple data views as well as automatically segmenting that hidden layer into shared and view-specific nodes. It builds on the previous multi-view harmonium (MVH) algorithm by adding (sigmoidal) switch parameters that turn a connection on or off between a view and hidden node and uses gradient descent to learn those switch parameters. The optimization is similar to MVH, with a slight modification on the joint distribution between views and hidden nodes, resulting in a change in the gradients for all parameters and a new switch variable to descend on.\\n\\nThis new algorithm, therefore, is somewhat novel; the quality of the explanation and writing is high; and the experimental quality is reasonable.\\n\\nPros\\n\\n1. The paper is well-written and organized.\\n\\n2. The algorithm in the paper proposes a way to avoid hand designing shared and private (view-specific) nodes, which is an important contribution.\\n\\n3. The experimental results indicate some interesting properties of the algorithm, in particular demonstrating that the algorithm extracts reasonable shared and view-specific hidden nodes.\\n\\nCons\\n1. The descent directions have W and the switch parameters, s_kj, coupled, which might make learning slow. Experimental results should indicate computation time.\\n\\n2. The results do not have error bars (in Table 1), so it is unclear if they are statistically significant (the small difference suggests that they may not be).\\n\\n3. The motivation in this paper is to enable learning of the private and shared representations automatically. However, DWH (only a shared representation) actually seems to perform generally better that MVH (shared and private). The experiments should better explore this question. It might also be a good idea to have a baseline comparison with CCA. \\n\\n4. In light of Con (3), the algorithm should also be compared to multi-view algorithms that learn only shared representations but do not require the size of the hidden-node set to be fixed (such as the recent relaxed-rank convex multi-view approach in 'Convex Multiview Subspace Learning', M. White, Y. Yu, X. Zhang and D. Schuurmans, NIPS 2012). In this case, the relaxed-rank regularizer does not fix the size of the hidden node set, but regularizes to set several hidden nodes to zero. This is similar to the approach proposed in this paper where a node is not used if the sigmoid value is < 0.5. \\nNote that these relaxed-rank approaches do not explicitly maximize the likelihood for an exponential family distribution; instead, they allow general Bregman divergences which have been shown to have a one-to-one correspondence with exponential family distributions (see 'Clustering with Bregman divergences' A. Banerjee, S. Merugu, I. Dhillon and J. Ghosh, JMLR 2005). Therefore, by selecting a certain Bregman divergence, the approach in this paper can be compared to the relaxed-rank approaches.\"}",
"{\"reply\": \"1. The distribution of sigma(s_{kj}) had modes near 0 and 1, but the graph of the distribution was omitted due to the space constraints. The amount of separation between modes were affected by the hyperparameters that were not mentioned in the paper.\\n\\n2. It is true that the separation between digit features and noises in our model is not perfect. But it is also true that view-specific features contain more noisy features than the shared ones. \\nWe appreciate your suggestions about the additional experiments about de-noising digits, and we will present the result of the experiments if we get a chance.\"}",
"{\"reply\": \"1. As the switch parameters converge quickly, the training time of our model was not very different from that of DWH.\\n2. We performed the experiment several times, but the result was consistent. Still, it is our fault that we didn't repeat the experiments enough to add error bars to the results.\\n3. MVHs are often outperformed by DWHs unless the sizes of latent node sets are not carefully chosen, and this is one of the most important reason for introducing switch parameters. To make our motivation clear, we assigned 50% of hidden nodes as shared, and evenly assigned the rest of hidden nodes as visible nodes for view-specific nodes of each view. We didn't compare our method to CCA, because we thought DWH would be a better example of models with only a shared representation.\\n4. We were not aware of the White et al.'s work when we submitted our work, and therefore couldn't make comparison with their model.\"}",
"{\"title\": \"review of Learning Features with Structure-Adapting Multi-view Exponential Family\\n Harmoniums\", \"review\": \"The authors propose a bipartite, undirected graphical model for multiview learning, called structure-adapting multiview harmonimum (SA-MVH). The model is based on their earlier model called multiview harmonium (MVH) (Kang&Choi, 2011) where hidden units were separated into a shared set and view-specific sets. Unlike MVH which explicitly restricts edges, the visible and hidden units in the proposed SA-MVH are fully connected to each other with switch parameters s_{kj} indicating how likely the j-th hidden unit corresponds to the k-th view.\\n\\nIt would have been better if the distribution of s_{kj}'s (or sigma(s_{kj})) was provided. Unless the distribution has clear modes near 0 and 1, it would be difficult to tell why this approach of learning w^{(k)}_{ij} and s_{kj} separately is better than just learning \\tilde{w}^{(k)}_{ij} = w^{(k)}_{ij} sigma s_{kj} all together (as in dual-wing harmonium, DWH). Though, the empirical results (experiment 2) show that the features extracted by SA-MVH outperform both MVH and DWH.\\n\\nThe visualizations of shared and view-specific features from the first experiment do not seem to clearly show the power of the proposed method. For instance, it's difficult to say that the filters of roman digits from the shared features do seem to have horizontal noise. It would be better to try some other tasks with the trained model. Would it be possible to sample clean digits (without horizontal or vertical noise) from the model if the view-speific features were forced off? Would it be possible to denoise the corrupted digits? and so on..\", \"typo\": [\"Fig. 1 (c): sigma(s_{1j}) and sigma(s_{2j})\"]}"
]
} |
mLr3In-nbamNu | Local Component Analysis | [
"Nicolas Le Roux",
"Francis Bach"
] | Kernel density estimation, a.k.a. Parzen windows, is a popular density estimation method, which can be used for outlier detection or clustering. With multivariate data, its performance is heavily reliant on the metric used within the kernel. Most earlier work has focused on learning only the bandwidth of the kernel (i.e., a scalar multiplicative factor). In this paper, we propose to learn a full Euclidean metric through an expectation-minimization (EM) procedure, which can be seen as an unsupervised counterpart to neighbourhood component analysis (NCA). In order to avoid overfitting with a fully nonparametric density estimator in high dimensions, we also consider a semi-parametric Gaussian-Parzen density model, where some of the variables are modelled through a jointly Gaussian density, while others are modelled through Parzen windows. For these two models, EM leads to simple closed-form updates based on matrix inversions and eigenvalue decompositions. We show empirically that our method leads to density estimators with higher test-likelihoods than natural competing methods, and that the metrics may be used within most unsupervised learning techniques that rely on such metrics, such as spectral clustering or manifold learning methods. Finally, we present a stochastic approximation scheme which allows for the use of this method in a large-scale setting. | [
"parzen windows",
"kernel",
"metrics",
"popular density estimation",
"outlier detection",
"clustering",
"multivariate data",
"performance",
"reliant"
] | conferencePoster-iclr2013-conference | https://openreview.net/pdf?id=mLr3In-nbamNu | https://openreview.net/forum?id=mLr3In-nbamNu | ICLR.cc/2013/conference | 2013 | {
"note_id": [
"D1cO7TgVjPGT9",
"pRFvp6BDvn46c",
"iGfW_jMjFAoZQ",
"c2pVc0PtwzcEK"
],
"note_type": [
"review",
"review",
"review",
"review"
],
"note_created": [
1361300640000,
1362491220000,
1362428640000,
1364253000000
],
"note_signatures": [
[
"anonymous reviewer 71f4"
],
[
"anonymous reviewer 61c0"
],
[
"anonymous reviewer 18ca"
],
[
"Nicolas Le Roux, Francis Bach"
]
],
"structured_content_str": [
"{\"title\": \"review of Local Component Analysis\", \"review\": \"In this paper, the authors consider unsupervised metric learning as a\\ndensity estimation problem with a Parzen windows estimator based on \\nEuclidean metric. They use maximum likelihood method and EM algorithm\\nfor deriving a method that may be considered as an unsupervised counterpart to neighbourhood component analysis. Various versions of the method provide good results in the clustering problems considered.\\n\\n+ Good and interesting conference paper.\\n+ Certainly novel enough.\\n- Modifications are needed to combat the problems of overfitting,\\nlocal minima, and computational load in the basic approach proposed.\\nSome of these improvements are heuristic or seem to require hand-tuning.\", \"specific_comments\": \"- The authors should refer to the paper S. Kaski and J. Peltonen,\\n'Informative discriminant analysis', in T. Fawcett and N. Mishna (Eds.),\\nProc. of the 20th Int. Conf. on Machine Learning (ICML 2003), pp. 329-336,\\nAAAI Press, Menlo Park, CA, 2003.\\nIn this paper, essentially the same technique as Neighbourhood Component\\nAnalysis is defined under the name Informative discriminant analysis\\none year prior to the paper by Goldberger et al., your reference [16].\\n\\n- In the beginning of page 6 the authors state: 'Following [1, 2], the data\\nis progressively corrupted by adding dimensions of white Gaussian noise,\\nthen whitened.' In this case, whitening amplifies Gaussian noise, so that\\nit has the same power as the underlying data. Obviously this is the reason\\nwhy the experimental results approach to a random guess when the dimensions of the white noise increase sufficiently. The authors should mention that in real-world applications, one should not use whitening in this kind of situations, but rather compress the data using for example principal component analysis (PCA) without whitening for getting rid of the extra dimensions corresponding to white Gaussian noise. Or at least use the data as such without any whitening.\"}",
"{\"title\": \"review of Local Component Analysis\", \"review\": \"Summary of contributions:\\nThe paper presents a robust algorithm for density estimation. The main idea is to model the density into a product of two independent distributions: one from a Parzen windows estimation (for modeling a low dimensional manifold) and the other from a Gaussian distribution (for modeling noise). Specifically, leave-one-out log-likelihood is used as the objective function of Parzen window estimator, and the joint model can be optimized using Expectation Maximization algorithm. In addition, the paper presents an analytical solution for M-step using eigen-decomposition. The authors also propose several heuristics to address local optima problems and to improve computational efficiency. The experimental results on synthetic data show that the proposed algorithm is indeed robust to noise.\", \"assessment_on_novelty_and_quality\": \"\", \"novelty\": \"This paper seems to be novel. The main ideas (using leave-one-out log-likelihood and decomposing the density as a product of Parzen windows estimator and a Gaussian distribution) are very interesting.\", \"quality\": \"The paper is clearly written. The method is well motivated, and the technical solutions are quite elegant and clearly described. The paper also presents important practical tips on addressing local optima problems and speeding up the algorithm. \\n\\nIn experiments, the proposed algorithm works well when noise dimensions increase in the data. The experiments are reasonably convincing, but they are limited to very low-dimensional, toy data. Evaluation on more real-world datasets would have been much more compelling. Without such evaluation, it\\u2019s unclear how the proposed method will perform on real data.\\n\\nAlthough interesting, the assumption about modeling the data density as a product of two independent distributions can be too strong and unrealistic. For example, how can this model handle the cases when noise are added to the low-dimensional manifold, not as orthogonal \\u201cnoise dimension\\u201d?\", \"other_comments\": [\"Figure 1 is not very interesting since even NCA will learn near-isotropic covariance, and the baseline method seems to be PCA whitening, not PCA.\"], \"pros_and_cons\": \"\", \"pros\": [\"The paper seems sufficiently novel.\", \"The main approach and solution are technically interesting.\", \"The experiments show proof-of-concept (albeit limited) demonstration that the proposed method is robust to noise dimensions (or irrelevant features).\"], \"cons\": [\"The experiments are limited to very low-dimensional, toy datasets. Evaluation on more real-world datasets would have been much more compelling. Without such evaluation, it\\u2019s unclear how the proposed method will perform on real data.\", \"The assumption about modeling the data density as a product of two independent distributions can be too strong and unrealistic (see comments above).\"]}",
"{\"title\": \"review of Local Component Analysis\", \"review\": \"Summary of contributions:\\n1. The paper proposed an unsupervised local component analysis (LCA) framework that estimates the Parzen window covariance via maximizing the leave-one-out density. The basic algorithm is an EM procedure with closed form updates. \\n\\n2. One further extension of LCA was introduced, which assumes two multiplicative densities, one is Parzen window (non Gaussian) and the other is a global Gaussian distribution. \\n\\n3. Algorithms was designed to scale up the algorithms to large data sets.\", \"assessment_of_novelty_and_quality\": \"The work looks quite reasonable. But the approach seems to be a bit straightforward. The work is perhaps not very deep or inspiring. \\n\\nMy major concern is, other than the described problem setting being tackled, mostly toy problems, I don't see the significance of the work for addressing major machine learning challenges. For example, the authors argued the approach might be a good preprocessing step, but in the experiments, there is nothing like improving machine learning (e.g. classification) via such a pre-processing of data. \\n\\nIt's disappointing to see that the authors didn't study the identifiability of the Parzen/Gaussian model. Addressing this issue should have been a good chance to show some depth of the research.\"}",
"{\"review\": \"First, we would like to thank the reviewers for their comments.\\n\\nThe main complaint was that the experiments were limited to toy problems. Since it is always hard to evaluate unsupervised learning algorithms (what is the metric of performance), the experiments were designed as a proof of concept. Hence, we agree with the reviewers and would love to see LCA tried and evaluated on real problems.\\n\\nFor the comment about the required modifications to avoid overfitting, there is truly only one parameter to set, i.e., the lambda parameter. All the others can easily be set to default values.\"}"
]
} |
OOuGtqpeK-cLI | Pushing Stochastic Gradient towards Second-Order Methods -- Backpropagation Learning with Transformations in Nonlinearities | [
"Tommi Vatanen",
"Tapani Raiko",
"Harri Valpola",
"Yann LeCun"
] | Recently, we proposed to transform the outputs of each hidden neuron in a multi-layer perceptron network to have zero output and zero slope on average, and use separate shortcut connections to model the linear dependencies instead. We continue the work by firstly introducing a third transformation to normalize the scale of the outputs of each hidden neuron, and secondly by analyzing the connections to second order optimization methods. We show that the transformations make a simple stochastic gradient behave closer to second-order optimization methods and thus speed up learning. This is shown both in theory and with experiments. The experiments on the third transformation show that while it further increases the speed of learning, it can also hurt performance by converging to a worse local optimum, where both the inputs and outputs of many hidden neurons are close to zero. | [
"transformations",
"outputs",
"stochastic gradient",
"methods",
"backpropagation",
"nonlinearities",
"hidden neuron",
"experiments",
"perceptron network",
"output"
] | conferencePoster-iclr2013-workshop | https://openreview.net/pdf?id=OOuGtqpeK-cLI | https://openreview.net/forum?id=OOuGtqpeK-cLI | ICLR.cc/2013/conference | 2013 | {
"note_id": [
"cAqVvWr0KLv0U",
"og9azR3sTxoul",
"Id_EI3kn5mX4i",
"8PUQYHnMEx8CL"
],
"note_type": [
"review",
"review",
"review",
"review"
],
"note_created": [
1362183240000,
1362399720000,
1362387060000,
1363039740000
],
"note_signatures": [
[
"anonymous reviewer 1567"
],
[
"anonymous reviewer b670"
],
[
"anonymous reviewer c3d4"
],
[
"Tommi Vatanen, Tapani Raiko, Harri Valpola, Yann LeCun"
]
],
"structured_content_str": [
"{\"title\": \"review of Pushing Stochastic Gradient towards Second-Order Methods -- Backpropagation Learning with Transformations in Nonlinearities\", \"review\": \"In [10], the authors had previously proposed modifying the network\\nparametrization, in order to ensure zero-mean hidden unit activations across training examples (activity centering) and zero-mean derivatives (slope centering). This was achieved by introducing skip-connections between layers l-1 and l+1 and adding linear components to the non-linearity of layer l: these new parameters aren't learnt however, but instead are adjusted deterministically to enforce activity and slope centering. These ideas had initially been proposed by Schraudolph in earlier work, with [10] showing that these tricks significantly improved convergence of deep networks while also making the connection to second order methods.\\n\\nIn this work, the authors proposed adding an extra scaling parameter to the non-linearity, which is adjusted in order to make the digonal terms of the Hessian / Fisher Information matrix closer to unity. The authors study the effect of these 3 transformations by: \\n(1) measuring properties of the Hessian matrix with and without transformations, as well as angular distance of the resulting gradients to 2nd order gradients;\\n(2) comparing the overall classification convergence speed for a 2 and 3 layer MLPs on MNIST and finally;\\n(3) studying its effect on a deep auto-encoder.\\n\\nWhile I find this research direction particularly interesting, I find the \\noverlap between this paper and [10] to be rather troubling. While their analysis of slope / activity centering is new (and a more direct test of their \\nhypothesis), I feel that the case for these transformations had already been\\nmade in [10]. More importantly, evidence for the 3rd transformation is rather weak: it seems to slightly help convergence of 3-layer models and also helps in making the diagonal elements of the Hessian more unimodal. However, including gamma seem to rotate gradients *away* from 2nd order gradients. Also, their method did not seem to help in the deep auto-encoder setting: using gamma in the encoder network did not improve convergence speed, while using gamma in both encoders/decoders led to gamma either blowing-up or going to zero. While you would expect a diagonal approximation to a second-order method to help with the problem of dead-units, adding gamma did not seem to help in this respect. \\n\\nSimilarities between this paper and [10] are also evident in the writing itself. Large portions of Sections 1, 2 and 3 appear verbatim in [10]. This needs to be addressed prior to publication. The math of Section 3 could also be simplified by writing out gradients of log p (for each parameter \\theta) and then simply stating the general form of the FIM as E_eps[ dlogp/dtheta^T dlogp / dtheta]. As it stands Eqs. (12-17) are slightly inaccurate, as elements of the FIM should include an expectation over epsilon.\", \"summary\": \"I find the direction promising but the conclusion to be somewhat confusing / disappointing. The premise for gamma seemed well motivated and I expected more concrete evidence explaining the need for this transformation. 
Unfortunately, I am left wondering where things went wrong: some missing theoretical insight, wrong update rule on gamma or other ?\", \"other\": [\"Authors should consider using df/dx instead of the more ambiguous f' notation.\", \"Could the authors clarify what they mean by: 'transforming the model instead of the gradient makes it easier to generalize to other contexts such as variational Bayes ?' One downside I see to transforming the model instead of the gradients is that it obfuscates the link to second order methods and might thus hide useful insights.\", \"Section 4: 'algorith' -> algorithm\"]}",
"{\"title\": \"review of Pushing Stochastic Gradient towards Second-Order Methods -- Backpropagation Learning with Transformations in Nonlinearities\", \"review\": \"This paper builds on previous work by the same authors that looks at performing dynamic reparameterizations of neural networks to improve training efficiency. The previously published approach is augmented with an additional parameter (gamma) which, although it is argued should help in theory, doesn't seem to in practice. Theoretical arguments for why the standard gradient computed under this reparameterization will be closer to a 2nd-order update are made, and experiments are conducted. While the theoretical arguments are pretty weak in my opinion (see detailed comments below), the experiments that looks at eigenvalues of the Hessian are somewhat more convincing, although they indicate that the originally published approach, without the gamma modification, is doing a better job.\", \"pros\": [\"reasonably well written\", \"experiments looking at eigenvalue distributions are interesting\"], \"cons\": \"- actual method is similar to authors' previous work in [10] and the older method of Schraudolph [12]\\n- the new modification doesn't seem to improve training efficiency, and even makes the eigenvalue distribution worse\\n- there seem to be problems with the theoretical analysis (maybe the authors can address this in their response?)\\n\\n\\n///// Detailed comments \\\\\\\\\\n\\nBecause it sounds similar to what you're doing, I think it would be helpful to give a slightly more detailed description of Schraudolph's 'gradient factor centering'. Does it correspond exactly to what you are doing in the case of neural nets? And if so, could you give an interesting example of how to apply your method to other models where Schraudolph's method would no longer apply? \\n\\nI don't understand what you mean by 'many competing paths' at the bottom of page 2. \\n\\nAnd when talking about 'linear dependencies' from x to y, what exactly do you mean? Do you mean the 1st-order components of the Taylor series of the true mapping or something else? Also, you might want to use affine when discussing functions that are linear + constant to be more technically precise.\\n\\nCan the arguments in section 3 be applied to network with more than 1 hidden layer?\\n\\nA concern I have with the analysis in section 3 is that, while assuming uncorrelated hidden unit outputs might be somewhat sensible (although I feel that our intuitions about how neural networks model certain mappings - such as 'representing different things' may be inaccurate), it seems less reasonable to assume that inputs (x) are uncorrelated with the outputs of the units, which seems to be needed to show that off-diagonal terms are zero (other than for eqn 12). You also seem to assume that certain 1st-derivatives of unit outputs are uncorrelated with various quantities (inputs, other unit outputs, and unit derivatives), which I don't think follows from the assumptions about the outputs of the units being uncorrelated with each other (but if this is indeed true, you should prove it or provide a reference). I think you should apply more rigor to these arguments for them to be convincing.\\n\\nI would recommend using an exact method to compute the Hessian. For example, you can compute it using n matrix-vector products, and tools for computing these automatically for any computational graph are widely available, as are particular formulae for neural networks. 
Such a method would be no more costly than what you are doing now, which involves n gradient computations.\\n\\nThe discussion surrounding equation 19 is an somewhat inaccurate and oversimplified account of the role that a constant like mu has in a second-order update rule like eqn. 19. This is a well studied and highly complex problem which doesn't really have to do with issues surrounding the inversion of the Hessian 'blowing up' so much as the problems of break-downs in model trust that occur when computing proposals based on local quadratic models of the objective. \\n\\nYour experiments seem to suggest that the eigenvalues are more even when you leave out the gamma parameter. How do you reconcile this with your theoretical analysis?\\n\\nWhy do you show a histogram of diagonal elements as opposed to eigenvalues in figure 2? I would argue that the concentration of the eigenvalues is a much better indicator of how close the Hessian matrix is to the identity (and hence how close the gradient is to being the same as a 2nd-order update) than what the diagonal entries look like. The diagonal entries of a highly non-diagonal matrix aren't particularly meaningful to look at.\\n\\nAlso, since your analysis was done using the Fisher, why not examine this matrix instead of the Hessian in your experiments?\"}",
"{\"title\": \"review of Pushing Stochastic Gradient towards Second-Order Methods -- Backpropagation Learning with Transformations in Nonlinearities\", \"review\": \"* A brief summary of the paper's contributions, in the context of prior work.\\n\\nThis paper extends the authors' previous work on making sure that the hidden units in a neural net have zero output and slope on average, by also using direct connections that model explicitly the linear dependencies. The extension introduces another transformation which changes the scale of the outputs of the hidden units: essentially, they try to normalize both the scale and the slope of the outputs to one. This is done (essentially) by introducing a regularization parameter that encourages the geometric mean of the scale and the slope to be one.\\n\\nThe paper's contributions are also to give a theoretical analysis of the effect of the proposed transformations. The already proposed tricks are shown to make the non-diagonal elements of the Fisher information matrix closer to zero. The new transformation makes the diagonal elements closer to each other in scale, which is interesting as it's similar to what natural gradient does.\\n\\nThe authors also provide an empirical analysis of how the proposed method is close to what a second-order method would do (albeit on a small neural net). The experiment with the angle between the gradient and the second-order update is quite nice (I think such an experiment should be part of any paper that proposes new optimization tricks for training neural nets).\\n\\n* An assessment of novelty and quality.\\n\\nGenerally, this is a well-written and clear paper that extends naturally the authors' previous work. I think that the analysis is interesting and quite readable. I don't think that these particular transformations have been considered before in the literature and I like that they are not simply fixed transformations of the data, but something which integrates naturally into the learning algorithm.\\n\\n* A list of pros and cons (reasons to accept/reject).\\n\\nThe proposed scaling transformation makes sense in theory, but I'm not sure I agree with the authors' statement (end of Section 5) that the method's complexity is 'minimal regularization' compared to dropouts (maybe in theory, but honestly implementing dropout in a neural net learning system is considerably easier). The paper also doesn't show significant improvements (beyond analytical ones) over the previous transformations; based on the empirical results only I wouldn't necessarily use the scaling transformation.\"}",
"{\"review\": \"First of all we would like to thank you for your informed, thorough and kind comments. We realize that there is major overlap with our previous paper [10]. We hope that these two papers could be combined in a journal paper later on. It was mentioned that we use some text verbatim from [10]. There is some basic methodology which is necessary to explain before going to deeper explanations and we felt that it is not a big violation to use our own text. However, we have now modified the sections in question with your comments and proposals in mind. If you feel that it is necessary to check every sentence for verbatim, please consider conditional acceptance with this condition.\\n\\nWe agree that the evidence supporting the use of the third transformation is rather weak. We have tried to report our findings as honestly as possible and also express our doubts in the paper (see, e.g., end of Section 4).\\n\\nTo reviewer 'Anonymous 1567':\\n\\nYou argue that Eqs. (12-17) are slightly accurate. However, we have computed the expectation over epsilon with pen and paper and epsilon does vanish from the Eqs. Thus, the Eqs. in question are exact. Would you think that we should still write down the gradients explicitly?\\n\\nWe had considered using the df/dx notation, but decided to use the f' notation, since the derivative is taken with respect to Bx and using the df/dx notation would require us to define a new variable u = Bx and denote df/du. We think, this would further clutter the equations. Would you think this is acceptable?\\n\\nWe tried to clarify the meaning of 'transforming the model instead of the gradient ...' in Discussion.\\n\\nTo reviewer 'Anonymous b670':\\n\\nWe have now explained the relationship to Schraudolph's method in more detail. We provide an example and refer to Discussion of [10].\\n\\nWhen writing about 'many competing paths' and 'linear dependencies', we have added explanations with equations in the updated version.\\n\\nThe question, whether the arguments in Section 3 can be applied to networks with more than one hidden layer: We have presented the theory with this simplified case in order to convey the understanding to the reader. We assume that the idea could be formulated in the general (deep) case, but writing it out would substantially complicate the equations. Our experimental results support this assumption.\\n\\nAbout the uncorrelatedness assumption, we have added the following explanation: 'Naturally, it is unrealistic to assume that inputs $x_t$, nonlinear activations $f(cdot)$, and their slopes $f^prime(cdot)$ are all uncorrelated, so the goodness of this approximation is empirically evaluated in the next section.'\\n\\nWe do realize that it is possible and more elegant to compute exact solution for the Hessian matrix. However, as being more error prone, it would require careful checking by, e.g., some approximative method. As the mere approximation suits our needs well, we refrained from doing the extra work for the exact solution. We have also acknowledged this in the paper.\\n\\nRegarding mu in Eq. 19: Thanks for this remark. We have reformulated the text surrounding Eq. 19. Could you kindly provide further suggestions and/or references if you still find it unsatisfactory.\", \"experiments_on_eigenvalue_distribution\": \"Fig. 
1(a) suggest that there is no clear difference between the eigenvalue distributions with our without gamma (the vertical position of the plot is irrelevant since it corresponds to choosing a different learning rate).\\n\\nWe show histogram of diagonal elements in order to distinguish between weights. For instance, the colors in Fig.2 could not have been used otherwise.\\n\\nFisher Information vs. Hessian matrix: This is a relevant point for the future work. The Hessian describes the curvature of the actual optimization problem. We chose Fisher information matrix in the theoretical part simply because it has more compact equations. As we note in the paper, 'the hessian matrix is closely related to the Fisher information matrix, but it does depend on the output data and contains more term'. We argue that the terms present in Fisher information matrix will make our point clear and adding the other terms included in the Hessian would just be additional clutter.\\n\\nTommi Vatanen and Tapani Raiko\"}"
]
} |
UUwuUaQ5qRyWn | When Does a Mixture of Products Contain a Product of Mixtures? | [
"Guido F. Montufar",
"Jason Morton"
] | We prove results on the relative representational power of mixtures of product distributions and restricted Boltzmann machines (products of mixtures of pairs of product distributions). Tools of independent interest are mode-based polyhedral approximations sensitive enough to compare full-dimensional models, and characterizations of possible modes and support sets of multivariate probability distributions that can be represented by both model classes. We find, in particular, that an exponentially larger mixture model, requiring an exponentially larger number of parameters, is required to represent probability distributions that can be represented by the restricted Boltzmann machines. The title question is intimately related to questions in coding theory and point configurations in hyperplane arrangements. | [
"mixtures",
"products",
"mixture",
"product",
"product distributions",
"restricted boltzmann machines",
"results",
"relative representational power",
"pairs",
"tools"
] | conferencePoster-iclr2013-workshop | https://openreview.net/pdf?id=UUwuUaQ5qRyWn | https://openreview.net/forum?id=UUwuUaQ5qRyWn | ICLR.cc/2013/conference | 2013 | {
"note_id": [
"boGLoNdiUmbgV",
"dPNqPnWus1JhM",
"vvzH6kFyntmsR",
"dYGvTnylo5TlF",
"FdwnFIZNOxF5S"
],
"note_type": [
"review",
"review",
"comment",
"review",
"review"
],
"note_created": [
1362582360000,
1362219240000,
1364258160000,
1361559180000,
1363384620000
],
"note_signatures": [
[
"anonymous reviewer 51ff"
],
[
"anonymous reviewer 6c04"
],
[
"anonymous reviewer 6c04"
],
[
"anonymous reviewer 91ea"
],
[
"Guido F. Montufar, Jason Morton"
]
],
"structured_content_str": [
"{\"title\": \"review of When Does a Mixture of Products Contain a Product of Mixtures?\", \"review\": \"This paper attempts at comparing mixture of factorial distributions (called product distributions) to RBMs. It does so by analyzing several theoretical properties, such as the smallest models which can represent any distribution with a given number of strong modes (or at least one of these distributions) or the smallest mixture which can represent all the distributions of a given RBM.\\n\\nThe relationship between RBMs and other models using hidden states is not fully understood and any clarification is welcome. Unfortunately, not only I am not sure the MoP is the most interesting class of models to analyze, but the theorems focus on extremely specific properties which severely limits their usefulness:\\n- the definition of strong modes makes the proofs easier but it is hard to understand how they relate to 'interesting' distributions. I understand this is a very vague notion but I would have appreciated hints about how the distributions we care about tend to have a high number of strong modes.\\n- the fact that there are exponentially many inference regions for an RBM whereas there are only a linear number of them for a MoP seems quite obvious, merely by counting the number of hidden states configurations. I understand this is far from a proof but this is to me more representative of the fact that one does not want to use the hidden states as a new representation for a MoP, which we already knew.\\n\\nAdditionnally, the paper is very heavy on definitions and gives very little intuition about the meaning of the results. Theorem 29 is a prime example as it takes a very long time to parse the result and I could really have used some intuition about the meaning of the result. This feeling is reinforced by the length of the paper (18 when the guidelines mentioned 9) and the inclusion of propositions which seem anecdotal (Prop.7, section 2.1, Corollary 18).\\n\\nIn conclusion, this paper tackles a problem which seems to be too contrived to be of general interest. Further, it is written in an unfriendly way which makes it more appropriate to a very technical crowd.\", \"minor_comments\": [\"Definition 2, you have that C is included in {0, 1}^n. That makes C a vector, not a set.\", \"Proposition 8: I think that G_3 should be G_4.\"]}",
"{\"title\": \"review of When Does a Mixture of Products Contain a Product of Mixtures?\", \"review\": \"This paper compares the representational power of Restricted Boltzmann Machines\\n(RBMs) with that of mixtures of product distributions. The main result is that\\nRBMs can be exponentially more efficient (in terms of the number of parameters\\nrequired) to represent some classes of probability distributions. This provides\\ntheoretical justifications to the intuition behind the motivation for\\ndistributed representations, i.e. that the combinations of an RBN's hidden\\nunits can give rise to highly varying distributions, with a number of modes\\nexponential in the model's size.\\n\\nThis paper is very dense, and unfortunately I had to fast-forward through it in\\norder to be able to submit my review in time. Although most of the derivations\\ndo not appear to be that complex, they build on existing results and concepts\\nthat the typical machine learning crowd is typically unfamiliar with. As a\\nresult, one can be quickly overwhelmed by the amount of new material to digest,\\nand going through all steps of all proofs can take a long time.\\n\\nI believe the results are interesting since they provide a theoretical\\nfoundation to ideas that have been motivating the use of distributed\\nrepresentations. As a result, I think they are quite relevant to current\\nresearch on learning representations, even if the practical insights seem\\nlimited.\\n\\nThe maths appear to be solid, although I definitely did not check them in\\ndepth. I appreciate the many references to previous work.\\n\\nOverall, I think this paper deserves being published, although I wish it was\\nmade more accessible to the general machine learning audience, since in its\\ncurrent state it takes a lot of motivation to go through it. Providing\\nadditional discussion throughout the whole paper on the motivations and\\ninsights behind these many theoretical results, instead of mostly limiting them\\nto the introduction and discussion, would help the understanding and make the\\npaper more enjoyable to read.\", \"pros\": \"relevant theoretical results, (apparently) solid maths building on previous work\", \"cons\": \"requires significant effort to read in depth, little practical use\", \"things_i_did_not_understand\": [\"Fig. 1 (as a whole)\", \"Last paragraph of 1.1: why is this interesting?\", \"Fig. 5 (not clear why it is in some kind of pseudo-3D and what is the meaning\", \"of all these lines -- also some explanations come after it is referenced, which\", \"does not help)\", \"'(...) and therefore it contains distributions with (...)': I may be missing\", \"something obvious, but I did not follow the logical link ('therefore')\", \"I am unable to parse Remark 22, not sure if there is a typo (double 'iff') or\", \"I am just not getting it.\"], \"typos_or_minor_points\": [\"It seems like Fig. 3 and 4 should be swapped to match the order in which they\", \"appear in the text\", \"'Figure 3 shows an example of the partitions (...) defined by the models\", \"M_2,4 and RBM_2,3' -> mention also 'for some specific parameter values' to be\", \"really clear\", \"Deptartment (x2)\", \"Lebsegue\", \"I believe the notation H_n is not explicitly defined (although it can be\", \"inferred from the definition of G_n)\", \"There is a missing reference with a '?' on p. 
9 after 'm <= 9'\", \"It seems to me that section 6 is also related to the title of section 5.\", \"Should it be a subsection?\", \"'The product of mixtures represented by RBMs are (...)': products\", \"'Mixture model (...) generate': models\"]}",
"{\"reply\": \"Thanks for the updated version, I've re-read it quickly and it's indeed a bit clearer!\"}",
"{\"title\": \"review of When Does a Mixture of Products Contain a Product of Mixtures?\", \"review\": \"The paper analyses the representational capacity of RBM's, contrasting it with other simple models.\\n\\nI think the results are new but I'm definitely not an expert on this field. They are likely to be interesting for people working on RBM's, and thus to people at ICLR.\"}",
"{\"review\": \"We thank all three reviewers for the helpful comments, which enabled us to improve the paper. We have uploaded a revision to the arxiv taking into account the comments, and respond to some specific concerns below.\\n\\nWe were unsure as to whether we should make the paper longer by providing more in-line intuition around the steps of the proof of our main results. This would address the concerns of Reviewers 6c04 and 51ff, who thought some additional intuition throughout would be helpful, while Reviewer 51ff felt that the paper was perhaps too long as it was. We elected to balance these concerns by making significant changes to improve clarity without greatly expanding the exposition, making a net addition of about a page of text. However, by moving some material to the appendix, the main portion of the paper has been reduced in length to 14 pages.\", \"responding_to_specific_comments\": \"\", \"reviewer_6c04\": \">> Things I did not understand: \\n>>- Fig. 1 (as a whole) \\n\\nWe have reworked this figure and improved the explanation in the caption; the intensity of the shading represents the value of log(k), that is the function $f(m,n) = min { log(k): mathcal{M}_{n,k}$ contains RBM_{n,m} }$. \\n\\n>>- Last paragraph of 1.1: why is this interesting? \\n\\nSince we are arguing that the sets of probability distributions representable by RBMs and MoPs are quite different, we thought it would be interesting to mention what is known about when these two sets do intersect. We have added a comment about this.\\n\\n>>- Fig. 5 (not clear why it is in some kind of pseudo-3D and what is the meaning of all these lines -- also some explanations come after it is referenced, which does not help) \\n\\nWe have reworked the figure and added additional explanation in the text where the figure is referenced. This is a picture of the interior of a 3-dimensional simplex (a tetrahedron with vertices corresponding to the outcomes (0,0), (0,1), (1,0), (1,1)), with three sets of probability distributions depicted. The curved set is a 2-dimensional surface. The regions at the top and bottom are polyhedra, and the lines in the original figure were the edges of these polyhedra (the edges in back have now been removed to make the rendering clearer). Additionally, we linked to an interactive 3-d graphic object of Fig. 5. Using Adobe Acrobat Reader 7 (or higher) the reader can rotate and slice this object in 3-d. \\n\\n>>- '(...) and therefore it contains distributions with (...)': I may be missing something obvious, but I did not follow the logical link ('therefore') \\n\\nWe expanded and rephrased this to hopefully be more clear.\\n\\n>>- I am unable to parse Remark 22, not sure if there is a typo (double 'iff') or I am just not getting it. \\n\\nWe rewrote this remark, sorry for the confusion. The meaning was that the three statements (X iff Y iff Z) are equivalent.\\n\\n>>Typos or minor points: \\n>> - It seems like Fig. 3 and 4 should be swapped to match the order in which they appear in the text \\n>>- 'Figure 3 shows an example of the partitions (...) defined by the models M_2,4 and RBM_2,3' -> mention also 'for some specific parameter values' to be really clear \\n>>- Deptartment (x2) \\n>>- Lebsegue \\n>>- I believe the notation H_n is not explicitly defined (although it can be inferred from the definition of G_n) \\n>>- There is a missing reference with a '?' on p. 9 after 'm <= 9' \\n>>- It seems to me that section 6 is also related to the title of section 5. Should it be a subsection? 
\\n>>- 'The product of mixtures represented by RBMs are (...)': products \\n>>- 'Mixture model (...) generate': models\\n\\nThank you, we fixed these.\", \"reviewer_51ff\": \">>In conclusion, this paper tackles a problem which seems to be too contrived to be of general interest. Further, it is written in an unfriendly way which makes it more appropriate to a very technical crowd. \\n>>- the fact that there are exponentially many inference regions for an RBM whereas there are only a linear number of them for a MoP seems quite obvious, merely by counting the number of hidden states configurations. I understand this is far from a proof but this is to me more representative of the fact that one does not want to use the hidden states as a new representation for a MoP, which we already knew.\\n\\nIn part this is simply a difference of philosophy. Some place greater emphasis on an intuition or demonstration on a dataset, while others prefer to see a proof. We recognize we may not have a lot to offer those comfortable relying upon their intuitive or empirical grasp of the situation, and instead aim to provide some mathematical proof to back up that intuition and satisfy the second group.\\n\\nIn trying to show that one class of models (RBMs or distributed representations) is better than another (here, non-distributed representations or naive Bayes models) at representing complex distributions, one must make a choice of criteria for comparison. One can pick, inevitably arbitrarily, a dataset for comparison and produce an empirical comparison. To provide a proof or theoretical comparison, one must choose a metric of complexity. Of course, we always want larger and more natural datasets and broader metrics, but one must start somewhere. We felt that in measuring the complexity of a distribution, the bumpiness of a probability distribution, or number of local maxima, modes, or strong modes in the Hamming topology was a reasonable place to start. While we examined other metrics of distribution complexity, this was one that provided enough leverage to distinguish the models. In the Discussion section, we talk about why multi-information, for example, is not suitable for making this distinction. Making such a choice of metric is the unfortunate price of theoretical justifications.\\n\\nAdditionally, the number of inference regions was not claimed to be new, but part of the exposition about the widespread intuition regarding distributed representations. We have added some exposition to clarify this.\", \"why_we_chose_mop\": \"we wanted to compare distributed representations with non-distributed representations. Since we are interested in learning representations, these should be two models with hidden variables that hold the representation. For a non-distributed model with hidden variables and the same observables as an RBM, the na'ive Bayes or MoP model is canonical. For example, a k-way interaction model might also be a good comparison, but it lacks hidden nodes.\\n\\n\\n\\n>>Additionnally, the paper is very heavy on definitions and gives very little intuition about the meaning of the results. Theorem 29 is a prime example as it takes a very long time to parse the result and I could really have used some intuition about the meaning of the result. This feeling is reinforced by the length of the paper (18 when the guidelines mentioned 9) and the inclusion of propositions which seem anecdotal (Prop.7, section 2.1, Corollary 18).\\n\\nSorry for the confusion. 
The introduction, as well as Figure 1 is devoted to explaining and interpreting Theorem 29. The statements therein such as 'We find that the number of parameters of the smallest MoP model containing an RBM model grows exponentially in the number of parameters of the RBM for any fixed ratio $0!<!m/n!<!infty$, see Figure 1' are hopefully more-intuitive corollaries of Theorem 29. The structure of the paper is to try to put the intuitive explanation of the results first, then give the (necessarily technical) proof showing how the results were obtained. We have added a pointer before Theorem 29 to indicate this.\\n\\nIn the revision we added explanations providing additional intuition as to why we are making certain definitions, and a road map of how the main results are proved.\\n\\n>>Minor comments: \\n>> - Definition 2, you have that C is included in {0, 1}^n. That makes C a vector, not a set. \\n\\nNo, a subset as we write $mathcal{C} subset mathcal{X}$ of the set of (binary) strings $mathcal{X}$ of length n is again a set of (binary) strings. One could of course interpret it in terms of a vector of indicator functions, but this is not the approach needed here.\\n\\n>> - Proposition 8: I think that G_3 should be G_4.\\n\\nSorry for the confusion. Again this is correct as is; G_4 would refer to binary strings of length 4, while the Proposition concerns strings of length 3.\"}"
]
} |
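The rows above show the schema this preview renders: flat forum metadata columns (`forum_id`, `forum_title`, `forum_abstract`, `forum_decision`, `venue`, `year`, …) plus a nested `reviews` struct of parallel lists, where each `structured_content_str` entry is a JSON-encoded note body and `note_created` values appear to be Unix timestamps in milliseconds. Below is a minimal, hedged sketch of how one might load and unpack these rows with the Hugging Face `datasets` library. The repository ID, the split name, and the exact column nesting are assumptions for illustration only, not taken from the preview.

```python
# Sketch only: hypothetical repo ID and assumed nesting; adjust to the real dataset.
import json
from collections import Counter
from datetime import datetime, timezone

from datasets import load_dataset

REPO_ID = "user/iclr2013-openreview-forums"  # hypothetical placeholder

ds = load_dataset(REPO_ID, split="train")

# Inspect one forum row: flat metadata plus a nested dict of parallel lists.
row = ds[0]
print(row["forum_title"], "->", row["forum_decision"])

reviews = row["reviews"]
for note_id, note_type, created_ms, signatures, content_str in zip(
    reviews["note_id"],
    reviews["note_type"],
    reviews["note_created"],
    reviews["note_signatures"],
    reviews["structured_content_str"],
):
    content = json.loads(content_str)  # each entry is a JSON-encoded note body
    created = datetime.fromtimestamp(created_ms / 1000, tz=timezone.utc)
    print(note_id, note_type, created.date(), signatures, sorted(content.keys()))

# Simple corpus-level statistic: how many notes of each type appear overall.
type_counts = Counter(t for r in ds for t in r["reviews"]["note_type"])
print(type_counts)
```

If the export stores `reviews` differently (for example as a single JSON string rather than a struct of lists), the same idea applies after an extra `json.loads` on the column value.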