Dataset columns:
forum_id: stringlengths 9 to 20
forum_title: stringlengths 3 to 179
forum_authors: sequencelengths 0 to 82
forum_abstract: stringlengths 1 to 3.52k
forum_keywords: sequencelengths 1 to 29
forum_decision: stringclasses, 22 values
forum_pdf_url: stringlengths 39 to 50
forum_url: stringlengths 41 to 52
venue: stringclasses, 46 values
year: stringdate, 2013-01-01 00:00:00 to 2025-01-01 00:00:00
reviews: sequence
33X9fd2-9FyZd
Auto-Encoding Variational Bayes
[ "Diederik P. Kingma", "Max Welling" ]
Can we efficiently learn the parameters of directed probabilistic models, in the presence of continuous latent variables with intractable posterior distributions? We introduce an unsupervised on-line learning method that efficiently optimizes the variational lower bound on the marginal likelihood and that, under some mild conditions, even works in the intractable case. The method optimizes a probabilistic encoder (also called a recognition network) to approximate the intractable posterior distribution of the latent variables. The crucial element is a reparameterization of the variational bound with an independent noise variable, yielding a stochastic objective function which can be jointly optimized w.r.t. variational and generative parameters using standard gradient-based stochastic optimization methods. Theoretical advantages are reflected in experimental results.
[ "variational bayes", "parameters", "directed probabilistic models", "presence", "continuous latent variables", "intractable posterior distributions", "unsupervised", "learning", "variational lower bound", "marginal likelihood" ]
submitted, no decision
https://openreview.net/pdf?id=33X9fd2-9FyZd
https://openreview.net/forum?id=33X9fd2-9FyZd
ICLR.cc/2014/conference
2014
{ "note_id": [ "vn-PvD2UEmnyw", "NAoK5VPWK8Nm3", "8jCZt9B-uYtDE", "h_5g8aM13EhoG", "iiWhiIg_AYM1I" ], "note_type": [ "review", "review", "review", "review", "review" ], "note_created": [ 1392691860000, 1390782300000, 1391850300000, 1389363660000, 1391491680000 ], "note_signatures": [ [ "Durk Kingma" ], [ "anonymous reviewer bd7f" ], [ "anonymous reviewer 79a6" ], [ "Durk Kingma" ], [ "anonymous reviewer 62c4" ] ], "structured_content_str": [ "{\"review\": \"Reviewers, thank you for your overall very positive and constructive reviews. Your points of critique are highly appreciated and will be incorporated into a revised version.\\n\\nBelow we listed your key points, followed by our responses.\\n\\n==> Reviewer bd7f: \\u201cCouldn\\u2019t an online Monte Carlo EM approach be followed?\\u201d\\n\\nWe tried an online version of Monte Carlo EM algorithm, but found that the online version didn\\u2019t compare favourably to the batch Monte Carlo EM algorithm used in experiments. The main reason is that the computational cost per E-step grows with the size of the dataset. With larger datasets, each minibatch gets processed more rarely, and the posterior distribution changes to a higher degree between epochs, such that more MCMC samples are required to get an approximately unbiased sample from the new posterior given a sample from the old posterior computed at the previous epoch.\\n\\n==> Reviewer 79a6: \\u201cThe experiments compare favourably [\\u2026] but don\\u2019t give the reader a good sense of the trade-offs. A simple way to rectify this would be a discussion of the training: how sensitive the model is to various hyperparameter choices, how robust the model is to overfitting, etc.\\u201d\\n\\nWe\\u2019ll include a discussion of the trade-offs. Some of our responses below also relevant to this point.\\n\\n==> Reviewer 79a6: \\u201cDifferent noise distributions are discussed, but they are not compared. What are the implications of choosing different classes of perturbation?\\u201d\\n\\nInteresting question. One important aim is to chose an approximate posterior such that KL(q||p), the divergence of the approximate posterior q(z|x) from the true posterior p(z|x), can be minimized well. We chose a Gaussian since modes of posteriors are often locally approximately quadratic, in which case a Gaussian q(z|x) and leads to a low KL(q||p). We did perform some experiments with student-T (heavy-tailed) noise distributions, which did not improve the reported performance. Performance of this experiment were not included, since the main aim of the experiments is to compare to other approaches, rather than finding the optimal hyper-parameter setting.\\n\\n==> Reviewer 79a6: \\u201cHow powerful is the regularization effect of the variational bound? At what point does overfitting occur, or do things simply plateau when more latent variables are added?\\u201d\\n\\nTrain and test performances indeed plateau when increasing the dimensionality of latent space beyond necessity. For the models of MNIST, only about ~20 dimensions were used by the model; the others were 'switched off' automatically: after training their incoming and outgoing weights were approximately zero, and their posterior distributions were always (practically) equal to their prior distributions. 
Everything else being equal, a model with a latent dimensionality of 200 performed almost identical to a model with latent dimensionality of 20 (see figure 2).\\n\\n==> Reviewer 79a6: \\u201cAn important baseline would be to measure the effectiveness of this approach on held-out data. For example, rather than training this variational bound, one could perhaps use a regular auto-encoder with a similar architecture and train for reconstruction error. How would the authors expect the test-set reconstruction error of their approach to compare to this baseline? Is the only difference between the proposed approach and a standard auto-encoder the variational regularizer (first term of equation 10)?\\u201d\\n\\nBesides the regularizer, an important distinctive feature of the variational auto-encoder is the noisy activation of the central hidden layer, distributed as q(z|x). The noise does not contributing to the reconstruction error, and can be tuned down by changing the encoder weights. The effect of removing the regularization term would be that the best solution is to ignore the noise (i.e. the variance of q is set to approximately zero). This would lead to a high KL divergence between the true and approximate posteriors, consequently a low marginal likelihood of the model, and most probably a high reconstruction error on held-out data.\\n\\n==> Reviewer 79a6: \\u201cIntuitively, what is the regularizer provided by the variational bound doing to the encoding parameters? What would the regularization provided by the full variational approach do to the decoding parameters?\\u201d\\n\\nThe regularizer makes sure that q(z|x) does not diverge too much from the prior p(z). In practice this means that excess latent dimensions are \\u201cswitched off\\u201d; for the fully connected models in the paper this means that all the incoming and outgoing connection weights corresponding to excess dimensions are set to zero. Therefore, the dimensionality of latent space does not require much tuning, and should simply be set to a high enough number.\\n\\nImportant to note is that overfitting can still occur simply by choosing too many units for the hidden layers of the encoder and/or decoder (i.e. the layers sandwiched between \\u2018x\\u2019 and \\u2018z\\u2019). In our comparative experiments we did not tune the amount of hidden units, and simply chose a regime educated by settings that work well for other auto-encoder architectures; we did not encounter overfitting problems in these experiments.\\n\\n==> Reviewer 79a6: \\u201cIn equation (6), should z^{(l)} be z^{(i,l)}?\\u201d\\n\\nIndeed.\"}", "{\"title\": \"review of Auto-Encoding Variational Bayes\", \"review\": \"This paper presents a variational learning algorithm for probabilistic graphical model embedded with a latent representation that is continuous-valued. The proposed algorithm is stochastic in nature. It is able to scale to large datasets and deal with models for which tractable EM is not an option. It is based on gradient descent on a Monte Carlo approximation of the standard variational lower-bound on the marginal likelihood. Crucially, in order to obtain an unbiased gradient estimate, the Monte Carlo approximation is taken such that the sampling of the MC points is independent of the parameters of the model, thanks to an 'alternative parametrization trick' of the variational posterior. An interesting connection is also made with autoencoders. 
Experiments show that the algorithm can improve over the wake-sleep algorithm.\\n\\nI really like this paper. The idea is simple and non-trivial. The alternative parametrization trick is a nice tool to have in one's toolbox. The connection with autoencoders is also really nice.\\n\\nThe only 'cons' I see is that the experiments are somewhat limited, in the sense that they are performed only on MNIST and Frey Faces, and only log-likelihood or visualization results are reported (as opposed to experiments in the context of a real application). But I still think this is good first step in a potentially promising direction for training complex non-linear continuous representation models. So I'd argue that this work deserves to get in.\", \"minor_comments\": [\"Couldn't an online Monte Carlo EM approach be followed, where each update of the M step would be performed on a minibatch?\", \"This is super minor, but NADE isn't just applicable to binary observations and actually has a real-valued extension (RNADE: The real-valued neural autoregressive density-estimator, by Uria, Murray and Larochelle, NIPS 2013). Otherwise however, as the authors mention, it's not directly related to the approach in this paper.\"]}", "{\"title\": \"review of Auto-Encoding Variational Bayes\", \"review\": \"This paper proposes a variational approach for training directed graphical models with continuous latent variables and intractable posteriors/marginals. The idea is to reparameterize the latent variables so that they can be written as a deterministic mapping followed by a stochastic perturbation. This allows Monte Carlo estimators of the variational lower bound to be differentiated with respect to the variational parameters.\\n\\nThe proposed method is novel and interesting, and seemingly more practical than previous approaches. The experiments favour comparably and show that this method is quite promising, however they still feel preliminary and don\\u2019t give the reader a good sense of the trade-offs and difficulties one might encounter using this approach. A simple way to rectify this would be a discussion of the training: how sensitive the model is to various hyperparameter choices, how robust the model is to overfitting, etc. Another way would be to compare against a tractable model such as PPCA, and show how the approach compares to EM and exact inference. The results here would give a sense of how the method might perform on intractable cases.\\n\\nOn a high level, the auto-encoder presented here reminds me of the back-constrained GP-LVM [1]. The goals are different, but the models seem similar.\\n\\n[1] Local Distance Preservation in the GP-LVM through Back Constraints, Neil D. Lawrence and Joaquin Quinonero-Candela, ICML 2006\", \"questions\": \"-Different noise distributions are discussed, but they are not compared. What are the implications of choosing different classes of perturbation?\\n\\n-How powerful is the regularization effect of the variational bound? At what point does overfitting occur, or do things simply plateau when more latent variables are added?\\n\\n-An important baseline would be to measure the effectiveness of this approach on held-out data. For example, rather than training this variational bound, one could perhaps use a regular auto-encoder with a similar architecture and train for reconstruction error. How would the authors expect the test-set reconstruction error of their approach to compare to this baseline? 
Is the only difference between the proposed approach and a standard auto-encoder the variational regularizer (first term of equation 10)?\\n\\n-Intuitively, what is the regularizer provided by the variational bound doing to the encoding parameters? What would the regularization provided by the full variational approach do to the decoding parameters?\", \"typos_and_grammar\": \"-The first sentence could perhaps be reworded slightly.\\n\\n-Please remove \\u201cis\\u201d from \\u201cis even works\\u201d in the sentence \\u201dConversely, we are here interested in a general algorithm that is even works efficiently in the case of\\u2026\\u201d\\n\\n-In equation (6), should z^{(l)} be z^{(i,l)}?\"}", "{\"review\": \"We have just submitted a new version to arXiv.\\n\\nIn previous versions, we treated both the MNIST and Frey Face datasets as binary, where the pixel values (normalized to the interval (0,1)) were treated as probabilities, using the negative binary cross-entropy function in place of log p(x|z). This use of the binary cross-entropy function is common practice, and is equivalent (in expectation of the objective and gradients) to log p(x|z) with binary samples drawn from the data according to the probabilities indicated by the pixel values. While this treatment of pixel values makes sense for MNIST (the dataset is almost binary anyway), it makes less sense for Frey Face.\\nWe therefore ran new experiments where the Frey Face data was treated appropriately as continuous data. We chose log p(x|z) to be an isotropic Gaussian, with a conditional distribution identical to log q(z|x), with a minor difference that the predicted means were constrained to the interval (0,1) by appending a sigmoid activation function to the MLP output units that correspond to the predicted mean of log p(x|z). \\n\\nAs shown in the new version of the paper, the wake-sleep algorithm had much more difficulty with learning this (more appropriate) model of the Frey Face data.\\n\\nAnother difference is that models were allowed to train for a longer period of time, and results for higher dimensional latent space were included. Most interestingly, superfluous dimensionality of latent space did not result in increased overfitting, which is explained by the regularizing effect of the prior and entropy terms of the objective function.\"}", "{\"title\": \"review of Auto-Encoding Variational Bayes\", \"review\": \"This paper adresses the problem of learning the parameters of directed probabilistic models with continuous latent variables when the latent posterior is intractable.\\n\\nThe proposed method (Auto-Encoding Variational Bayes or AEVB for short) can be summarized as:\\n 1 - Maximization of the variational lower bound as a proxy for the marginal log-likelihood.\\n 2 - Estimation of this lower-bound using a Monte-Carlo estimation.\\n 3 - Using a re-parametrization trick for the parametric variational inference approximation q_{\\theta}(z|x), i.e. expressing it as a deterministic function f(x,e) of x and e, where e is a random variable.\\n\\nThe AEVB method is in theory applicable to a wide range of choices for p(x|z), p(z) and q(z|x). In their study, the authors propose a specific application where the prior p(z) is a simple gaussian centered on 0 and where p(x|z) and q(z|x) are both Gaussian distributions with their means and covariance matrices being determined by the output of a neural network. 
For instance q(z|x) is defined as\\n\\tq(z|x) = N(z;mu,sigma^2*I) with mu and sigma being determined from x using a neural network, i.e. mu = f_1(x) and sigma = f_2(x), with f_1 and f_2 deterministic.\\n\\nThis work is closely related to several recent contributions (Generative Stochastic Networks,Denoising Auto-Encoder theory) which are properly discussed.\", \"several_experiments_are_run_on_both_the_frey_faces_dataset_and_the_mnist_dataset\": [\"quantitatively comparing the lower bound with AEVB vs with the wake sleep algorithm.\", \"quantitatively comparing the marginal likelihood (with small latent space as is to be expected) with AEVB vs with the wake sleep algorithm.\", \"showing visualizations of 2D manifolds learned with AEVB\", \"showing random samples obtained by sampling from the learned generative model\", \"The proposed approach (AEVB) offers a new and exciting solution to the well known problem of estimating the parameters of a graphical model. Both the variational inference and generative distribution can be quite complex as they are both represented using neural networks with hidden layers, however the training criterion is tractable and includes hyper-parameter-free regularization terms which can prevent overfitting, as supported by the experiments.\"], \"pros\": [\"Very good summary of previous work / very good discussion of related work.\", \"The approach is tractable even with intractable latent posteriors.\", \"The training criterion contains hyper-parameter free regularization.\", \"The experiments fully support the practical applicability of the approach.\"], \"cons\": [\"Experiments on more complex datasets would have made for an even more convincing argument.\"]}" ] }
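A minimal sketch of the reparameterization described in this record may help make it concrete: z is written as mu + sigma * eps with eps an independent standard normal noise variable, so a single Monte Carlo sample gives a differentiable estimate of the variational bound. The encoder and decoder below are linear placeholders with toy shapes, not the authors' MLPs; only the structure of the objective follows the paper (a Bernoulli reconstruction term, matching the binary treatment of MNIST discussed above, plus an analytic Gaussian KL term).

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, W_mu, W_logvar):
    # Placeholder recognition network: linear maps to the parameters of a
    # diagonal-Gaussian q(z|x); the paper uses an MLP instead.
    return x @ W_mu, x @ W_logvar

def decoder(z, W_dec):
    # Placeholder generative network: Bernoulli means for p(x|z).
    return 1.0 / (1.0 + np.exp(-(z @ W_dec)))

def elbo_estimate(x, W_mu, W_logvar, W_dec):
    mu, logvar = encoder(x, W_mu, W_logvar)
    eps = rng.standard_normal(mu.shape)        # independent noise variable
    z = mu + np.exp(0.5 * logvar) * eps        # reparameterized sample from q(z|x)
    x_hat = decoder(z, W_dec)
    # One-sample Monte Carlo estimate of E_q[log p(x|z)].
    rec = np.sum(x * np.log(x_hat + 1e-9) + (1 - x) * np.log(1 - x_hat + 1e-9))
    # Analytic KL(q(z|x) || N(0, I)) for a diagonal Gaussian q.
    kl = -0.5 * np.sum(1 + logvar - mu**2 - np.exp(logvar))
    return rec - kl

# Toy shapes only (not the paper's): 784-dim binary input, 20-dim latent space.
x = rng.integers(0, 2, size=(1, 784)).astype(float)
W_mu = rng.normal(0, 0.01, (784, 20))
W_logvar = rng.normal(0, 0.01, (784, 20))
W_dec = rng.normal(0, 0.01, (20, 784))
print(elbo_estimate(x, W_mu, W_logvar, W_dec))
```

Because the sample z is a deterministic function of (mu, logvar, eps), gradients of this estimate flow to both the variational and the generative parameters, which is what allows joint optimization with standard gradient-based methods.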
HH-uZ8U2O1aWf
Deep and Wide Multiscale Recursive Networks for Robust Image Labeling
[ "Gary B. Huang", "Viren Jain" ]
Feedforward multilayer networks trained by supervised learning have recently demonstrated state of the art performance on image labeling problems such as boundary prediction and scene parsing. As even very low error rates can limit practical usage of such systems, methods that perform closer to human accuracy remain desirable. In this work, we propose a new type of network with the following properties that address what we hypothesize to be limiting aspects of existing methods: (1) a `wide' structure with thousands of features, (2) a large field of view, (3) recursive iterations that exploit statistical dependencies in label space, and (4) a parallelizable architecture that can be trained in a fraction of the time compared to benchmark multilayer convolutional networks. For the specific image labeling problem of boundary prediction, we also introduce a novel example weighting algorithm that improves segmentation accuracy. Experiments in the challenging domain of connectomic reconstruction of neural circuitry from 3d electron microscopy data show that these 'Deep And Wide Multiscale Recursive' (DAWMR) networks lead to new levels of image labeling performance. The highest performing architecture has twelve layers, interwoven supervised and unsupervised stages, and uses an input field of view of 157,464 voxels ($54^3$) to make a prediction at each image location. We present an associated open source software package that enables the simple and flexible creation of DAWMR networks.
[ "deep", "robust image", "image", "boundary prediction", "methods", "view", "feedforward multilayer networks", "supervised learning", "state" ]
submitted, no decision
https://openreview.net/pdf?id=HH-uZ8U2O1aWf
https://openreview.net/forum?id=HH-uZ8U2O1aWf
ICLR.cc/2014/conference
2014
{ "note_id": [ "ogt7uh3Y7suLd", "9zq_LwBP3U9bA", "WWsaWx95KukwB", "1cSPcFTqMt9Lf", "sQS9F67Of2QaU", "KEpWdQPiWrEqF" ], "note_type": [ "comment", "review", "comment", "comment", "review", "review" ], "note_created": [ 1392750540000, 1391830680000, 1392752400000, 1392750060000, 1391841720000, 1391828100000 ], "note_signatures": [ [ "Gary Huang" ], [ "anonymous reviewer cf06" ], [ "Gary Huang" ], [ "Gary Huang" ], [ "anonymous reviewer d8ff" ], [ "anonymous reviewer 395a" ] ], "structured_content_str": [ "{\"reply\": \"Thank you for the comments.\\n\\nWe have experimented with single network classifiers that have increased field of view due to additional downsampling layers. These additional (3x, 4x) downsampling layers yield comparable performance to the multiscale (1x and 2x downsampling) architecture presented in the main paper. This suggests that the recursively stacked architectures are giving increased performance due to the increased depth rather than field of view alone.\\n\\nRelated to this, in A.2 of the Supplementary section, we present results using a deeper unsupervised architecture. This has slightly larger field of view than the stacked classifiers, but somewhat worse performance, again suggesting the benefit of the recursive stacking.\\n\\nWe experimented with deeper MLP classifiers, and report results in the supplementary section of the paper (Table 11). Going to 2 to 3 hidden layers gives some amount of improvement.\\n\\nWe report number of parameters in the models used in the paper in Table 5.\\n\\nWe will also be releasing the data in addition to the code. One of the distinguishing features of our problem domain is the amount of data, where future data sets will require processing of data sets orders of magnitude larger than the 620^3 volumes used in the paper. Our methods were designed with scalability and fast training and inference in mind, and therefore we believe our results and paper should be of interest to other researchers working on problems similarly involving large data sets.\"}", "{\"title\": \"review of Deep and Wide Multiscale Recursive Networks for Robust Image Labeling\", \"review\": \"This paper presents the application of a composite classification system to the classification of 3D microscopy scans.\\nThe problem is formulated as predicting an affinity graph (binary classification 'pixels belong to same foreground object' vs other cases).\", \"the_classifier_is_a_composite_construction_from_supervised_and_unsupervised_methods\": \"vector quantization, pooling, subsampling, whitening and a 1-hidden-layer neural network.\\n\\nAlternative classifier architectures are explored (using multiscale or single scale VQ, \\n\\nThese classifiers are stacked ('recursively') with the resulting classifier performing better, though it is not clear where the improvement comes: increased depth or increased field of view.\\n\\nHow does this work if you make the classification MLP deeper? \\n\\nI would like to see comparisons of the different techniques in terms of number of parameters. \\n\\nDescribes an interesting composite architecture on a specialist problem, with 3D data. 
It is interesting to see the comparison of this composite classifier approach and the effectiveness of stacking.\", \"pros\": \"Interesting comparison of a more hand-built approach to a CNN.\\nInteresting to see application to 3D data\\nInteresting to see the effectiveness of stacking.\", \"cons\": \"A somewhat specialist problem (and proprietary data) limit the audience and\\napplicability of the results.\"}", "{\"reply\": \"Thank you for the comments.\\n\\nIt seems the main concern is that the model presented in the paper is domain specific, with limited conclusions or take-aways for researchers in other areas. We believe that our paper outlines a general strategy for researchers working on problems with large amounts of data or who are otherwise concerned with training and inference speed.\\n\\nThis strategy essentially is one of replacing a complex, multi-layered supervised algorithm with a simple unsupervised feature learning algorithm followed by a simpler/shallower supervised algorithm. Moreover, the unsupervised and supervised steps can be interleaved in a recursive manner to create a deeper architecture. This general strategy is more amenable to parallelization, giving both better accuracy and greatly reducing training time.\\n\\nWhile components of the DAWMR networks such as multi-scale and drop-out could be added to the CNN baseline or other orthodox architectures, there would still exist the difference in training time. This is especially significant because shorter training times mean that these extensions and variations can be tested more quickly, and also as training and test set sizes will only increase in the future, particularly for the neural reconstruction problem presented in the paper.\\n\\nLastly, although our final architecture is made of many different components, through extensive experimentation, such as those detailed in the appendix, we have tried to separate out and evaluate the contribution of each component, at least within the context of our overall framework. For instance, we can see the relative improvement gained by incorporating multi-scale and drop-out, or as we mention in the response to the review below (cf06), we can get a sense of whether deeper processing layers or increased field of view is driving the increased performance of the recursive classifiers. We hope that these results will be helpful to other researchers in determining which aspects of our paper may be relevant and worth pursuing in their own work.\"}", "{\"reply\": \"Thank you for the comments.\\n\\nOur DAWMR networks can also be applied to 2d images. However, one of the main focuses of our work was to develop a method that is scalable to very large data sets. For instance, our training and test volumes were 620^3 voxels, and we intend to scale to problems that are more than 10 times this size in each dimension. So this can preclude more sophisticated learning algorithms that are computationally tractable for smaller 2d data sets. Therefore a strict comparison against state-of-the-art 2d segmentation methods may be of limited use.\\n\\nOur use of multiple metrics for evaluation is motivated by the fact that our networks are often used as part of a larger pipeline, where the network output is thresholded and used to produce supervoxels, which are input to the next step of the pipeline. The ideal metric would therefore be based on the final output produced by the entire pipeline. 
However, we present both local edge prediction metrics as well as more global segmentation metrics to give a more complete sense of the quality of the network output and how it would likely influence the final pipeline output accuracy. We note though that the differences between competing architectures (e.g., Table 6 in the paper) are generally both large and consistent across metrics.\"}", "{\"title\": \"review of Deep and Wide Multiscale Recursive Networks for Robust Image Labeling\", \"review\": \"This work presents an approach for 3d imaging analysis that tries to improve on previous work by addressing the limiting aspects of the existing methods. The paper presents 3 main contributions:\\na fast-training wide network, a recursive approach, and a weighting scheme to pull the focus of the training towards harder cases.\\n\\nThis paper is well written, well structured and flows well. The results show that the methods proposed \\nimprove performance on the 3d image annotation task. The 3 contributions are tested and stacked one-by-one and each is shown to improve performance.\", \"genereal_comment\": \"Can the improvements proposed in this paper be applied to 2d images as well? It would have been interesting to see an application to a benchmark dataset to have a comparison with current state-of-the-art methods.\", \"pros\": [\"Well justified improvements that show clear improvements.\", \"Open source software is presented.\"], \"cons\": [\"Many metrics are presented, though, as a non-expert, it is hard to judge which metrics are more relevant.\"]}", "{\"title\": \"review of Deep and Wide Multiscale Recursive Networks for Robust Image Labeling\", \"review\": \"An architecture for labeling neural components in electron microscopy images is presented based on a combination of unsupervised feature learning and supervised classification. The proposed system learns a basic feature representation using vector quantization applied to small patches. Several strategies for combining the extracted features into a hidden representation are proposed, and these features are passed to a supervised learning stage to predict an \\u201caffinity graph\\u201d which encodes the segmentation of image components. Subsequent stages of features are generated by computing additional features from the predicted affinity graph and the original input image in the hope of learning features that better represent the output domain. Several modifications are proposed to improve performance, including a \\u201chard example mining\\u201d strategy to up-weight regions of input images that are difficult to segment properly. Results are presented on a test set, with validation numbers provided for numerous variations of the proposed architecture. It is shown that the best system widely outperforms a baseline conv-net on many quality metrics.\\n \\nThis paper covers an interesting application domain that requires a reasonably scalable learning approach as well as some novel components to properly predict the structures desired. In that direction, it may be interesting to other experts wanting to achieve maximum performance on this application. \\n\\nOther than the CNN baseline, it is hard to know how difficult this problem is, since the dataset is apparently novel. On the other hand, the paper takes a \\u201ckitchen sink\\u201d approach to selecting an appropriate algorithm (as evidenced by the complex pipeline and extensive appendix with additional variations). 
It is, thus, difficult to know whether the improvements are coming from these particular innovations or from some other source. For example, in some cases it is clear that the addition of multi-scale or more layers of features reduces the training error significantly and it could be that other more orthodox architectures could achieve similar results. What insight should a reader interested in other domains distill from these results?\", \"pros\": \"Interesting application involving a structured output, though similar to image segmentation.\\nAuthors cover a wide search through a novel type of architecture to yield a successful labeler. May be valuable to domain experts.\", \"cons\": \"A complex pipeline that combines many components [from prior art], making it hard to see what is contributed outside of this particular application.\\nDifficult for non-expert to judge quality of results.\", \"other_comments\": \"The addition of dropout appears to enhance generalization (along with several of the other modifications). This might help the CN baseline as well. It may be useful / interesting to understand what explains the improved generalization. (E.g., is the CN overfitting, and this avoided by the use of unsupervised training?)\"}" ] }
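The recursive stacking discussed in this exchange (a shallow supervised classifier on unsupervised features, whose affinity predictions are fed back as extra inputs to a second round of feature learning and classification) can be sketched schematically. This is a hypothetical stand-in that uses k-means distances in place of the paper's vector-quantization stage and a small scikit-learn MLP in place of its classifier; it is not the released DAWMR software, and the data, layer sizes, and cluster counts are invented for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Toy stand-in data: each row plays the role of a flattened local patch of the
# volume, and y marks whether the corresponding affinity-graph edge is "on".
X = rng.normal(size=(2000, 64))
y = (X[:, :8].sum(axis=1) > 0).astype(int)

def unsupervised_features(X, codebook):
    # Stand-in for the unsupervised feature stage: distances to a learned
    # codebook take the place of the VQ/pooling features.
    return codebook.transform(X)

# Level 1: unsupervised features -> shallow supervised classifier.
codebook1 = KMeans(n_clusters=32, n_init=10, random_state=0).fit(X)
level1 = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
level1.fit(unsupervised_features(X, codebook1), y)
affinity1 = level1.predict_proba(unsupervised_features(X, codebook1))[:, 1:]

# Level 2 (the recursive step): recompute features from the original input
# concatenated with the level-1 affinity predictions, then retrain.
X2 = np.hstack([X, affinity1])
codebook2 = KMeans(n_clusters=32, n_init=10, random_state=0).fit(X2)
level2 = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
level2.fit(unsupervised_features(X2, codebook2), y)
print(level2.score(unsupervised_features(X2, codebook2), y))
```

The point raised by reviewer cf06 maps onto this sketch directly: the level-2 classifier differs from level 1 both in effective depth and in the information it sees, so disentangling the two requires the control experiments the authors describe in their reply.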
YHGzHsybzQU0l
Factorial Hidden Markov Models for Learning Representations of Natural Language
[ "Anjan Nepal", "Alexander Yates" ]
Most representation learning algorithms for language and image processing are local, in that they identify features for a data point based on surrounding points. Yet in language processing, the correct meaning of a word often depends on its global context. As a step toward incorporating global context into representation learning, we develop a representation learning algorithm that incorporates joint prediction into its technique for producing features for a word. We develop efficient variational methods for learning Factorial Hidden Markov Models from large texts, and use variational distributions to produce features for each word that are sensitive to the entire input sequence, not just to a local context window. Experiments on part-of-speech tagging and chunking indicate that the features are competitive with or better than existing state-of-the-art representation learning methods.
[ "features", "representation", "word", "representations", "global context", "natural language", "algorithms", "language" ]
submitted, no decision
https://openreview.net/pdf?id=YHGzHsybzQU0l
https://openreview.net/forum?id=YHGzHsybzQU0l
ICLR.cc/2014/conference
2014
{ "note_id": [ "YYMOsvf5O_svq", "rrPhwCd1KswBT", "Uy5xO0qPmo2CA", "eXJaY4DxhuXnF", "hcqO1f7vjQh68", "_XBNcqV3Dycxk", "3bUlnTHBjonIL" ], "note_type": [ "comment", "review", "review", "comment", "review", "review", "comment" ], "note_created": [ 1392897900000, 1391785140000, 1391736360000, 1392742200000, 1391806560000, 1391650680000, 1392794640000 ], "note_signatures": [ [ "Anjan Nepal" ], [ "anonymous reviewer 9228" ], [ "anonymous reviewer 590a" ], [ "Anjan Nepal" ], [ "anonymous reviewer 198e" ], [ "anonymous reviewer 9228" ], [ "Anjan Nepal" ] ], "structured_content_str": [ "{\"reply\": \"Anonymous 198e:\\nThank you for your comments.\\nWe have added the citations in the paper that are relavant and also tried to clarify what is borrowed from the previous work and what is novel. We try to answer other comments below.\\n\\n\\nThe terms corresponding to S_1 are missing from P({S_t}|{Y_t}) and Q({S_t}|{Y_t}).\\n\\n- We did this for simplicity. This is stated in the paper: 'initial distribution is defined the same way except that we drop the indicators for the previous time step and use parameters \\theta_mk rather than \\theta_mjk'.\\n\\n\\nIn equations (3), (12), and (13), the denominator contains S^m_{t-1,j} instead of S^m_{t,k'}.\\n\\n- We believe that the equations are correct.\\n\\n\\nIt would be cleaner to parameterize the multinomial distributions directly in terms probabilities (as was done by Ghahramani and Jordan) instead of log-unnormalized-probabilities.\\n\\n- In the derivation, we did not assume that these distributions in the original and the variational model are equal but the minimization of the KL-divergence shows that they are equal. However, your suggestion can make the equations look cleaner. We will update the paper soon to reflect the changes.\\n\\n\\nThe parameterization of the variational posterior in equation 13 is odd. Why are the transition terms explicitly normalized but the observation terms are not?\\n\\n- Transition parameters are explicitly normalized to make them similar to the parameters in the original model. If we use the distributions directly in terms of probabilities (according to the previous comment), then this will also simplify. For the observation variational parameters, we did not include the normalization in the equations to make the calculations easier. First we find the phi parameters according to the equation 19. But to use them in the forward-backward, they need to provide proper probability distribution. Hence, we locally normalize them (per layer and timestep). i.e. divide by sum_k ( exp (phi_mtk) ) for all m and t.\\n\\n\\nSince distributed but not context-dependent representations are not included in the evaluation, it is not possible to disentangle the effects of those two factors on the performance of the FHMM representations. \\n\\n- We have updated the paper with the results using the 50-dimensional word-embeddings trained using neural network based on Collobert and Weston (2008) on our data.\\n \\n\\nRepresentations learned by neural language models as well as the 'non-temporal FHMM' (effectively a distributed mixture model) would be the natural baselines for this. \\n- We agree that the non-temporal FHMM would be a nice comparison, and are working to try to include it in a final version. We ask that in the mean time, the reviewer look at the new results we have added for the neural language model, which the FHMM outperforms. 
Also note that Brown clusters (results included from our original submission) provide another form of 'non-temporal' clustering, in that once the clusters are trained, they are functions of word type only, and not the local context of a token.\"}", "{\"review\": \"One extra comment: the approach bears some similarity to the model of Grave, Obozinski and Grave (CoNLL 2013), where they associate latent variables with nodes in a syntactic dependency tree and draw words conditioned on the variables. As in this paper, Grave et al. do not perform hard clustering (as in Brown clustering) but rather approximate posteriors at test time (based on the entire sentence).\"}", "{\"title\": \"review of Factorial Hidden Markov Models for Learning Representations of Natural Language\", \"review\": \"The author use a Factorial Hidden Markov Model (FHMM) on two Natural Language Processing tasks, namely POS tagging and Chunking. Such a model associates a factorial state (a tuple of states) with each word position in the corpus. This can be interpreted as a left-context dependent word representation. In order to evaluate the quality of this representation, the authors train the representation on WSJ data from the Penn repository and test on biomedical text. This domain adaptation performance is compared with that of a variety of systems (no learned features, HMM features, Brown clusters). Although the approach makes sense, the empirical evaluation leaves questions open. For instance, before seeing domain adaptation performance results, I would have liked to know whether the FHMM representation leads to competitive performance when testing on similar data. Also the domain adaptation performance is not compared with the performance afforded by some of the readily available word embeddings (other than Brown clusters.) In conclusion, I do not really know what to think of the performance of FHMMs on such tasks.\"}", "{\"reply\": \"Thank you for your comments. We try to answer them below:\\n1, 2 and 3: We have updated the paper by taking your suggestions in consideration. In short, 1)we have properly cited the original FHMM model where necessary, 2) They are found by setting the derivative of the objective function wrt phi to zero which has a closed form solution and 3) We updated the results by using the prefixes of size 4, 6, 10 and 20 in addition to the whole path of the brown clusters.\\n4. Although it is an interesting idea, we have not used it in our current results.\\n5. We have updated the paper with the results from using 50-dimensional word embedding trained on our data using Collobert and Weston (2008).\", \"6_and_extra_comment\": \"Thank you for the pointers on the related work. We have added them in our previous work section.\"}", "{\"title\": \"review of Factorial Hidden Markov Models for Learning Representations of Natural Language\", \"review\": \"The authors propose using a factorial hidden Markov model (FHMM), which is an HMM with multiple latent Markov chains cooperating to produce observations, to induce context-dependent word representations from text. They derive a variational EM algorithm for training these models efficiently and use a structured variational posterior consisting of independent Markov chains to capture the sequential structure of the exact posterior. 
Given a sequence of words, the authors obtain their representations by computing the variational posterior for the sequence and representing each word with the resulting distribution over the states of the latent variables associated with the timestep that generated the word.\\n\\nAlthough HMMs have been used to induce word representations before, using factorial HMMs for this task is novel. The variational algorithm for training FHMMs on multinomial observations, such as words, derived in the paper appears to be new as well.\\n\\nThe idea of using FHMMs to obtain context-dependent word representations is a sensible one. Unfortunately, the paper is not particularly well written and the experimental evaluation is not sufficiently convincing. The variational EM algorithm derived by the authors is a relatively minor modification of the algorithm of Ghahramani and Jordan (1997), something the paper does not state clearly enough. The form of the variational posterior comes from the same paper. The bound in Eq. 15 has already been used in a very similar setting by Blei and Lafferty (2007). The presentation of the derivation is unclear and contains a number of small mistakes. For example:\\n-The terms corresponding to S_1 are missing from P({S_t}|{Y_t}) and Q({S_t}|{Y_t}).\\n-In equations (3), (12), and (13), the denominator contains S^m_{t-1,j} instead of S^m_{t,k'}.\\n-It would be cleaner to parameterize the multinomial distributions directly in terms probabilities (as was done by Ghahramani and Jordan) instead of log-unnormalized-probabilities.\\n-The parameterization of the variational posterior in equation 13 is odd. Why are the transition terms explicitly normalized but the observation terms are not?\\n\\nThe experimental section does have some interesting results but suffers from a non-standard experimental protocol and absence of some obvious baselines. The representations produced by FHMM are interesting because they are distributed and context-dependent. Since distributed but not context-dependent representations are not included in the evaluation, it is not possible to disentangle the effects of those two factors on the performance of the FHMM representations. Representations learned by neural language models as well as the 'non-temporal FHMM' (effectively a distributed mixture model) would be the natural baselines for this. Until this issue is addressed, it will remain unclear how much context dependence contributes and whether using FHMMs instead of simpler models is worth the effort. Finally, it is unfortunate that the authors chose not to use exactly the same training data as in [25] and [6], making it impossible to compare their results to those in the literature.\"}", "{\"title\": \"review of Factorial Hidden Markov Models for Learning Representations of Natural Language\", \"review\": \"The paper looks into representing words as (approximate) posterior distributions of states in a Factorial HMM estimated on a unlabeled dataset of a moderate size. Unlike standard word representations (e.g., obtained with the neural probabilistic model of Bengio et al. (2003)), these representations encode not only properties of a word but also properties of the word context (in theory unbounded) . For example, this may result in a form of word sense disambiguation. Of course, this come at the cost of performing inference at test time. 
The authors evaluate their approach on PoS tagging and chunking tasks.\\n\\nI find the paper quite interesting, as using some form of disambiguation does seem like a good idea to me.\", \"comments\": \"1) The authors use a variational approximation which can be regarded as a form of the structured variational method for Factorial HMMs introduced in Ghahramani & Jordan (MLJ 1997) (not cited). It would be good to explain which parts of the inference algorithm (e.g., relaxation for softmax) are novel, and which are borrowed from previous work.\\n2) I do not quite understand how exactly the variational parameters phi are computed -- something like Newton-Raphson? (In the paper: 'these expectations are used to find the new set of variational parameters', par 1, page 6)\\n3) I wonder if Brown clusters are used as atomic variables or the features are paths in the binary tree (as in Koo et al. (ACL 2008)). Using paths may help substantially. \\n4) Actually, factorial HMMs induce other types of word representations as well: parameters associated with emission distributions can perhaps be regarded as such representations. I am wondering if the authors considered using them as an additional baseline. (Of course, this representation would not be affected by the context.)\\n5) It would be nice to see methods using word embeddings produced by more conventional methods as additional baselines. This would make the paper more convincing.\\n6) Stochastic neural network models (namely sigmoid belief networks) for syntactic parsing of Titov & Henderson (ACL 2007) and Henderson & Titov (JMLR 2010) may also be relevant. They learn representations of parsing states (and some of their states corresponds to word emissions). The posterior distribution of the state variables are affected by global context. They also use variational approximation methods somewhat similar to the ones considered in this submission.\"}", "{\"reply\": \"Thank you for your comments. We would like to make a minor correction to your statement that the word representations are dependent on both the left and right contexts in the sentence, not just the left-context.\", \"we_would_like_to_answer_few_of_your_comments_below\": [\"before seeing domain adaptation performance results, I would have liked to know whether the FHMM representation leads to competitive performance when testing on similar data.\", \"We performed an experiment with the PoS tagging experiment with an in-domain setting (WSJ, trained on 38K sentences and tested on 2K sentences). Compared to the baseline, use of 5 layered 10 states FHMM reduced the error on all words from 3.27 to 2.83 and on OOV words from 4.86 to 4.32. We did not include these results in the paper because we only focused on the use of FHMMs as representation model for domain adaptation. We wanted to see if FHMM can give representations which are common to both domains and hence helps the supervised classifier in improving the accuracy on the target domain. From the results, we see that is the case.\", \"Also the domain adaptation performance is not compared with the performance afforded by some of the readily available word embeddings\", \"We have updated the paper adding the results using 50-dimensional word embeddings trained using neural network based on Collobert and Weston (2008) on our data.\"]}" ] }
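To make the context-dependent word representations discussed in this record concrete: each word is represented by the approximate posterior state marginals of the M latent chains at its position, obtained with forward-backward under the structured variational approximation, so the representation depends on the entire sentence rather than a local window. The sketch below assumes the per-chain variational transition and observation potentials are already given (it does not reproduce the paper's variational EM updates), and all parameter values are random toy placeholders.

```python
import numpy as np

def forward_backward(pi, A, obs_pot):
    # Posterior state marginals for one latent chain, given initial
    # distribution pi, transition matrix A (rows sum to one), and
    # per-timestep observation potentials obs_pot of shape (T, K).
    T, K = obs_pot.shape
    alpha = np.zeros((T, K))
    beta = np.ones((T, K))
    alpha[0] = pi * obs_pot[0]
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * obs_pot[t]
        alpha[t] /= alpha[t].sum()
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (obs_pot[t + 1] * beta[t + 1])
        beta[t] /= beta[t].sum()
    gamma = alpha * beta
    return gamma / gamma.sum(axis=1, keepdims=True)

def word_representations(chains):
    # One row per word: the concatenated posterior marginals of all chains.
    return np.hstack([forward_backward(pi, A, obs) for pi, A, obs in chains])

# Toy example: M = 3 chains with K = 4 states over a 6-word sentence.
rng = np.random.default_rng(0)
M, K, T = 3, 4, 6
chains = []
for _ in range(M):
    pi = np.full(K, 1.0 / K)
    A = rng.dirichlet(np.ones(K), size=K)   # each row is a transition distribution
    obs = rng.random((T, K))                # stand-in variational potentials
    chains.append((pi, A, obs))
print(word_representations(chains).shape)   # (6, 12): one M*K feature vector per word
```

This also shows the cost the reviewers point out: unlike a fixed word embedding table, producing these features at test time requires inference over each input sentence.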
srkxraD5zAMCX
Correlation-based construction of neighborhood and edge features
[ "Balázs Kégl" ]
Motivated by an abstract notion of low-level edge detector filters, we propose a simple method of unsupervised feature construction based on pairwise statistics of features. In the first step, we construct neighborhoods of features by regrouping features that correlate. Then we use these subsets as filters to produce new neighborhood features. Next, we connect neighborhood features that correlate, and construct edge features by subtracting the correlated neighborhood features from each other. To validate the usefulness of the constructed features, we ran AdaBoost.MH on four multi-class classification problems. Our most significant result is a test error of 0.94% on MNIST with an algorithm which is essentially free of any image-specific priors. On CIFAR-10 our method is suboptimal compared to today's best deep learning techniques; nevertheless, we show that the proposed method outperforms not only boosting on the raw pixels, but also boosting on Haar filters.
[ "features", "construction", "neighborhood", "edge features", "neighborhood features", "abstract notion", "edge detector filters", "simple", "unsupervised feature construction", "pairwise statistics" ]
submitted, no decision
https://openreview.net/pdf?id=srkxraD5zAMCX
https://openreview.net/forum?id=srkxraD5zAMCX
ICLR.cc/2014/conference
2014
{ "note_id": [ "DDNVMlw3s76Jb", "qQc-FS6LJ1Fix", "Ftelj_S85VjH4", "_voA_q1WJavfX", "uurzmOgjnbmX1", "IILuhH0xutIc8" ], "note_type": [ "comment", "review", "review", "review", "comment", "review" ], "note_created": [ 1392611280000, 1391978580000, 1392611340000, 1391940420000, 1392611220000, 1391482500000 ], "note_signatures": [ [ "Balazs Kegl" ], [ "anonymous reviewer 1142" ], [ "Balazs Kegl" ], [ "anonymous reviewer 1cb9" ], [ "Balazs Kegl" ], [ "anonymous reviewer d9df" ] ], "structured_content_str": [ "{\"reply\": \"'Why not run something like a linear SVM on top of the constructed features?'\\n\\nYes, this could be done, although note that after 400K iterations (of 4-leaf trees), boosting is using all features several times, so SVM would be just another way to combine all the features. I chose boosting because we have a good 'in-house' implementation, because on raw pixels on MNIST it outperforms SVMs (1.25% vs 1.4%), and in general, our implementation of AdaBoost.MH is on par with SVMs (see my other submission to ICLR).\"}", "{\"title\": \"review of Correlation-based construction of neighborhood and edge features\", \"review\": \"In its present form, the paper proposes a very neat task-agnostic feature transformation trick that brings Adaboost nearly up to the start of the art on image classification tasks, which is quite a feat. The trick seems so far to bring top performance on tasks where there are known feature correlations (like pixels) but not on others.\\n\\nWhile it would be a nice workshop contribution in its present state, I think it requires some work to be a significant contribution to representation learning: some important aspects of the work are not properly explained, and experiments only use one algorithm (Adaboost.MH with hamming trees) and would need to be try other algorithms (SVMs, first layer of deep learning).\\n\\nThe authors take the traditional local smoothing and edge detection used for image processing, but remove all knowledge that pixels are adjacent. Thus the smoothing can be done on any set of pixels provided they are similar (measured through correlation). The correlation is measured again between smoothed pixels to determine edges. I assume this is original, as in its exhaustive form, one would probably discard the method as computationally not practical (I assume it scales as O(N_example*N_features^2)).\\nTo my opinion, an important contribution is the use of boosting in an auto-associative setting to select a subset of pixels that is used to select the neighborhoods, however, this appears in the paper only as a speedup trick. From an image processing viewpoint, what are these most predictive pixels picked up by Adaboost? This actually a very neat representation learning trick I have not seen elsewhere.\"}", "{\"review\": \"'To my opinion, an important contribution is the use of boosting in an auto-associative setting to select a subset of pixels that is used to select the neighborhoods, however, this appears in the paper only as a speedup trick. From an image processing viewpoint, what are these most predictive pixels picked up by Adaboost? This actually a very neat representation learning trick I have not seen elsewhere.'\\n\\nThe selected pixels (and channels) are in Figure 2. 
\\n\\nWe have experimented for a while with autoassociative boosting about six years ago to replace autoassociative neural nets, it worked nicely for the first layer but stacking never improved the error, so we dropped the line of research.\"}", "{\"title\": \"review of Correlation-based construction of neighborhood and edge features\", \"review\": \"The paper proposes a method for feature construction/augmentation based on grouping features that correlate, in an iterative/recursive fashion. These feature sets are validated using AdaBoost.MH and a few interesting results on MNIST and CIFAR are presented, as well as a few interesting negative results. The authors claim a relatively state of the art results on MNIST (without using image priors). While the CIFAR results are not as competitive with state of the art, they do improve on the boosting state of the art.\", \"the_method_is_relatively_simple\": \"group features that correlate, then connect neighborhood features that correlate and construct edge features by subtracting the correlated neighborhood features. The authors suggest this was inspired by biology/Haar/Gabor filters, but I am not sure this connection is particularly informative. A hybrid AdaBoost.MH with decision stumps (for picking the features to augment) with AdaBoost.MH + Hamming trees was run.\\n\\nOn MNIST this gives a 0.94% test error, which sounds like state-of-the-art(-ish). Interestingly, on CIFAR-10 this kind of approach beats boosting approaches on top of Harr filter features (though is not very close to state of the art!). For smaller and lower-dimensional datasets, the gains are relatively small -- the authors hypothesize that this is because the dimensionality is too low basically.\\n\\nWhy not run something like a linear SVM on top of the constructed features? I think it would be an interesting result too. Or even just a second-degree polynomial expansion of the feature-set found by the decision stump phase? Would make the results in this paper stronger, in my opinion: right now, it\\u2019s unclear if the extra gain is because of the clever selection of which features to combine or simply because features were combined at all.\"}", "{\"reply\": \"'I find the section Constructing the representation hard to follow. It would help to use a much more detailed description. Also, under 1. the term 'features' is used to refer to the learned representations as well as to pixels (second page, after 1.)',\\n\\nThis is sort of an intended terminology since 1) all these features (including pixels) are representations of the input, and 2) the procedure can be applied recursively, as in stacked autoencoders. I added the adjective 'raw' in some places where where it is clear that I am talking about pixels. I also added a footnote to make this choice explicit.\\n\\n'under 2. the same indexes are used on the two sides of the equal sign'\\n\\nCorrected. Thanks for spotting this.\\n\\n'and under 3. defining the symbol 'equal with triangle on top' would help.'\\n\\nIt was a shorthand for defining (a notation for) elements of the edge set, and I agree that it is confusing. I split the sentence and made the definition of the notation explicit.\\n\\n'The result of CIFAR-10 seems encouraging. 
Is this without image priors (that is, permutation invariant)?'\\n\\nYes, that's exactly the point.\\n\\n'As is, I find the paper quite hard to read, and it will be important to improve the clarity of the presentation.'\\n\\nPlease point out the exact problems and confusing notations/notions, I would be happy to take your concrete suggestions into account.\\n\\nI uploaded a new version to arxiv with the suggested corrections.\"}", "{\"title\": \"review of Correlation-based construction of neighborhood and edge features\", \"review\": \"The paper proposes a new method for constructing features by hand. Features are constructed as follows: First, one averages the input dimensions that are strongly correlated (for images these will be nearby pixels). Then one attaches to the result the differences between subsets of pairs of these features which are still highly correlated.\\n\\nThe beginning of the introduction is just a copy of the abstract. \\n\\nI find the section Constructing the representation hard to follow. It would help to use a much more detailed description. Also, under 1. the term 'features' is used to refer to the learned representations as well as to pixels (second page, after 1.), under 2. the same indexes are used on the two sides of the equal sign, and under 3. defining the symbol 'equal with triangle on top' would help.\\n\\nThe construction of the features reminded me of locally binary patterns LBP (unless I am missing something), so it feels like it would be good to compare to these.\\n\\nWhile 94 errors on MNIST seems decent, the method contains a huge number of hyperparameters (first level correlation threshold, second level correlation threshold, several boosting parameters, size and choice of random matrix to estimate correlations etc. etc. etc.).\\n\\nThe result of CIFAR-10 seems encouraging. Is this without image priors (that is, permutation invariant)?\\n\\nAs is, I find the paper quite hard to read, and it will be important to improve the clarity of the presentation.\"}" ] }
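The two-step construction described in this record (regroup correlated raw features and average them, then subtract correlated neighborhood features from each other) maps almost directly to code. The sketch below omits the paper's correlation-estimation subsampling and the boosting-based selection of seed features, and its thresholds and toy data are arbitrary illustration choices rather than the paper's hyperparameters.

```python
import numpy as np

def neighborhood_and_edge_features(X, nbhd_thresh=0.8, edge_thresh=0.6):
    # Step 1: for each raw feature, group the raw features that correlate
    # with it and average them ("neighborhood" features).
    # Step 2: subtract correlated neighborhood features from each other
    # ("edge" features). Raw features are kept alongside the new ones.
    C = np.corrcoef(X, rowvar=False)
    groups = [np.flatnonzero(C[j] >= nbhd_thresh) for j in range(X.shape[1])]
    N = np.column_stack([X[:, g].mean(axis=1) for g in groups])

    CN = np.corrcoef(N, rowvar=False)
    pairs = [(a, b)
             for a in range(N.shape[1])
             for b in range(a + 1, N.shape[1])
             if CN[a, b] >= edge_thresh]
    E = (np.column_stack([N[:, a] - N[:, b] for a, b in pairs])
         if pairs else np.empty((X.shape[0], 0)))
    return np.hstack([X, N, E])

# Toy data: 20 "pixels" sampled from smooth random curves, so that nearby
# features correlate the way neighboring pixels do in an image.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2 * np.pi, 20)
coef = rng.normal(size=(500, 3))
X = (coef[:, [0]] * np.sin(t) + coef[:, [1]] * np.cos(t) + coef[:, [2]]
     + 0.2 * rng.normal(size=(500, 20)))
print(neighborhood_and_edge_features(X).shape)
```

On data with spatial structure the constructed features behave like local smoothing followed by local differencing, which is the abstract "edge detector" intuition the authors invoke, but nothing in the construction uses pixel adjacency explicitly.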
UVH3Ucewd-IXZ
Deep learning for neuroimaging: a validation study
[ "Sergey M. Plis", "Devon R. Hjelm", "Ruslan Salakhutdinov", "Vince D. Calhoun" ]
Deep learning methods have recently enjoyed a number of successes in the tasks of classification and representation learning. These tasks are very important for brain imaging and neuroscience discovery, making the methods attractive candidates for porting to a neuroimager's toolbox. Successes are, in part, explained by a great flexibility of deep learning models. This flexibility makes the process of porting to new areas a difficult parameter optimization problem. In this work we demonstrate our results (and feasible parameter ranges) in application of deep learning methods to structural and functional brain imaging data. We also describe a novel constraint-based approach to visualizing high dimensional data. We use it to analyze the effect of parameter choices on data transformations. Our results show that deep learning methods are able to learn physiologically important representations and detect latent relations in neuroimaging data.
[ "neuroimaging", "successes", "tasks", "results", "deep learning", "number", "classification", "representation learning" ]
submitted, no decision
https://openreview.net/pdf?id=UVH3Ucewd-IXZ
https://openreview.net/forum?id=UVH3Ucewd-IXZ
ICLR.cc/2014/conference
2014
{ "note_id": [ "VVX0xVF70w4Cr", "GGNxGzRU2OzPC", "cUHZQQoWZzQ1D", "Toelaad_lZoit", "g5Jf9H1coI92C", "B2r9B231Rt2GZ", "0F8eF0qrnVL_s", "VslF-s_mF8-qC", "xNu3xNM93fI1d" ], "note_type": [ "review", "review", "comment", "comment", "review", "comment", "comment", "comment", "review" ], "note_created": [ 1392167520000, 1390080000000, 1390987800000, 1392822120000, 1392144300000, 1392822060000, 1390987800000, 1390990980000, 1392843720000 ], "note_signatures": [ [ "anonymous reviewer d143" ], [ "anonymous reviewer 0657" ], [ "Sergey Plis" ], [ "Sergey Plis" ], [ "anonymous reviewer 4ea9" ], [ "Sergey Plis" ], [ "Sergey Plis" ], [ "Sergey Plis" ], [ "Sergey Plis" ] ], "structured_content_str": [ "{\"title\": \"review of Deep learning for neuroimaging: a validation study\", \"review\": \"The English in the paper is poor making it hard to read.\\n\\nThere is a lot of material and the changing architectures, datasets and tasks make the whole thing hard to follow. \\n\\nYou introduce sMRI without explaining it. \\n\\nYou don't describe the temporal nature of your data. (I presume it's one volume every T ms, but you have to say that and tell us what T is. \\n\\nThe datasets are specialist and results aren't compared to other authors, so it's hard to really understand performance. \\n\\nThe task in 2.2 is not explained. What are the classes in the GT. is this really an unsupervised task? The results (what is being measured) in Fig 1c are not clear. Please elucidate. \\n\\nConventionally I believe a DBN is a stacked RBM - when you add a classification layer, please call it a DNN.\\nFor the RBM / DBN please specify the connectivity. Are inputs / hidden connected all:all?\\n\\n\\nBy adding a second layer you don't explicitly show that depth helps the improvement could equally be because adding more parameters helps. You need to show the correct control experiment. Adding a 3rd layer may similarly be causing you to overfit because you added too many parameters. \\n\\nYou don't explain the reasoning behind applying ICA spatially and the DBN temporally. \\n\\nIs the ICA implementation on a GPU too? \\n\\nYou say 'The training accuracy at the fine tuning stage for the 335 subjects was: 82%, 87%, and 86.5% respectively.'\\nWhat are these 3 numbers? You can't say respectively in this sentence.\"}", "{\"title\": \"review of Deep learning for neuroimaging: a validation study\", \"review\": \"The paper starts by describing the success of deep learning, which has been incredible, and from the very beginning sounds like the standard paper of 'let's try a successful technique from other areas in our problems.' While interesting, it has problems to start with, not all popular techniques should be applied to all problems.\\n\\nThe paper presents an interesting introduction, with philosophical components, and some I either don't understand or they are not accurate.\\nOracle is the theoretical limits of a problem, and if there is something we are fulling lacking is theory for deep learning (though some old one exists), so not clear what the authors mean.\\n\\nSecond, the authors should be aware of works on model compression that show that deep learning might actually not be needed and shallow nets can achieve almost same results, a paper on the topic even submitted to this conference.\\n\\nI am not an RBM expert, only familiar with the subject, so I read it but I am not judging its correctness.\\n\\nWas SM (Figure 1) ever defined? I can't find it even with my search function in the editor. 
This is critical.\\n\\nNot critical if when comparing with ICA, you are using the last techniques developed by Smith and his team and available in the public code from Oxford (double ICA, etc). So are comparisons with the state-of-the-art?\\nAlso, while is stated for Fig. 3 that RBM results are more supported by biology, no reference is providing to support this claim.\\n\\nThe multiplayer comparison in Section 3 makes no sense since parameters are significantly increasing when adding layers. The authors should look at work on model compression to see how these comparisons need to be made, and they might be surprised with the results, or maybe not, but increasing parameters by orders of magnitude is not fair.\\n\\nThe paper has major issues then, but the authors have done a lot of experimentation, and while under normal circumstances I would certainly recommend to reject, I think that their presence at the conference can lead to interesting discussions. The authors have not convinced me at all that their approach or deep networks in general is useful for their task, but I do believe in open discussions and I believe the community will benefit much more from the science that might come out from them defending their work at the meeting than from us rejecting outright the paper. Hopefully after the meeting we might be able to conclude if this is a direction worth investigating or not.\"}", "{\"reply\": \"Thank you very much for your time and effort put in reviewing our article. Apologies for a late response. oppenreview.net did not send a notification of the comment. I appreciate your comments some of which are addressed below. While the techniques addressed are indeed 'popular' in the areas where classification and representation learning are important (as in neuroimaging) it seems unwise to ignore their success just because many people are using them. Instead, our paper focuses on validating deep learning within a range of tasks that are 'very important for brain imaging and neuroscience discovery'. Our results, we believe, speak for themselves. Specific concerns that were raised are addressed below.\\n\\nNote that we use the term 'oracle' in the sense of a person (or a device) giving correct answers without explanation of how to get them without implying or requiring any theory.\\n\\nThank you for mentioning another interesting area that may fair better against deep learning and needs validation as well. Are you referring to 'Do Deep Nets Really Need to be Deep?' submitted to the current conference? I will definitely need a closer look at the paper, but however promising it seems that the proposed method still requires outputs of a trained deep net and results presented in our paper will be of great use for neuroimaging practitioners.\\n\\nThank you for catching our failure to define SM as spatial map in the paper. This omission has been corrected but we will wait for other reviewers to comment before uploading the new version.\\n\\nWe have performed comparisons with the most widely used ICA approaches and several non-ICA approaches that are not as popular but have been applied in neuroimaging as well.\\n\\nIt is unclear why would one worry about parameter increase if it does not lead to overfitting. In our case (i.e. Table 1) no overfitting is observed as we report cross-validation results on hold out data. 
From the literature we know that when the deep learning community claims performance increase with depth they exactly mean testing performance (and this claim we have validated positively for neuroimaging data). It is clear training data will only be overfit with more parameters, but hold out data does not have this problem. Note also, parameter increase is linear in the number of layers. For example for our 50-50-100 networks we have 50*60465 parameters for depth 1 net, 50*60465 + 50*50 for depth 2 net and 50*60465 + 50*50 + 50*100 for depth 3. This is 0.08% (sic!) increase in parameters from depth 1 to depth 2, 0.16% (sic!) from depth 2 to depth 3 and only 0.24% from depth 1 to depth 3. Yet classification performance increases by much more. I may have misunderstood your question, but it does not seem to me there is a problem here.\"}", "{\"reply\": \"Thank you for the effort you have put in reviewing our paper and the useful comments that lead to multiple changes in the manuscript.\\n\\nAs you have noticed classification experiments do suffer from the double dipping, which we used as admitted in the previous version for the reasons of computational efficiency. To address this concern for Section 3.3 we have completely redone the experiment to evaluate the performance via a 10-fold cross validation. The paper has been changed accordingly. Besides that we have added the k-nearest neighbor classification results to Section 3.3. Now the performance numbers more accurately reflect the truth as we only report them on the hold out test data in Table 1. Unfortunately, the computation time has delayed our response to reviewers' comments.\\n\\nIn Section 3.4 (3.3 in your review) our goal is indeed as stated in the first paragraph: investigate if deep learning has potential in assisting discovery. We indeed deliberately train the DBN (pre-train and fine tune) and most likely overfit (as you have noted f1 score of 1) on the complete dataset. We're not testing here anymore abilities of the DBN to classify subjects correctly, but rather, as you've noted positively asses if deep learning can be used for goals beyond generalization accuracy. Thank you for pointing out the fact that similar structures in Sections 3.3 and 3.4 may lead a reader to expect the same findings (or refutations). To reinforce the differences of study 3.4 from 3.3 we have modified the text and stressed the point that we do overfit (possibly) but generalization is nor the goal or the metric in this section. Please see the diff file for the changes we have made.\\n\\nThank you for requesting the details of the RBM training, as they will only make the paper more useful to a reader. We did, however, present the manipulated energy function for real-valued input in expression (1). Here is the information that we have added to the paper:\\n\\n1. Note, that in practical implementations if the activation nonlinearity belongs to the exponential family we need not worry about the changes in the energy as it is consistently described as a function of sufficient statistics of the model. See for example (Welling 2005) and reference [16] in our paper. As mentioned on page 4 (last line of Section 2.3) we use the following implementation for our experiments: https://github.com/nitishsrivastava/deepnet and consequently defer to all implementation decisions made thereof. The tanh, for example, is sampled from Bernoulli distribution where the value of failure is set to -1. 
If we were implementing the algorithm we could have chosen other sampling methods. For example, uniformly sampling in the [-1,1] range and taking arctanh of the samples.\\n\\nWelling, Max, Michal Rosen-Zvi, and Geoffrey E. Hinton. 'Exponential Family Harmoniums with an Application to Information Retrieval.' NIPS, Vol. 17, 2004.\\n\\n2. The weights were updated using the truncated Gibbs sampling method called contrastive divergence with a single sampling step (CD-1).\\n\\n3. L1 regularization was used in the conventional way for reinforcing sparsity of the features, i.e. via an additive L1-norm regularizer on the weights W: +lambda ||W||_1.\\n\\nWe have referred to the paper by Le et al. in the modified version. Although we are familiar with this paper, the ICA method presented there is not a widely used one in the neuroimaging community and we were comparing to a more standard approach. The L1 penalty may describe the Fast ICA algorithm, but is this true for ICA algorithms in general? For example, the Infomax algorithm we are comparing against (one of the most popular ones in neuroimaging) does not rely on L1 or any other explicit sparsity measures.\\n\\nWe have hand-optimized the learning rate and sparsity parameters in an effort to maximize the convergence rate while keeping the optimization stable (grid search to the maximum stable value), and for sparsity we were looking for a parameter that achieves 30% sparsity (with the implicit sparsity setting, as in the used L1 regularizer, the right value is not 0.7). We have modified the paper to reflect this. The network sizes in the DBN experiments were chosen to balance computational complexity and representational capacity. From RBM experiments we have learned that even with a larger number of hidden units (72, 128 and 512) the RBM tends to keep only around 50 features, driving the rest to zero. That is how the number of hidden units in the first layers was selected. Classification rate and reconstruction error still improved a little when increasing the number of features, which led us to increase the number of hidden units in the third layer.\\n\\nWe have added a supplementary material section to support the claims that we've made but for which we had not previously included the supporting data, to keep the paper at a reasonable size. There we show the improved block structure and an actual test of the modularity, supporting one of the claims you've pointed to. We rephrased the paper to highlight the subjective measure of the improved locality of RBM features, and included a longer list of these features in the supplement to support our observation.\\n\\nThe footnote was removed.\", \"note\": \"besides the updated version on arxiv we have placed the version with all changes highlighted at http://cs.unm.edu/~pliz/diff.pdf\"}", "{\"title\": \"review of Deep learning for neuroimaging: a validation study\", \"review\": \"Summary:\\n\\nThe paper applies deep learning methods to various problems in MRI data analysis, showing that DBNs can recover similar features as other standard techniques, and can possibly improve classification performance in certain tasks.\", \"major_comments\": \"The classification experiments in this paper are hard to follow, and as written it seems they may be flawed:\\n\\nFor the results in Section 3.2, it sounds like the models may possibly have been pretrained on the testing data, and worse, possibly fine-tuned on the testing data as well.
In particular, the model is pretrained and fine-tuned on 335 of 389 examples, and then, subsequently, the top layer activations of these fine-tuned models are supplied to a new shallow classifier that is trained on a different train/test split of half the 389 subjects in each. Each of these splits will thus necessarily contain many examples from the training set used for pretraining/fine-tuning. This means there will be many test examples on which the underlying DBN model has already been fine-tuned with the correct label. Hence the performance numbers are inaccurate. \\n\\nFor the results in Section 3.3, all data examples are used for pretraining/finetuning, and hence it is not clear to what extent the learned classifiers will generalize to new examples. This is particularly important to check given that the three layer network attains perfect accuracy, which may indicate overfitting. These concerns are moderated by the fact that dimensionality reduction applied to the learned representation shows an interesting structuring by disease severity, and this data was not used in the training process. I also recognize that, in this data analysis application, there are other important goals beyond generalization accuracy. If the trained model identifies particular features that are interpretable in the light of other known data, that can be very helpful. Drawing out these other uses of DBNs is an interesting direction to pursue.\\n\\nMore details of the RBM model and training procedure should be included in the paper. RBMs typically use 0-1 valued inputs, and the sigmoidal form of the activation function arises by manipulating the energy function. Some details which would strengthen the paper: How was the switch to Tanh activations performed? Were weights updated using contrastive divergence to approximately maximize log likelihood? If so, how many sampling steps were used (e.g., CD-1, CD-10)? Was sampling used, or were the updates based on the mean field approximation? How were tanh units sampled from? How was L1 regularization added? E.g., was it ||tanh(Wx)||_1, or ||sigmoid(Wx)||_1 or ||Wx||_1? \\n\\nDepending on how the switch to tanh units was performed, the RBM algorithm can be nearly identical to ICA. See, for example, Q.V. Le, A. Karpenko, J. Ngiam, A.Y. Ng. ICA with Reconstruction Cost for Efficient Overcomplete Feature Learning. NIPS, 2011. This connection should be cited and discussed. In particular, the statement 'In general, RBM performed competitively with ICA, while providing--perhaps, not surprisingly due to the used L1 regularization--sharper and more localized features' may need some adjustment. L1 regularization is the objective to be minimized in ICA too--if anything, the difference is due to less stringent orthogonalization.\\n\\nSome of the given parameter values are confusing. The learning rate is given as epsilon = 0.08, but this is not in the 'workable range' of [1e-4, 1e-3]. Were these parameters hand optimized? How were network sizes chosen?\\n\\nSome claims are stated without providing the relevant data. E.g., 'Moreover, the block structure of the correlation matrix (not shown) of feature time courses provide a grouping that is more physiologically supported than that provided by ICA.' More discussion of this, maybe in a supplementary materials section after the main paper, would be welcome. 
It is also stated that RBM bases are more 'spatially concentrated' but this is not quantified or clearly established in the presented data.\\n\\nThe text is often unclear and hard to follow (particularly section 3.2).\\n\\nPros/cons:\\n+ Interesting new application area for DL\\n+ Some results that hold promise\\n\\n- Flawed experiments mix train/test data\\n- Insufficient details of model and training method\", \"minor_comments\": \"The first page footnote should be removed\"}", "{\"reply\": \"Thank you for your comments and suggestions. We have heavily modified the paper which should have covered most if not all of your concerns. However, multiple datasets, architectures and concepts are still there as all of them are necessary for our goal of validating an approach in a field that has not used it before. As our goal is to evaluate architectures from deep learning area we use three: RBM, DBM and, as you've mentioned, DNN. To be able to generalize our findings we have applied the approaches to three datasets that are collected at multiple sites and are both static (sMRI) and dynamic (fMRI) in nature.\\n\\nsMRI as well as fMRI were introduced in Section 1. We now have also added a couple of sentences to explain what the data represents.\\n\\nIn Section 2.3 we briefly state that we have 249 scans/volumes per subject. We have also added, following your advice, the information on the sampling rate (or time of response (TR) in the literature).\\n\\nIndeed, by the nature of our work the data is highly domain specific. However, we compare to the state of the art in the neuroimaging field (ICA) when investigating the RBM model. The goals of investigating the deeper models do not require external comparison as we investigate if a deep learning literature claim holds on our data type (Section 3.3) and if deep learning is able to facilitate discovery (Section 3.4). Both questions are positively answered by our work and for this do not require a comparison.\\n\\nWe apologize for not stressing it enough, but the intro paragraph of Section 2 does state that the goal of this section is feature learning. This, as you have noted, is an unsupervised task. The caption of Figure 1c states that what shown is the average correlation to the ground truth for spatial maps, time courses and cross-correlations. Since this is a simulation study (the one in Section 2.2), we are able to measure these correlations.\\n\\nWe have modified the paper to emphasize that we are using the conventional RBM architecture for each layer where all units in a layer are connected to all units in the other layer. As for DNN, we feel it is more of a stylistic convention and it is implied and understood, that when DBN is used as a feed-forward network for classification it is essentially a neural network (a DNN). Having one extra acronym in an already acronym-heavy paper may confuse the reader.\\n\\nPlease see our response to reviewer 0657 above, where we explain why adding parameters is in itself not a problem if it does not lead to overfitting. As our new experiments in Section 3.3 demonstrate the model does not overfit with the increasing depth as classification on hold out data only improves. We feel this is the correct control experiment you are asking for. 
Note, that Section 3.4 pursues a very different goal and overfitting is orthogonal to that goal, we neither care nor even measure it there (see updated version of the manuscript).\", \"as_a_matrix_factorization_method_ica_simultaneously_estimates_two_factors\": \"the mixing matrix and the independent components. Conventional and widely used ICA algorithms are applied to fMRI and MRI data spatially for the reasons of computational efficiency and model determination: a 60000 by 60000 mixing matrix for only a couple of hundreds samples long time courses is very hard to estimate and invert. Note, however, the paper by Le et. al cited by reviewer 4ea9 that is capable of being applied the way we apply RBM. However, most widely used ICA algorithms in neuroimaging are not. For RBM there is only a single set of parameters: the W. Activation sequences of the hidden units can be hardly treated as time courses. Thus, we've used the W as our spatial features, which also concurs with the usage in the image processing community where W form receptive fields.\\n\\nWe did not use a GPU implementation of ICA and noted so in the new version of the paper.\\n\\nThe questionable sentence was not included in the rewritten version of our manuscript.\", \"note\": \"besides the updated version on arxiv we have placed the version with all changes highlighted at http://cs.unm.edu/~pliz/diff.pdf\"}", "{\"reply\": \"Thank you very much for your time and effort put in reviewing our article. Apologies for a late response. oppenreview.net did not send a notification of the comment. I appreciate your comments some of which are addressed below. While the techniques addressed are indeed 'popular' in the areas where classification and representation learning are important (as in neuroimaging) it seems unwise to ignore their success just because many people are using them. Instead, our paper focuses on validating deep learning within a range of tasks that are 'very important for brain imaging and neuroscience discovery'. Our results, we believe, speak for themselves. Specific concerns that were raised are addressed below.\\n\\nNote that we use the term 'oracle' in the sense of a person (or a device) giving correct answers without explanation of how to get them without implying or requiring any theory.\\n\\nThank you for mentioning another interesting area that may fair better against deep learning and needs validation as well. Are you referring to 'Do Deep Nets Really Need to be Deep?' submitted to the current conference? I will definitely need a closer look at the paper, but however promising it seems that the proposed method still requires outputs of a trained deep net and results presented in our paper will be of great use for neuroimaging practitioners.\\n\\nThank you for catching our failure to define SM as spatial map in the paper. This omission has been corrected but we will wait for other reviewers to comment before uploading the new version.\\n\\nWe have performed comparisons with the most widely used ICA approaches and several non-ICA approaches that are not as popular but have been applied in neuroimaging as well.\\n\\nIt is unclear why would one worry about parameter increase if it does not lead to overfitting. In our case (i.e. Table 1) no overfitting is observed as we report cross-validation results on hold out data. 
From the literature we know that when the deep learning community claims performance increase with depth they exactly mean testing performance (and this claim we have validated positively for neuroimaging data). It is clear training data will only be overfit with more parameters, but hold out data does not have this problem. Note also, parameter increase is linear in the number of layers. For example for our 50-50-100 networks we have 50*60465 parameters for depth 1 net, 50*60465 + 50*50 for depth 2 net and 50*60465 + 50*50 + 50*100 for depth 3. This is 0.08% (sic!) increase in parameters from depth 1 to depth 2, 0.16% (sic!) from depth 2 to depth 3 and only 0.24% from depth 1 to depth 3. Yet classification performance increases by much more. I may have misunderstood your question, but it does not seem to me there is a problem here.\"}", "{\"reply\": \"Above two comments are identical, please do not bother reading both. I must have double-clicked the button.\"}", "{\"review\": \"Computation for Section 3.3 have just completed and I have uploaded the revision to arxiv with updated Table 1. Temporarily, while arxiv is processing the paper I am keeping the current version at http://cs.unm.edu/~pliz/iclr2014.pdf\"}" ] }
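To make the RBM training details in the exchange above concrete, here is a minimal sketch of a CD-1 weight update with the additive L1 penalty on the weights W (the '+lambda ||W||_1' term mentioned in the reply). This is a hedged illustration only: it uses a plain Bernoulli-Bernoulli RBM, whereas the paper's real-valued/tanh variant and the deepnet implementation it defers to differ in their exact conditionals, and every name and parameter value below is an assumption rather than the authors' code.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd1_step(W, b_vis, b_hid, v0, lr=0.05, lam=1e-4):
    # positive phase: hidden probabilities and a sample given the data
    p_h0 = sigmoid(v0 @ W + b_hid)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    # one Gibbs step (negative phase)
    p_v1 = sigmoid(h0 @ W.T + b_vis)
    p_h1 = sigmoid(p_v1 @ W + b_hid)
    # approximate likelihood gradient minus the gradient of lam * ||W||_1
    dW = (v0.T @ p_h0 - p_v1.T @ p_h1) / v0.shape[0] - lam * np.sign(W)
    W += lr * dW
    b_vis += lr * (v0 - p_v1).mean(axis=0)
    b_hid += lr * (p_h0 - p_h1).mean(axis=0)
    return W, b_vis, b_hid

# toy usage on random binary data
n_vis, n_hid, batch = 64, 16, 32
W = 0.01 * rng.standard_normal((n_vis, n_hid))
b_vis, b_hid = np.zeros(n_vis), np.zeros(n_hid)
v0 = (rng.random((batch, n_vis)) < 0.5).astype(float)
W, b_vis, b_hid = cd1_step(W, b_vis, b_hid, v0)

The L1 term simply subtracts lam * sign(W) from the gradient, which is the conventional way to push small weights toward zero and is consistent with the sparsity behaviour (most hidden features being driven to zero) described in the reply.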
8G-o3Hm_Z43Cf
Learning Transformations for Classification Forests
[ "Qiang Qiu", "Guillermo Sapiro" ]
This work introduces a transformation-based learner model for classification forests. The weak learner at each split node plays a crucial role in a classification tree. We propose to optimize the splitting objective by learning a linear transformation on subspaces, using the nuclear norm as the optimization criterion. The learned linear transformation restores a low-rank structure for data from the same class and, at the same time, maximizes the separation between different classes, thereby improving the performance of the split function. Theoretical and experimental results support the proposed framework.
[ "classification forests", "transformations", "work", "learner model", "weak learner", "split", "crucial role", "classification tree", "splitting objective", "linear transformation" ]
submitted, no decision
https://openreview.net/pdf?id=8G-o3Hm_Z43Cf
https://openreview.net/forum?id=8G-o3Hm_Z43Cf
ICLR.cc/2014/conference
2014
{ "note_id": [ "ee-Bvr_jOc49x", "JSwPJZLj9ASjN", "sHezsRlPzgm2S", "jjhlkbWNtA_8m" ], "note_type": [ "review", "review", "review", "review" ], "note_created": [ 1391507100000, 1391900340000, 1390989300000, 1391482140000 ], "note_signatures": [ [ "anonymous reviewer 508e" ], [ "Qiang Qiu" ], [ "anonymous reviewer 461d" ], [ "anonymous reviewer fd4c" ] ], "structured_content_str": [ "{\"title\": \"review of Learning Transformations for Classification Forests\", \"review\": \"This paper studies a new split rule for classification trees, based on the distances to two linear subspaces constructed at each node. Subspaces are constructed by partitioning a data subset by class, and finding a linear transformation that attempts to concentrate each same-class space, while maximizing angle distance between the two (different-class) spaces. At test time, a left/right decision is made based on which subspace the test point is closer to (based on projected reconstruction). For multiclass problems, classes are randomly assigned at each node to one of the two subspaces. The method is supported by theoretical analysis, toy examples, simple image classification datasets, and a pixel labeling task.\\n\\nI'm not very knowledgeable in random forests, but this seems like a decent proposal to me, effectively integrating a linear subspace method into the decision rule.\\n\\nThe image classification tasks are on fairly simple datasets, but nonetheless may be good fits for the approach. One question I have is how dependent this method is on image alignment. The three datasets all have fairly well aligned images, potentially lending themselves to gains by integrating linear subspace methods (in pixel-space). Notably, the proposed method performs quite well on 15-Scenes -- I wonder if there may be any additional take-aways concerning alignment of the images in this dataset and how else this might be used in classification of such scene data.\\n\\nThe results from the kinect pixel labeling look pretty good, and strengthens the paper with an application different from classifying aligned images, but it would be nice to include a comparison to Denil et al (the dataset providers).\", \"pros\": [\"Method is well motivated and theoretically analyzed\", \"Demonstrated to be effective in several tasks\", \"Very well written\"], \"cons\": [\"Applications are somewhat simple\"], \"more_questions\": [\"Fig. 5: It would be interesting to put the 'Identity learner' (subspace assignment without the transformation T) in all 3 of these plots. I'd be particularly interested to know the effect of this step on MNIST, which according to Fig 6 seems reasonably separated in the original space as well.\", \"Pixel descriptors for kinect: I think they are depth differences from the center pixel obtained at 32 equally spaced points along circles at 3 different radii (8, 32, and 64), forming a 96-dim descriptor -- is this correct? I wasn't entirely sure based on the description.\", \"'All the images were resized to 16 x 16': It's ambiguous whether this was for just YaleB, or all the datasets?\"]}", "{\"review\": \"Thanks for all these valuable and insightful suggestions. In what follows our responses. All responses marked with an * have already been included in the updated version of the arXiv submission. 
The others are currently being investigated.\\n\\n- '...Does the framework easily extend to other classification techniques?'\\n\\n*Response: The learned transformation at each node reduces intra-class variation and increases inter-class separation. Thus, such a learned (transformed) representation can potentially help in other classification tasks as well, e.g., subspace-based methods [A]. We have incorporated such a comment in the revised paper, see Sec.1.\\n\\n\\n- 'All dataset images used were scaled down to 16x16. How does this method perform on truly high-dimensional data?'; 'All the images were resized to 16 x 16': It's ambiguous whether this was for just YaleB, or all the datasets?; '... the feature space is very large ...'\\n\\n*Response: Such image resizing is only applied to the Extended YaleB and MNIST, which gives a 256-dimensional feature. Given d-dimensional features, in this paper, we only focus on a d*d-sized square transformation matrix. A potential solution to handle high-dimensional data is to learn a 'fat' r*d-sized transformation matrix (r<<d) to compress the data while enhancing the discriminability, as discussed in [A]. This has been mentioned in the revised version, see Sec.2.4.\\n\\n\\n- 'Is there a reason that not all experiments were performed with each of the datasets?'; '... in all 3 of these plots.'\\n\\n*Response: We first compare as many learners as possible in a tree context for accuracy and testing time; then we only compare with learners that are widely adopted for random forests. See the revised version, Sec.3, for a clarification.\\n\\n\\n- '...timing results aren't too informative when reported for a single dataset.'\", \"response\": \"We adopted a gradient descent approach only for its efficiency and simplicity. It is probably not the best approach, and we leave it open for discussions and improvements.\\n\\n\\n- ' Sec 2.4: '... only involves matrix multiplication, which is virtually computationally free at the testing time.' This seems like an odd claim. If d is large, then these matrix multiplications are far from free.'\\n\\n*Response: Assume each learned transformation matrix is of size r*d (r=d in the paper, but r is allowed to be a constant much smaller than d). Each split test consists of two matrix multiplications of complexity O(r*d) using a sequential implementation. With modern architectures, p processors are usually assigned to each multiplication, which reduces the complexity to O(r*d/p) plus some small overhead. Thus, each split test can be reduced to linear complexity. However, we agree 'virtually computationally free' is over-stressed, and we revised it to 'low computational complexity'.\\n\\n\\n- 'One question I have is how dependent this method is on image alignment.'\\n\\n*Response: The proposed method is robust to misalignment. One main objective of the learned transformation is to reduce the intra-class variations, which include misalignment. As an example, in a separate work, we demonstrate success in face recognition across poses using the learned transformation, see [A].\\n\\n\\n- 'Pixel descriptors for kinect ... forming a 96-dim descriptor -- is this correct? ... '\\n\\n*Response: Each pixel is represented using depth differences from 96 equally spaced neighbors at each of the radii 8, 32 and 64, forming a 288-dim descriptor. We have further clarified this in Sec.3.3.\\n\\n\\n[A] Q. Qiu and G. Sapiro, 'Learning transformations for clustering and classification', CoRR, vol. abs/1309.2074, 2013.
http://arxiv.org/abs/1309.2074\"}", "{\"title\": \"review of Learning Transformations for Classification Forests\", \"review\": \"The manuscript presents a novel approach to random forests in which a linear discriminative transformation is learned at each split node in order to enhance class separation of the weak learners. The work offers a merger of existing ideas in an innovative way. The experimental results, which include synthetic and real-world datasets, adequately support the theoretical claims. Also noted is the comparison to other classification tree models, clearly differentiating the proposed scheme from prior state-of-the-art subspace learning methods. A core argument made is that the proposed method can perform better with fewer and more shallow trees than other popular, yet relatively simpler, classification tree methods.\", \"weaknesses\": \"(1) A single classifier type coupled with the linear transformation was described. It would be valuable if other classifier/transformation pairs were explored (e.g. SVNs). Does the framework easily extend to other classification techniques? \\n(2) All dataset images used were scaled down to 16x16. How does this method perform on truly high-dimensional data? \\n(3) Is there a reason that not all experiments were performed with each of the datasets?\\n(4) Timing and accuracy comparison results would be valuable if presented for additional datasets, thereby establishing the scalability properties of the algorithm; timing results aren't too informative when reported for a single dataset. \\n(5) Experiments showing the effect of noise/mislabeld examples would be interesting as the impact of such imperfections on this scheme is unclear.\\n(6) Section 3.2, Paragraph 2, Sentence 1: What is the reason for mentioning that classes arriving at a node are randomly split into two categories? Why is it being introduced here as a new source of randomness?\", \"minor_editorial_comments\": [\"Section 3.1, Paragraph 1, Sentence 1: Could be worded to flow better.\", \"Section 3.1, Paragraph 2, Sentence 2: '... the paper ...' ->\", \"'... this paper ...\", \"Section 3.1, Paragraph 2, Sentence 4: Missing word (termination 'criteria'?)\", \"Section 3.2, Paragraph 2, Sentence 2: Missing period.\", \"Section 3.2, Paragraph 2, Sentence 3: '... the paper ...' ->\", \"'... this paper ...\", \"Section 3.2, Paragraph 2, Sentence 5: '... in details ...' ->\", \"' ... in detail ...'\", \"Section 3.2, Paragraph 2, Sentence 1: Not clear that its actually the\", \"samples from the dataset that the referenced figure contains.\"]}", "{\"title\": \"review of Learning Transformations for Classification Forests\", \"review\": \"Summary\\n\\nThis paper introduces a new split node learner for use in classification trees. The learner is parameterized by a d x d dimensional matrix T (d is the input dimension) and two subspaces, one for each class in a binary classification problem. At test time, a data point is transformed by T and assigned to the class whose subspace is best able to approximate the transformed point (least L2 residual). For each split node, the transformation T is learned by (locally) minimizing a non-convex objective function that is a difference of nuclear norms of transformed data matrices. 
Theoretical justification is given for the objective, showing that if the objective is 0 then the transformed subspaces for the two classes are orthogonal.\\n\\nEmpirical results are shown on four datasets and indicate that the proposed method obtains good performance.\\n\\nNovelty and Quality\\n\\nTo my knowledge, the proposed split objective is novel. The paper is very well written and the method is explained in a clear manner. \\n \\nPros\\n\\n+ The split objective and learner are novel and well motivated.\\n+ Experimental performance is convincing compared to other split learners.\\n+ The theoretical justification for the method is compelling and the presentation is clear.\\n\\nCons / Questions for author feedback\", \"details_of_the_experimental_setup_are_missing_in_some_cases\": \"- What d-dimensional feature vectors are computed for each experiment? And what is d in each case? This is explained for the Kinect experiment, but not for the others.\\n - Sec. 3.1 \\u201cwe further evaluate the effect of randomness introduced by randomly dividing classes arriving at each split node into two categories.\\u201d I don\\u2019t follow how the experiment described here evaluates this source of randomness. This simply reports accuracy on the 15-scenes dataset. I was expecting an experiment that controlled for this source of randomness.\\n - For the Kinect experiment, why did you use only test images (450 for training/50 for testing)? It would have been better to follow the setup in (Denil et al., 2013) so that the results in the paper are comparable to Denil et al.\\u2019s results.\\n\\nThe paper is missing a discussion of the drawbacks of the proposed approach. For example, decision trees with decision stumps are often applied to problems where the feature space is very large (or even infinite). How could the proposed approach be modified to work in these cases?\\n\\nDid you consider applying a difference-of-convex programming approach to (1) instead of gradient descent?\\n\\nSec 2.4: \\u201c... only involves matrix multiplication, which is virtually computationally free at the testing time.\\u201d This seems like an odd claim. If d is large, then these matrix multiplications are far from free.\"}" ] }
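To make the split test discussed in this review and the authors' responses easier to picture (transform a point with the learned T, then send it to the side whose class subspace reconstructs the transformed point with the smaller L2 residual, i.e. essentially two matrix products per test), here is a small illustrative sketch. The representation of the subspaces by orthonormal bases, and all names and sizes below, are my assumptions for illustration rather than the authors' implementation.

import numpy as np

def split_test(x, T, B_left, B_right):
    # route x down the branch whose subspace best approximates T @ x (least L2 residual)
    z = T @ x
    r_left = z - B_left @ (B_left.T @ z)     # residual after projecting onto the left subspace
    r_right = z - B_right @ (B_right.T @ z)  # residual after projecting onto the right subspace
    return 'left' if np.linalg.norm(r_left) <= np.linalg.norm(r_right) else 'right'

# toy usage with random data: d-dimensional features, k-dimensional class subspaces
d, k = 16, 3
rng = np.random.default_rng(0)
T = rng.standard_normal((d, d))
B_left, _ = np.linalg.qr(rng.standard_normal((d, k)))
B_right, _ = np.linalg.qr(rng.standard_normal((d, k)))
print(split_test(rng.standard_normal(d), T, B_left, B_right))

If T is instead a fat r*d matrix, as suggested in the authors' response for high-dimensional data, the same routine applies with z of dimension r.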
l-BU-GGdtAlmX
Generative NeuroEvolution for Deep Learning
[ "Phillip Verbancsics", "Josh Harguess" ]
An important goal for the machine learning (ML) community is to create approaches that can learn solutions with human-level capability. One domain where humans have held a significant advantage is visual processing. A significant approach to addressing this gap has been machine learning methods inspired by natural systems, such as artificial neural networks (ANNs), evolutionary computation (EC), and generative and developmental systems (GDS). Research into deep learning has demonstrated that such architectures can achieve performance competitive with humans on some visual tasks; however, these systems have been primarily trained through supervised and unsupervised learning algorithms. Alternatively, research is showing that evolution may have a significant role in the development of visual systems. Thus, this paper investigates the role neuro-evolution (NE) can take in deep learning. In particular, Hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT) is an NE approach that can effectively learn large neural structures by training an indirect encoding that compresses the ANN weight pattern as a function of geometry. The results show that HyperNEAT struggles with performing image classification by itself, but can be effective in training a feature extractor that other ML approaches can learn from. Thus NeuroEvolution combined with other ML methods provides an intriguing area of research that can replicate the processes in nature.
[ "research", "approaches", "humans", "deep learning", "neuroevolution", "generative neuroevolution", "deep", "important goal" ]
submitted, no decision
https://openreview.net/pdf?id=l-BU-GGdtAlmX
https://openreview.net/forum?id=l-BU-GGdtAlmX
ICLR.cc/2014/conference
2014
{ "note_id": [ "77QH7SSVP57bw", "D016DNjipxPiU", "2jciw0IMK9w1_", "WWcAWQxupa6B_", "WWaVUgKzVrRHT" ], "note_type": [ "review", "review", "review", "review", "review" ], "note_created": [ 1392220560000, 1391824860000, 1391449260000, 1391824860000, 1391824860000 ], "note_signatures": [ [ "anonymous reviewer ea58" ], [ "anonymous reviewer 7541" ], [ "anonymous reviewer a613" ], [ "anonymous reviewer 7541" ], [ "anonymous reviewer 7541" ] ], "structured_content_str": [ "{\"title\": \"review of Generative NeuroEvolution for Deep Learning\", \"review\": \"Many straightforward variants of combinations of neuroevolution and backprop have been tried in the past, never with very convincing results. Unfortunately, the same holds true here. Don't get me wrong, I do not think the approach is in principle flawed, but in my opinion, very convincing, robust-to-outstanding performance is necessary for this type of paper to be allowed on a conference track like this. Sorry.\"}", "{\"title\": \"review of Generative NeuroEvolution for Deep Learning\", \"review\": \"The paper proposes to automatically learn the structure of a deep neural network using Neuro Evolution (NE) techniques. In particular the authors apply this technique called HyperNEAT (Hypercube based Neuro Evolution of Augmenting Topologies) to learn the structure of a deep convolutional network. This is achieved by learning an indirect encoding which encodes the weight pattern of the neural network. Once learnt, the final network is fine tuned using backpropagation. The technique is applied to the task of identifying hand written numbers in the MNIST dataset.\\n\\nIn deep learning community, in a number of papers it has been shown that the choice of architecture plays an important role in the performance of the final trained model on any task. However, the question of 'how to choose the right architecture' has not got much attention. In that sense this paper tries to address this important question by automatically learning the network architecture using Neuro Evolution techniques. Unfortunately however, I think the paper falls way short of making any significant contribution in that direction.\", \"novelty\": \"I think there are not a lot novel ideas presented in the paper. Its a straight-forward application of HyperNEAT to CNNs. There is no new technique/model being proposed here.\", \"quality_of_results\": \"First, I feel the MNIST dataset is beaten to death and claiming it to be a real world task is probably not true any more. Second, the results reported on MNIST are quite underwhelming. The best accuracy achieved is 92.1% whereas the state-of-the-art results are in the range of 99+%. I would expect that for a technique to be considered promising it should have performance at least in the ball-park of state-of-the-art, if not best.\", \"previous_work\": \"Since I'm not an expert in the area it is hard for me to point any (if there are so) missing references to the previous work in the field. That said a couple of claims by the authors are not true. For instance, references 30 and 31 do not provide the state-of-the-art.\", \"clarity_of_presentation\": \"I think the quality of the write-up is quite average. For instance, the authors could have explained the HyperNEAT architecture (section 2.2) much better, especially for people who are not experts in the area. I don't see the point of putting the HyperNEAT-LEO algorithm in section 2.2, since it is not used anywhere in the paper. 
Also the paper is littered with typos.\", \"summary\": \"Though the paper tries to address an important problem in deep learning research, I believe its falls short of delivering. The proposed approach is a straight-forward application of HyperNEAT, and the results on the standard dataset are quite poor. This to me suggests that some work needs to be done before one can consider it as a conference publication.\"}", "{\"title\": \"review of Generative NeuroEvolution for Deep Learning\", \"review\": \"Generative NeuroEvolution for Deep Learning\\nPhillip Verbancsics & Josh Harguess\", \"summary\": \"Hyperneat neuroevolution is applied to MNIST, then fine-tuned through backprop. Results are perhaps not that overwhelming in terms of accuracy, but interesting.\\n\\nLots of relevant work seems to be missing though. For example, earlier indirect encodings were proposed by the following works:\\n\\nA. Lindenmayer: Mathematical models for cellular interaction in development. In: J. Theoret. Biology. 18. 1968, 280-315.\\n\\nH. Kitano. Designing neural networks using genetic algorithms with graph generation system. Complex Systems, 4:461-476, 1990.\\n\\nC. Jacob , A. Lindenmayer , G. Rozenberg. Genetic L-System Programming, Parallel Problem Solving from Nature III, Lecture Notes in Computer Science, 1994\", \"a_universal_approach_to_indirect_encoding_based_on_a_universal_programming_language_for_encoding_weight_matrices_with_low_kolmogorov_complexity\": \"J. Schmidhuber. Discovering solutions with low Kolmogorov complexity and high generalization capability. In A. Prieditis and S. Russell, editors, Machine Learning: Proceedings of the Twelfth International Conference (ICML 1995), pages 488-496. Morgan Kaufmann Publishers, San Francisco, CA, 1995.\", \"p_3\": \"'auto-encoders' - I think the authors mean stacks of auto-encoders - here one can cite:\\n\\n[35] D. H. Ballard. Modular learning in neural networks. Proc. AAAI-87, Seattle, WA, p 279-284, 1987\", \"p_4\": \"'In this way, HyperNEAT acts as a reinforcement learning approach that determines the best features to extract for another machine learning approach to maximize performance on the task'\\n\\nSo it is like Evolino, where evolution is used to determine the best features to extract for another machine learning approach to maximize performance - what is the main difference to the present approach? See:\\n\\nJ. Schmidhuber, D. Wierstra, M. Gagliolo, F. Gomez. Training Recurrent Networks by Evolino. Neural Computation, 19(3): 757-779, 2007.\", \"p_7\": \"'NeuroEvolution approaches have been challenged in effectively training ANNs order of magnitude smaller than those found in nature'\\n\\nCompare, however, vision-based RNN controllers with a million weights evolved through Compressed Network Search:\\n\\nJ. Koutnik, G. Cuccu, J. Schmidhuber, F. Gomez. Evolving Large-Scale Neural Networks for Vision-Based Reinforcement Learning. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), Amsterdam, 2013. Also see lots of earlier work on this: http://www.idsia.ch/~juergen/compressednetworksearch.html\", \"general_recommendation\": \"Interesting, but it should be made clear how this work is related to or goes beyond the previous work mentioned above.\"}", "{\"title\": \"review of Generative NeuroEvolution for Deep Learning\", \"review\": \"The paper proposes to automatically learn the structure of a deep neural network using Neuro Evolution (NE) techniques. 
In particular the authors apply this technique called HyperNEAT (Hypercube based Neuro Evolution of Augmenting Topologies) to learn the structure of a deep convolutional network. This is achieved by learning an indirect encoding which encodes the weight pattern of the neural network. Once learnt, the final network is fine tuned using backpropagation. The technique is applied to the task of identifying hand written numbers in the MNIST dataset.\\n\\nIn deep learning community, in a number of papers it has been shown that the choice of architecture plays an important role in the performance of the final trained model on any task. However, the question of 'how to choose the right architecture' has not got much attention. In that sense this paper tries to address this important question by automatically learning the network architecture using Neuro Evolution techniques. Unfortunately however, I think the paper falls way short of making any significant contribution in that direction.\", \"novelty\": \"I think there are not a lot novel ideas presented in the paper. Its a straight-forward application of HyperNEAT to CNNs. There is no new technique/model being proposed here.\", \"quality_of_results\": \"First, I feel the MNIST dataset is beaten to death and claiming it to be a real world task is probably not true any more. Second, the results reported on MNIST are quite underwhelming. The best accuracy achieved is 92.1% whereas the state-of-the-art results are in the range of 99+%. I would expect that for a technique to be considered promising it should have performance at least in the ball-park of state-of-the-art, if not best.\", \"previous_work\": \"Since I'm not an expert in the area it is hard for me to point any (if there are so) missing references to the previous work in the field. That said a couple of claims by the authors are not true. For instance, references 30 and 31 do not provide the state-of-the-art.\", \"clarity_of_presentation\": \"I think the quality of the write-up is quite average. For instance, the authors could have explained the HyperNEAT architecture (section 2.2) much better, especially for people who are not experts in the area. I don't see the point of putting the HyperNEAT-LEO algorithm in section 2.2, since it is not used anywhere in the paper. Also the paper is littered with typos.\", \"summary\": \"Though the paper tries to address an important problem in deep learning research, I believe its falls short of delivering. The proposed approach is a straight-forward application of HyperNEAT, and the results on the standard dataset are quite poor. This to me suggests that some work needs to be done before one can consider it as a conference publication.\"}", "{\"title\": \"review of Generative NeuroEvolution for Deep Learning\", \"review\": \"The paper proposes to automatically learn the structure of a deep neural network using Neuro Evolution (NE) techniques. In particular the authors apply this technique called HyperNEAT (Hypercube based Neuro Evolution of Augmenting Topologies) to learn the structure of a deep convolutional network. This is achieved by learning an indirect encoding which encodes the weight pattern of the neural network. Once learnt, the final network is fine tuned using backpropagation. 
The technique is applied to the task of identifying hand written numbers in the MNIST dataset.\\n\\nIn deep learning community, in a number of papers it has been shown that the choice of architecture plays an important role in the performance of the final trained model on any task. However, the question of 'how to choose the right architecture' has not got much attention. In that sense this paper tries to address this important question by automatically learning the network architecture using Neuro Evolution techniques. Unfortunately however, I think the paper falls way short of making any significant contribution in that direction.\", \"novelty\": \"I think there are not a lot novel ideas presented in the paper. Its a straight-forward application of HyperNEAT to CNNs. There is no new technique/model being proposed here.\", \"quality_of_results\": \"First, I feel the MNIST dataset is beaten to death and claiming it to be a real world task is probably not true any more. Second, the results reported on MNIST are quite underwhelming. The best accuracy achieved is 92.1% whereas the state-of-the-art results are in the range of 99+%. I would expect that for a technique to be considered promising it should have performance at least in the ball-park of state-of-the-art, if not best.\", \"previous_work\": \"Since I'm not an expert in the area it is hard for me to point any (if there are so) missing references to the previous work in the field. That said a couple of claims by the authors are not true. For instance, references 30 and 31 do not provide the state-of-the-art.\", \"clarity_of_presentation\": \"I think the quality of the write-up is quite average. For instance, the authors could have explained the HyperNEAT architecture (section 2.2) much better, especially for people who are not experts in the area. I don't see the point of putting the HyperNEAT-LEO algorithm in section 2.2, since it is not used anywhere in the paper. Also the paper is littered with typos.\", \"summary\": \"Though the paper tries to address an important problem in deep learning research, I believe its falls short of delivering. The proposed approach is a straight-forward application of HyperNEAT, and the results on the standard dataset are quite poor. This to me suggests that some work needs to be done before one can consider it as a conference publication.\"}" ] }
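For readers unfamiliar with the indirect encoding debated above (the abstract's 'compresses the ANN weight pattern as a function of geometry'), the sketch below illustrates the general HyperNEAT-style idea: every connection weight of a large layer is produced by querying a small function of the geometric coordinates of the source and target units, so evolution only has to search over that small function's parameters rather than the full weight matrix. The closed-form stand-in for the evolved CPPN, and all coordinates, sizes and parameter values, are illustrative assumptions, not the paper's actual setup.

import numpy as np

def cppn(x1, y1, x2, y2, params):
    # stand-in for an evolved compositional pattern-producing network;
    # 'params' is what a neuroevolution method would actually search over
    a, b, c = params
    return a * np.sin(b * (x1 - x2)) + c * np.cos(b * (y1 - y2))

def substrate_weights(src_coords, dst_coords, params):
    # generate the full weight matrix between two geometric layers ('substrates')
    W = np.zeros((len(dst_coords), len(src_coords)))
    for i, (x2, y2) in enumerate(dst_coords):
        for j, (x1, y1) in enumerate(src_coords):
            W[i, j] = cppn(x1, y1, x2, y2, params)
    return W

# toy usage: a 28x28 input plane connected to a 10x10 hidden plane
src = [(x / 27.0, y / 27.0) for x in range(28) for y in range(28)]
dst = [(x / 9.0, y / 9.0) for x in range(10) for y in range(10)]
W = substrate_weights(src, dst, params=(0.5, 3.0, 0.2))
print(W.shape)  # (100, 784): 78400 weights generated from just 3 evolved parameters

This compression is what allows evolution to scale to large weight patterns; the paper then fine-tunes the resulting network with backpropagation, as described in the reviews above.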
kklr_MTHMRQjG
Intriguing properties of neural networks
[ "Joan Bruna", "Christian Szegedy", "Ilya Sutskever", "Ian Goodfellow", "Wojciech Zaremba", "Rob Fergus", "Dumitru Erhan" ]
Deep neural networks are highly expressive models that have recently achieved state-of-the-art performance on speech and visual recognition tasks. While their expressiveness is the reason they succeed, it also causes them to learn uninterpretable solutions that could have counter-intuitive properties. In this paper we report two such properties. First, we find that there is no distinction between individual high-level units and random linear combinations of high-level units, according to various methods of unit analysis. This suggests that it is the space, rather than the individual units, that contains the semantic information in the high layers of neural networks. Second, we find that deep neural networks learn input-output mappings that are fairly discontinuous to a significant extent. Specifically, we find that we can cause the network to misclassify an image by applying a certain imperceptible perturbation, which is found by maximizing the network's prediction error. In addition, the specific nature of these perturbations is not a random artifact of learning: the same perturbation can cause a different network, trained on a different subset of the dataset, to misclassify the same input.
[ "properties", "neural networks", "network", "expressive models", "state", "art performance", "speech", "visual recognition tasks", "expressiveness" ]
submitted, no decision
https://openreview.net/pdf?id=kklr_MTHMRQjG
https://openreview.net/forum?id=kklr_MTHMRQjG
ICLR.cc/2014/conference
2014
{ "note_id": [ "wMNY-xV3Ynvm6", "96ke6XdvXB9V7", "2bvxbJp4JZ8qP", "xaMckCfnv4aZ3", "yy8AJx-qzi6X4", "OOXW6YANWnqfe", "QhdJh-ekJVsba", "GCrE7EB_-wGeF", "hN0KTJ7oaBI5j", "Wuj8W8v0yPuUg", "SbcPbVLdC8SJZ", "ngUW3c5usIgTl", "TTO717UY7e1EM", "22Gh2jLthJ7Bq", "44JXVXNDXf9Kw", "WV3EWEL7rqWzf", "W-bn2AaUnSf6Q", "CuniPDEmBQuTX", "rryGxGt0NShbP", "DGcr4rYqdQDI-", "v07zUlDhlOUpZ" ], "note_type": [ "comment", "comment", "comment", "review", "comment", "comment", "review", "comment", "review", "comment", "review", "comment", "review", "comment", "review", "comment", "review", "review", "review", "review", "review" ], "note_created": [ 1392276600000, 1392264240000, 1392686580000, 1389838020000, 1392277200000, 1392686520000, 1389991740000, 1392686040000, 1391906520000, 1392269280000, 1390512780000, 1392276180000, 1388047980000, 1392267180000, 1388839260000, 1392276660000, 1391858040000, 1402411440000, 1388207280000, 1392271740000, 1401426600000 ], "note_signatures": [ [ "Christian Szegedy" ], [ "Christian Szegedy" ], [ "Joan Bruna" ], [ "anonymous reviewer 887c" ], [ "Christian Szegedy" ], [ "Joan Bruna" ], [ "Rodrigo Benenson" ], [ "Joan Bruna" ], [ "anonymous reviewer 9edb" ], [ "Christian Szegedy" ], [ "abhishek sharma" ], [ "Christian Szegedy" ], [ "David Krueger" ], [ "Christian Szegedy" ], [ "David Krueger" ], [ "Christian Szegedy" ], [ "anonymous reviewer e8df" ], [ "walid saba" ], [ "Sam Bowman" ], [ "Christian Szegedy" ], [ "Bob Durrant" ] ], "structured_content_str": [ "{\"reply\": \"Here is a link to the uncompressed original image examples used for the imagenet experiments:\", \"http\": \"//goo.gl/huaGPb\"}", "{\"reply\": \"In 4.1 D(x, l) is is definied by the sentence: 'we denote one such x+r for an arbitrary minimizer by D(x, l)'\\n\\nI've added some explanation of the minimization approach. (penalty function method). In fact we aim for the optimal solution of the hard-constrained version which we could find for convex losses, however the neural networks are non-convex, so our adversarial examples might end up to be non-minimal in theory. \\n\\nI am about to upload an updated version of the paper fixing the issues you raised.\\n\\nI leave the defense of section 4.3 to Wojciech.\"}", "{\"reply\": \"I have rewritten section 4.3 trying to put emphasis on the purpose of the analysis and with further explanations. I hope you'll find the new version easier to follow. Thanks.\"}", "{\"title\": \"review of Intriguing properties of neural networks\", \"review\": \"The paper presents some empirical analysis of various types of neural network\\nwith regard to finding examples close to the training examples which are\\nmisclassified. It also proposes a technique to train on these perturbed\\nexamples, and they mention some improved results on MNIST (it would be nice\\nto see more focus on this method, rather than so much on the analysis).\", \"novelty\": \"it's very novel, I'd say.\", \"quality\": \"so-so. The experimental justification of the training method is quite\\n limited, and there are some flaws in the analysis in my opinion.\", \"pro\": \"The paper has an original idea.\", \"con\": \"There are various weaknesses in the analysis and the presentation.\\n The paper gives the impression of being a little half-baked. But I think it's\\n interesting enough to publish; the issues could probably be fixed without too\\n much trouble.\\n\\n\\n\\nDetailed comments are below.\\n\\n------\", \"typo_in_abstract\": \"contains of->contains\\nextend->extent\\n\\nWhen you say 'Our experiments show that ... 
properties.'... I don't think what\\nyou are talking about is really a scientific experiment. It's something\\nanecdotal and informal. A proper experiment would have human test subjects\\njudging the extent of semantic similarities of sets of images, where the tests\\nhad been chosen using different methods and the test subjects didn't know which\\nmethod the set of images had been chosen from. Until you do this and show\\nstatistical analysis of some kind, you can't really say anything for sure.\\nOf course, you can still report the anecdotal observations, just don't say they\\nare experiments.\\n\\nWhen you say 'this can never occur with smooth classifiers by their\\ndefinition'... I think your definition must be problematic. There will always\\nbe a classification boundary, and it will always be possible to find images\\nthat are close to that boundary. The only question is how easy is it to find\\nsuch images. Unless you quantify that somehow, and compare across different\\nmodeling strategies (e.g. the average distance you have to go to find\\nthe negative example), I don't think you can say very much.\\n\\nFig. 5: I think you can safely assume the reader knows what an ostrich is.\\n\\nYou might want to check that you don't have your description of hard negatives\\nbackwards (I'm not a computer vision expert, but something seems wrong).\\n\\ndisclassified->misclassified\\n\\nThe sentence 'The instability is expressed mathematically as...' is to me quite\\nproblematic. I think you are trying to formalize something that is really a little\\nvague, and not doing it quite right.\\nThe rest of the spectral analysis is interesting though, but with the stuff\\nregarding A(omega), I think it would be more useful if you tried to explain in\\nwords what is happening rather than force the reader to wade through notation.\\n\\nI don't think the comparison to the set of rational numbers is useful, given\\nthat we know the last hidden layer is a continuous function of the input\\n(and thus the sets we're dealing with are much more well-behaved than the\\nset of rational numbers).\\n\\nOverall, some of the analysis and explanation in the paper is a little problematic,\\nand sometimes over-stated, but the basic idea is I think quite interesting, and I think\\nyou should emphasize more your training method, which in my mind is probably more\\ninteresting than the analysis itself.\"}", "{\"reply\": \"Hi Abishek,\\n\\nI would reflect to your thoughts on the blind spot part.\\n\\nThanks a lot for the exciting insights on the relationship with the edge detectors. \\n\\nAccording to your intuition a low-pass filter should remove most of the hardness of the newly generated examples. A simple enough experiment I will put on top of my todo queue. \\n\\nBTW, as I mentioned the 'trainig with adversarial examples' section the adversarial examples seem to be most effective (for regularization) if they are generated for the higher layers. The notion of high-frequency noise is not well defined here. \\n\\nIn general, it seems that adversarial examples are a more general phenomenon than just vision. An exciting item of study is trying the for networks other than vision (e.g. speach and maybe language?)\"}", "{\"reply\": \"I shall address here the comments relating to section 4.3.\\nThe comment 'The sentence ... 
and not doing it quite right' is an opinion that we cannot contest factually, but we believe that a first step towards understanding these unstabilities might come from the mathematical analysis of additive stability. \\nI have rewritten the section trying to simplify the message and with more detailed explanations on what the equations mean. I hope the analysis will become clearer in this new version.\"}", "{\"review\": \"Keeping up the 'let us catch typos' game, I point out:\\n\\n- 'we perfomed' -> 'we performed'\\n\\nOn a higher level I have little to add to reviewer 887c, I have the same overall impression of very interesting discovery, but immature insights around it.\", \"three_improvements_i_might_suggest_are\": [\"'A subtle, but essential detail is'; the description that follows is unclear to me. If this is so essential, please describe in more detail.\", \"The experiments regarding 'using adversarial examples during training' are somewhat unclear. Is 1.2% obtained using weight decay and dropout, plus the adversarial examples ?\", \"The paragraph 'For space considerations, ...' is too long and hard to read. Some reformatting would be most welcome.\", \"Looking forward to see where this research direction leads us !\"]}", "{\"reply\": \"I shall address here the comments relating to section 4.3.\\n\\nI have rewritten the section to make it more accessible, it will be recently available in the arxiv. \\nI have also included missing notation and fixed a couple of typos. A contractive operator O (linear or nonlinear) has the property that ||Ox - Ox'|| <= ||x-x'|| and hence does not expand the distance between a pair of input points. I clarified this in the text.\"}", "{\"title\": \"review of Intriguing properties of neural networks\", \"review\": \"The paper highlights two counterintuitive properties of neural networks: (1) a tendency to encode factors of variation in multiple units rather than single units, and (2) an absence of local generalization. The first hypothesis is tested by finding a set of samples correlating most with certain directions in the feature space and visually inspecting their similarity. The second hypothesis is tested by generating 'adversarial samples' that are close neighbors to the data points but predicted of different class.\\n\\nI think that the first property makes a lot of sense in the context of sigmoidal networks, as the squashing effect of the sigmoid clearly encourages the representation to be redundantly encoded in a large number of neurons in order to preserve the principal components of the data. However, I am wondering whether these findings apply broadly or whether some modeling decision (e.g. sparsity, type of nonlinearity, competition between units, etc) do favor the alignment of independent components to the canonical coordinates.\\n\\nThe second property is very interesting and shows that imperceptible deviations from a data point in the input space can cause the neural network to change its prediction. This finding is likely to generate further research on the nature of this effect. In particular, I would find it interesting to study whether these adversarial samples are a degenerate effect caused by the limited capacity of the network or a more fundamental issue with the learning algorithm that would occur for networks of any size.\\n\\nAuthors provide several examples of adversarial samples for both the MNIST and the image datasets. 
However, I could not find whether these samples are typical (with distortions corresponding to the average minimum distortion) or whether they have been preselected to have low distortion.\"}", "{\"reply\": \"Thanks a lot!\\n\\nI will upload an updated paper soon. The 1.2% result used weight decay, but no dropout. (update also added in the paper.)\"}", "{\"review\": \"It is a nicely written and interesting find, but I think some of the unexplained anomalies and deviations from intuition about the behavior of a deep neural network discussed in the paper have some intuitive justification. For example -\\n'Moreover, it puts into question the notion that neural networks disentangle variation factors across coordinates.' I think this statement is not fully justified by the observation that even a random direction in the final layers output space can lead to some sort of semantic clustering of the inputs. For instance, lets plot a thousand people on weight vs height 2D graph. Not all, but a lot of straight lines emanating from the origin will have a clearly perceivable semantic meaning associated with them to a human, like a line with very little slope will have super skinny people, line with big slope will have fat people, something in between we will have fit people and so on. I am not saying that height and weight are totally independent but are fairly untangled features. This simply analogy sort of hints towards an intuitive explaination of observance of semantic features when using random directions in higher layer output space because this higher layers have a very rich bag of weakly untangled variations and their random combination can still provide us with strongly perceivable semantic features, because humans are really good at finding patterns in a collection of object.\\nRegarding the blind-spot observation, I think the paper sort of played around the well-known insensitivity of human visual system towards high-frequency structured noise in images. Whereas, for machine vision, high-frequency signals play a vital role, therefore, it is easy to make a drastic change for the machine while keeping it almost imperceptible for the humans. The other way round of this phenomenon ie drastic change to the human eye but no change to the machine is already in use as the basis of a very popular edge extractor, 'Edge detection with embedded confidence' by Peter Meer. And such behaviour is sort of expected from a deep network because the final layer neurons when projected back to the visual input nodes using Deconvolution networks are mostly edge-profiles of the objects and thus making small adversarial changes to them is expected to alter the final classification output. \\nThe use of this phenomenon in training is interesting, but an evaluation of the incorrect classifications on some datasets like Image-net to see if a small imperceptible change to the wrongly classified images can indeed alter the final label to the correct label of the image, will possibly be a way to improve the results of the network even further.\\nThe comments are general observations and my own thoughts after reading the paper and not intended towards any sort of review or feedback, scholarly or otherwise.\"}", "{\"reply\": \"You get the same qualitative effect if the number of classes is small.\\n\\nThe car example in the paper is performed with a binary classifier. 
Similar results were obtained on networks other than that of Quoc.\\n\\nIt would be interesting to study the quantitative dependence of the necessary distortion on the number of classes.\\n\\nContractive autoencoders were suggested by Yoshua Bengio in 2012, but I have not tried them. [Personal communication]\\n\\nIt is an interesting observation on the dependence of depth and necessary distortion. In general I would caution to try generalize such a limited set of examples. There was no additional regularization for the deeper model besides the autoencoder. My personal experiment log contains a somewhat larger selection of experiments I will put on my public page as appendix to the paper. Your statement seems to continue to hold there. Not sure about convolutional and other types of networks.\\nI have a few uneducated guesses why it might be the case in general, but probably my guesses as good as anybodies.\"}", "{\"review\": \"This paper appears 7 times on the list of papers.\\n\\n 'It suggests that it is the space, rather than the individual units, that contains of' \\nshould be \\n'...contains'\\n\\n'Second, we find that deep neural networks learn input-output mappings that are fairly discontinuous to a significant extend'\\nshould be \\n'...extent'\\nalso, 'fairly discontinuous to a significant extent' sounds redundant to me.\\n\\nLooking forward to reading it!\"}", "{\"reply\": \"First of all, thanks a lot for the thorough reading of the paper and the lof of detailed helpful suggestions.\\n\\nVery soon I will update the paper to incorporate most of the suggestion.\\n\\nI agree with a lot of the criticisms and will make changes to improve our exposion, I would just like to dispute some of the terminogical points regarding the use of the word 'experiment'.\\n\\nI agree that the 'visually indistinguishable' part is not verified by a scientific double-blind experiment. However, what we present here is not some subtle statistical phenomenon, but an exceptionally clear effect that works in >99% of the cases. \\n\\nYes, it is an informal statement. However, so far nobody who bothered to examine the output of our counter-exemple generator for his/her network felt that there is a need for a clinical trial to verify this statement on human subjects. ;)\\n\\nI think it is justified to call the other statistical experiments as such. Yes, they are not medical experiments. The term here refers to mathematical experiments. It is common in the domain of computer vision to demonstrate a method by running computer experiments in controlled setting. Here we try to support cross-model and cross-data set generalization of the adversarial examples. Given the number of examples generated, their error on the randomly chosen data sets, we think that you could agree that the measured error-rates can't be attributed to random chance.\\n\\nIn general, the word 'experiment' refers to the whole process of creating the counterexample generator, running it with different networks and observe its effect. Which was a well defined, scientific process designed to refute or support certain hypotheses we had come up for the source of the observed qualitative effects.\\n\\nThere are a lot of open questions regarding the explanation of the observed effects and I don't want to pretend to understand the exact underlying reasons. I must have to wait for my coauthor Wojciech for clarifications or changes to the theoretical section.\"}", "{\"review\": \"Having read the paper, I have some more comments. 
I can only speak to my own experience, of course, and may be plain wrong about a few things. Many of my comments are minor edits.\\n\\nFirst, it seems to me that x_eps, r, and n_i are all used variously to refer to the 'adversarial shift' (i.e. middle columns of 5a,b). I don't see any reason not to standardize.\\n\\nIn the second paragraph of section 1, I would remove 'Moreover'. In the same paragraph, I did not know what 'up to a rotation of the space' meant.\", \"2nd_to_last_paragraph_of_section_1\": \"hyperparemeter -> hyperparameter\\n\\nIn section 3, e_i is undefined. Also, I would use different names for the different definitions of x'. I think the sentence before the second definition is not a proper sentence, as well. And there is an extraneous in ('random basis in for') in the sentence after. Some quotes point the wrong way as well.\\n\\nSection 4, paragraph 2: 'In other words...' I would add 'according to [2]' or something else that clarifies that this is the hypothesis represented in that paper, and not an assertion you are making.\", \"paragraph_4\": \"I think the statement 'this can never occur with smooth classifiers by their definition' is too strong. While the perturbations are imperceptible, they are not arbitrarily small. I think this statement would only be justified if you could always change the classification output by changing the intensity of one pixel by the minimum amount possible given your discretization.\", \"paragraph_5\": \"'finding adversarial' -> 'find adversarial'\", \"figure_5\": \"I think at least on larger example like this would be nice. The perturbations might be more perceptible with larger images.\\n\\nparagraph before 4.1: I'm not sure what it means 'the training set is then changed...' in general (eg paragraph before 4.3), I was not clear on what dataset (training/test/etc.) was used for what part of the procedure. \\n\\nIn 4.1, I don't think D(x,l) was ever defined. \\n\\nBased on the last line of 4.1, it appears that your optimization problem uses a soft constraint on the misclassification. The write-up suggests that you are forcing a misclassification (hard constraint).\", \"bottom_of_page_6\": \"'The columns of the table show the error (ratio of disclassified' -> '...(proportion of misclassified'. I'm not sure what it means: 'The last two rows are special'.\", \"table_3_caption\": \"what does it mean 'trained to study'?\\n\\nSection 4.3 I found confusing in general.\\nI don't know what 'max(0,x) is contractive' means. I would also specify the norm that you use for W_k in the next line (I assume operator norm). \\n\\nI found the equation for Wx hard to parse and I don't think w_{c,f} is defined. There are missing commas before and after the ...'s. \\n\\n'x_c denoted the c input feature image' -> 'x_c denoted the c-th input feature image' ?\\n\\nFinally, In the discussion:\\n'the set of adversarial negatives is of extremely low probabilities' - did you show this? \\n'...yet it is dense' - this I don't think makes sense mathematically in a discrete space. I would not use this terminology unless you can prove it using the formal definition.\"}", "{\"reply\": \"Thanks a lot!\"}", "{\"title\": \"review of Intriguing properties of neural networks\", \"review\": \"This paper studies the interesting case of showing where deep nets fail. 
It proposes a simple mechanism to demonstrate that even small (visually almost imperceptible distortion) to the training samples can cause drastic changes in recognition (ie, decision boundaries).\\n\\nThis is a fresh perspective and I applaud the authors for providing interesting diagnostic means to help to understand deep nets.\", \"a_few_comments\": \"(1) the paper actually does not state how to address the issue of deep nets learning very rugged decision boundaries: a few ideas come into mind, though. Using drop-out in training process might eliminate this problem? If # of classes is small, would this problem be less severe? (For example, the required distortion is large so one can detect ). \\n\\n(2) I could not follow the analysis in section 4.3 -- not clear where it leads to\\n\\n(3) Looking at Table 1/2: I notice that the required distortion is larger for more complex models. So perhaps this model needs to be better regularized? Have you tried contractive autoencoders?\"}", "{\"review\": \"Very interesting paper with very 'intriguing' results indeed.\\nI believe there are serious implications to these findings.\\nI wonder if this paper is in anyway related to this (https://moalquraishi.wordpress.com/2014/05/25/what-does-a-neural-network-actually-do/) post, where the main finding/claim is that depth in neural networks can be represented by additional nodes in the (single) hidden layer - which says that deep networks do not have a representational power over traditional NNs.\"}", "{\"review\": \"Very interesting stuff! While I try to think of something intelligent to say, here are a couple more typos from the running text:\\n\\n'...e [remove many] s...'\\n\\n '...AlexNet [9].(Left) is correctly...'\"}", "{\"review\": \"All the examples shown are strictly random selections. They are not preselected in any ways. I will updated the paper to reflect this fact.\"}", "{\"review\": \"Hi Folks,\\nInteresting manuscript that I enjoyed reading, but I have some concerns about how you establish the conclusion that 'The explanation is that the set of adversarial negatives is of extremely low probability, and thus is never (or rarely) observed in the test set, yet it is dense (much like the rational numbers, and so it is found near every\\nvirtually every test case.'\\nFWIW I think this is very likely the right conclusion, but I really don't think you show that.\\n\\nFilling in the gaps, I assume you did some sort of monte carlo using the images with Gaussian white noise, found few that were unclassifiable, and then concluded that the adversarial images must have low probability?\\nI think you should say that, if that is what you did, and give the number of trials and enough other detail to replicate the experiments. Otherwise I think you should give enough detail so that we can understand how you reached this conclusion - your assertion is so blunt I can't imagine it is just a conjecture, and I don't think I'm the only reader who would like to know how you reached this conclusion.\\n\\nBut that is not my main concern.\\n\\nA more serious problem, I think, is that you were probably looking in the wrong place. 
I can give you both a heuristic argument and a formal one for why I think this:\\n A heuristic argument for why you should believe this is that the adversarial images in the paper all look like the originals, while the noisy ones don't.\\nA formal argument would be that with n=784, adding Gaussian white noise with std dev sigma is essentially the same as shifting the image in a random direction by sqrt{784}*sigma - this follows from well-known results in measure concentration. Of course, that is not close to the original images or to the adversarial examples that you found.\\n\\nFortunately this is easy to correct however - take the images that you generated adversarial versions of (or a random subsample of them) and simply do monte carlo with Gaussian white noise added to the original image where sigma = Av. distortion of the adversarial version of that image. Then you are searching the sphere where the adversarial image lies and if there aren't many adversaries there, then you won't find them even with a whole bunch of MC trials.\\n\\n(Of course, if you do find them then that's even more intriguing - why should small distortions of the images become unclassifiable but large ones not...?)\"}" ] }
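The exchange above keeps returning to how the adversarial images are actually produced: minimize the distortion norm ||r|| subject to the classifier assigning a chosen label to x + r, with the pixels kept in [0, 1], solved (per the author's comment) with a penalty-function method. As a reading aid only, here is a minimal numpy sketch of that kind of box-constrained penalty minimization. It is not the authors' implementation: the callable `loss_and_grad`, the fixed penalty weight `c`, and the plain gradient steps are assumptions made for illustration (the paper reportedly searches over the penalty weight and may use a different optimizer).

```python
import numpy as np

def adversarial_perturbation(x, target_label, loss_and_grad,
                             c=0.1, step=0.01, n_steps=500):
    """Penalty-method sketch: minimize c*||r||^2 + loss(x + r, target_label)
    while keeping x + r inside the [0, 1] box.

    loss_and_grad(x_adv, label) is an assumed user-supplied callable that
    returns (classification loss, gradient of that loss w.r.t. x_adv).
    """
    r = np.zeros_like(x)
    for _ in range(n_steps):
        _, g = loss_and_grad(x + r, target_label)
        # Gradient of the penalized objective with respect to r.
        grad_r = 2.0 * c * r + g
        r -= step * grad_r
        # Project back so the perturbed image stays a valid image.
        r = np.clip(x + r, 0.0, 1.0) - x
    return r
```

In the same spirit, the Monte-Carlo comparison suggested in the last comment amounts to sampling x + sigma * N(0, I) with sigma matched to the average distortion returned by a routine like the one above and checking how often those random neighbours are misclassified.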
DQNsQf-UsoDBa
Spectral Networks and Locally Connected Networks on Graphs
[ "Joan Bruna", "Wojciech Zaremba", "Arthur Szlam", "Yann LeCun" ]
Convolutional Neural Networks are extremely efficient architectures in image and audio recognition tasks, thanks to their ability to exploit the local translational invariance of signal classes over their domain. In this paper we consider possible generalizations of CNNs to signals defined on more general domains without the action of a translation group. In particular, we propose two constructions, one based upon a hierarchical clustering of the domain, and another based on the spectrum of the graph Laplacian. We show through experiments that for low-dimensional graphs it is possible to learn convolutional layers with $O(1)$ parameters, resulting in efficient deep architectures.
[ "networks", "domain", "spectral networks", "graphs spectral networks", "efficient architectures", "image", "audio recognition tasks", "thanks", "ability" ]
submitted, no decision
https://openreview.net/pdf?id=DQNsQf-UsoDBa
https://openreview.net/forum?id=DQNsQf-UsoDBa
ICLR.cc/2014/conference
2014
{ "note_id": [ "u6OLucgDIBqmQ", "fswnf8ty0BxN9", "GGJ6GnTf2Wyzg", "lUMZl2vYZ3U0z", "L-XGLaMl2S-d0", "ydDkdMTKG91S8", "A8Ym7wXQmHSxz", "22NS2aRxzVJgy" ], "note_type": [ "comment", "review", "comment", "review", "review", "comment", "comment", "comment" ], "note_created": [ 1392955380000, 1391857200000, 1393371600000, 1392146760000, 1391834160000, 1392955380000, 1392767460000, 1393475520000 ], "note_signatures": [ [ "Joan Bruna" ], [ "anonymous reviewer 9f60" ], [ "Joan Bruna" ], [ "anonymous reviewer 9d8d" ], [ "anonymous reviewer ff10" ], [ "Joan Bruna" ], [ "Joan Bruna" ], [ "Olivier Delalleau" ] ], "structured_content_str": [ "{\"reply\": \"To all reviewers:\\n\\nWe just uploaded a new version of the paper on arxiv, which should be accessible in a few hours. \\n\\nThis version has been almost entirely rewritten to address the concerns about accessibility to audience not familiar with Harmonic Analysis, and taking into account the feedback from the reviewers. We have also included figures illustrating the construction and fixed the typos.\\n\\nIn particular, I would like to thank anonynous ff10 for her/his extensive and valuable comments, which greatly helped to increase the quality of the paper.\"}", "{\"title\": \"review of Spectral Networks and Locally Connected Networks on Graphs\", \"review\": \"> - A brief summary of the paper's contributions, in the context of prior work.\\n\\nExploiting the grid structure of different types of data (e.g., images) with convolutional neural networks has been essential to the recent breakthrough results in various pattern recognition tasks. This paper explores generalizing convolutional neural networks from grids to weighted graphs.\\n\\nThe reviewer finds weighted graph inputs best motivated by two datasets constructed at the end of the paper in order to test the proposed techniques. Both datasets are MNIST derivatives. The first subsamples the MNIST pixels in a disorganized manner, destroying the grid structure. The second projects MNIST onto a sphere, giving the input a more complicated manifold structure. Both of these are interpreted as weighted graphs in the natural manner. (A couple real-world examples of such structures occur to the reviewer: geo-spatial data and surfaces in 3D graphics.)\\n\\nIf we are persuaded that weighted graphs are an interesting type of input for a neural net, how can we generalize convolutional neural networks to them? The paper introduces two broad approaches. The first approach is to use a metric on the graph to define neighborhoods and build a locally-connected network.\\n\\nThe second, 'spectral,' approach is a bit more complicated. This can be understood as similar to how one can look at convolutional neural networks in terms of the Fourier Transform. A regular convolution can be thought of as pointwise multiplication in the Fourier domain. Drawing on the harmonic analysis of graphs, the paper uses the eigenvectors of the Laplacian, which have similar properties. Functions on the graph can be decomposed into coefficients of these eigenvectors and pointwise multiplied to achieve a convolution-like effect.\\n\\nAs mentioned previously, the authors test their techniques on two constructed datasets. For the subsampled MNIST, they are able to beat a fully-connected network with a locally-connected one, but only tie it with the spectral approach (though the spectral approach uses almost two orders of magnitude fewer parameters). 
For MNIST on a sphere, both approaches achieve slightly worse results than the fully-connected network (but, again, use fewer parameters).\\n\\n> - An assessment of novelty and quality.\\n\\nThe reviewer is not familiar with this area but believes this work to be novel.\\n\\nThe ideas in this paper seem quite deep, and the experiments performed are interesting. In fact, the constructed datasets alone are interesting.\\n\\nThe exposition of the paper could be a bit stronger. This seems somewhat more sensitive because most people in the neural networks community will not have the mathematical background the paper presently requires. A little more motivation and hand-holding could make the paper more accessible. That said, this doesn't seem like something that should be a barrier to publication.\\n\\n> - A list of pros and cons (reasons to accept/reject)\", \"pros\": [\"Generalizing convolutional neural networks to graphs seems like a valuable enterprise.\", \"Explores some very intriguing ideas. In particular, the spectral generalization of convolutional networks feels quite deep.\", \"Constructs cute datasets to test the ideas on.\"], \"cons\": [\"Paper could be more accessible (see above).\"]}", "{\"reply\": \"The new version of the paper is now available at http://arxiv.org/pdf/1312.6203v2.pdf\\n\\nJoan\"}", "{\"title\": \"review of Spectral Networks and Locally Connected Networks on Graphs\", \"review\": \"Spectral networks\\n\\nThis paper aims at applying convolutional neural networks to data which do not fit into the standard convolutional framework. They do so by considering that the coordinates lie on a graph and using the Laplacian of that graph.\\n\\nThe topic is of utmost interest as CNNs consistently achieve very high performance while keeping the number of parameters in the network. I am glad to see advances in that direction. This excitement is moderated by the extreme difficulty with which I read the paper. In fact, most of the paper assumes advanced notions of harmonic analysis which I do not possess. I fully understand that such notions are necessary to fully apprehend this work but I would have appreciated if the authors had provided pointers or tried to give intuition on the concepts. As it is, only people familiar with the field will capture the full gist of the method.\\n\\nAdditionally, I find the experimental section a bit weak, in great part because of the sole use of the ubiquitous MNIST dataset (albeit distorted versions of it).\\n\\nThat being said, I want this work to be disseminated so that CNN can be used in wider contexts.\", \"pros\": [\"Great extension of CNNs\", \"Results are based on profound understanding of harmonic analysis and not just trial and error\"], \"cons\": [\"Extremely difficult to read for an audience not familiar with harmonic analysis\", \"Experimental section a bit weak.\"]}", "{\"title\": \"review of Spectral Networks and Locally Connected Networks on Graphs\", \"review\": \"This paper investigates the construction of convolutional[-like] neural networks (CNNs) on data with a graph structure that is not a simple grid like 2D images. Two types of constructions are proposed to generalize CNNs: one in the spectral domain (of the graph Laplacian) and one in the spatial domain (based on a multi-scale clustering of the graph). 
Experiments on variants of the MNIST dataset show that such constructions can lead to networks much smaller than fully connected networks, with similar or better generalization abilities.\\n\\nOf the 4 papers I reviewed, this is definitely the one I spent the most time on, but also the one I understand the least. It is a very dense paper, with lots of interesting ideas and observations, but without detailed enough explanations in my opinion, thus making it pretty difficult to follow. I need to mention, though, that my knowledge of CNNs is limited to seeing a few times figures of LeNetX architectures, and hopefully people more familiar with CNNs will be able to better grasp the ideas presented here.\\n\\nMy first suggestion would be to start with the spatial construction (2.2) rather than the spectral one, as it is probably easier to visualize. And speaking of visualization, a picture showing the neighborhoods and how the various scales are used would be very helpful (I believe I understand what is being done here, but to be honest I think I need a picture to be sure).\\n\\nOn the spectral construction, if there is a way to put eq. 2.2/2.3 in pictures, it would be great as well. Something unclear about these equations is that we seem to keep only a given number of components at each step, but the definition of y_k does not show that. The main point I failed to understand here is what it means for the group structure to 'interact correctly with the Laplacian'. Unfortunately the example in 2.1.1 is not clear enough for me: instead of saying it recovers a 'standard convolutional net', could you describe the exact net structure, in particular in terms of weight sharing, pooling and subsampling layers? For 2.1.2 I do not really have any specific comment/question -- I was quite lost at this point -- except you could say what is a cubic spline kernel and why it makes sense to use it.\\n\\nFor experiments, please first say in intro of section 4 that the full description of the projected dataset will be given in 4.2. I read the intro of section 4 several times, trying to understand what it meant (it did not help that I understood 'the 2d unit sphere' as 'the unit sphere in 2d', ie a circle)... before finally giving up (I think I got the idea now after reading 4.2 and looking at the pictures, but the description is still confusing: what are e1, e2 and e3 and what is the motivation in the choice of their norms?).\", \"some_other_comments_on_experiments\": [\"I find the color maps hard to read. Would they look better in grey scale? (Fig. 4 (a)(b): how are we supposed to see it is the same feature?)\", \"Codenames for models in the results tables do not seem to be documented.\", \"In Fig. 2 is (a) really the finest? It looks like the coarsest.\", \"Overall, I do believe it is a paper worth publishing. Taking advantage of the inner (unknown) structure of input variables is definitely a direction that could bring substantial improvements, the early experiments presented here are encouraging, and it brings some new ideas to the able. I hope, however, that the authors can increase the readability of the paper by adding more figures and explanations for those less familiar with CNNs.\"], \"a_few_more_small_remarks\": [\"The O(1) used in the intro could be a bit misleading, I think it is more O(kd) with k the local neighborhood size and d the number of layers? 
(or O(d log d) if k decreases exponentially with d)\", \"'just an in the case of the grid': typo\", \"In 2.1.1 I am wondering how important is the assumption of equal variance, and if you coul use the correlation instead\", \"'Suppose have a real valued nonlinearity': typo\", \"'by a dropping a set number of coefficients': typo\", \"'The upshot is that the the construction': typo\", \"'navie choice': typo\", \"w_k-1 under eq. 2.4 should be W_k-1?\", \"'gauranteed': typo\", \"'the property that the subsampling the Fourier functions on the grid to a coarser grid': typo?\", \"Figure references in section 4 are messed up (all are Fig. 4.1)\", \"You could mention 'Learning the 2D topology of images' in the related work section\"]}", "{\"reply\": \"To all reviewers:\\n\\nWe just uploaded a new version of the paper on arxiv, which should be accessible in a few hours. \\n\\nThis version has been almost entirely rewritten to address the concerns about accessibility to audience not familiar with Harmonic Analysis, and taking into account the feedback from the reviewers. We have also included figures illustrating the construction and fixed the typos.\\n\\nIn particular, I would like to thank anonynous ff10 for her/his extensive and valuable comments, which greatly helped to increase the quality of the paper.\"}", "{\"reply\": \"We thank all the reviewers for their insightful and relevant comments.\\nWe are preparing a new version of the document where we will address the questions you raised. \\nWe will keep you informed.\"}", "{\"reply\": \"Thanks for the update. I will definitely check it out, but I probably won't be able to do it before the end of the 'official' discussion period (which I guess ends this week... actually I'm not sure)\"}" ] }
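Since the reviews above describe the spectral construction verbally (a convolution-like layer is a pointwise multiplication in the eigenbasis of the graph Laplacian), a minimal numpy sketch may help picture it. The random toy graph, the unnormalized Laplacian, and keeping only the first d eigenvectors are illustrative choices made here, not the paper's exact construction, and `w` stands in for the multipliers the network would learn.

```python
import numpy as np

# Toy weighted graph: symmetric non-negative weight matrix W (assumed input).
rng = np.random.default_rng(0)
n = 20
W = rng.random((n, n))
W = (W + W.T) / 2.0
np.fill_diagonal(W, 0.0)

# Unnormalized graph Laplacian and its eigendecomposition.
L = np.diag(W.sum(axis=1)) - W
eigvals, V = np.linalg.eigh(L)   # columns of V play the role of Fourier modes

d = 10                            # keep only the d smoothest modes (illustrative cutoff)
V_d = V[:, :d]

def spectral_filter(x, w):
    """Filter a graph signal x with spectral multipliers w (length d):
    project onto the retained eigenvectors, scale pointwise, project back."""
    return V_d @ (w * (V_d.T @ x))

x = rng.standard_normal(n)        # a signal living on the graph nodes
w = rng.standard_normal(d)        # would be learned parameters in the paper
y = spectral_filter(x, w)
```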
plS31K743MGWn
A Primal-Dual Method for Training Recurrent Neural Networks Constrained by the Echo-State Property
[ "Jianshu Chen", "Li Deng" ]
We present an architecture of a recurrent neural network (RNN) with a fully-connected deep neural network (DNN) as its feature extractor. The RNN is equipped with both causal temporal prediction and non-causal look-ahead, via auto-regression (AR) and moving-average (MA), respectively. The focus of this paper is a primal-dual training method that formulates the learning of the RNN as a formal optimization problem with an inequality constraint that guarantees stability of the network dynamics. Experimental results demonstrate the effectiveness of this new method, which achieves 18.86% phone recognition error on the TIMIT benchmark for the core test set. The results also show that the proposed primal-dual training method produces lower recognition errors than the popular RNN methods developed earlier based on the carefully tuned threshold parameter that heuristically prevents the gradient from exploding.
[ "rnn", "recurrent neural networks", "property", "training", "architecture", "recurrent neural network", "deep neural network", "dnn", "feature extractor", "causal temporal prediction" ]
submitted, no decision
https://openreview.net/pdf?id=plS31K743MGWn
https://openreview.net/forum?id=plS31K743MGWn
ICLR.cc/2014/conference
2014
{ "note_id": [ "kg59kX839cksb", "EHRL0DwvPfEBx", "PoYaL8FrgDN65", "77lR7U3aJP-jB", "Phz0PJ58do_fI", "dyoYR1N2YCRQu", "ttlj83NGjhko6", "gVSNgqxnTcVC-", "MgcjPCG7jsM-N", "1jsWz5vwWlzRt", "OA1iAwN-D5AL9", "gboKgWa1rzbml", "ZMlu0TaNxgHzF", "E6cFvWOmFRv42" ], "note_type": [ "comment", "comment", "review", "comment", "comment", "review", "comment", "review", "comment", "comment", "comment", "comment", "review", "comment" ], "note_created": [ 1392878700000, 1394190780000, 1392171780000, 1392878520000, 1392880080000, 1391821440000, 1394190780000, 1391862780000, 1394190780000, 1393554360000, 1393191000000, 1394190720000, 1391808960000, 1392878640000 ], "note_signatures": [ [ "Jianshu Chen" ], [ "Jianshu Chen" ], [ "Justin Bayer" ], [ "Jianshu Chen" ], [ "Jianshu Chen" ], [ "anonymous reviewer 3c88" ], [ "Jianshu Chen" ], [ "anonymous reviewer a863" ], [ "Jianshu Chen" ], [ "Jianshu Chen" ], [ "anonymous reviewer 3c88" ], [ "Jianshu Chen" ], [ "anonymous reviewer ce6f" ], [ "Jianshu Chen" ] ], "structured_content_str": [ "{\"reply\": \"Thank you for your review and kind feedback on our paper. In the revised manuscript, we will improve the presentation in Sec. 3 according to our suggestion. Also, we would like to clarify that the projection onto the set of ||W||_1<r with matrix L1-norm is not scaling it down when it exceeds the threshold. Instead, it is actually a soft-thresholding operation and the threshold has to be computed by solving a nonlinear equation (see page 188 of N. Parikh and S. Boyd, \\u201cProximal Algorithm\\u201d Foundations and Trends in Optimization.). Although the nonlinear equation is one-dimension and it can be solved by bi-section, it is still computational expensive.\"}", "{\"reply\": \"Just to let you know that an updated version incorporating your feedback has been posted on arxiv. Thanks again for your effort in reviewing the paper!\"}", "{\"review\": \"(Disclaimer: It is quite a while since I read this paper carefully and just wanted to put my comments here before the review phase closes.)\\n\\nI think that this work has several interesting data points for research on recurrent networks. \\n\\n(1) The regulariser is an idea which had to be tried. While it does not work well on raw signals the results with DNN features are in the leading pack, if I am correct (see suggestion below). \\n\\n(2) It is easy to say in hindsight that the neglected parameter volume is a reason for the performance. But interestingly, DNN preprocessing somehow can make up for this. I do not know of any work which shows static (i.e. time step wise) preprocessing to ease the long term dependency problems of RNNs.\\n\\nSuggestions/Questions:\\n- add a comparative table of TIMIT results \\n- try the method on data sets with long term dependencies (see [1]) to validate whether the method has detrimental effects, as one would expect.\\n- how well does a carefully trained plain RNN with no regularizer do on the DNN features?\\n\\n[1] I know there is a better reference by (Hochreiter & Schmidhuber), but this one has to work for now: Martens, James, and Ilya Sutskever. 'Learning recurrent neural networks with hessian-free optimization.' Proceedings of the 28th International Conference on Machine Learning (ICML-11). 2011.\"}", "{\"reply\": \"We would like to thank the reviewer for the kind review and feedback on the paper. We include our response to the major points of the review below:\\n\\n1.\\tIt is true that both ARMA and the bidirectional RNN use information from past and future. 
However, the bidirectional RNN uses information from future by letting its hidden layers depend on the hidden states at future. On the other hand, the ARMA is made to depend on future by letting its input include future. So these two methods uses information from future in two different ways and our ARMA method is much simpler and is equally effective.\\n\\n2.\\tThe \\u201ctandem\\u201d architecture mentioned by the reviewer --- concatenating the posterior output (i.e., the top layer) of a neural network with the MFCC features as new inputs to an HMM recognition --- was actually first proposed in 2000 (Hermansky, Hynek; Ellis, Daniel P. W. and Sharma, Sangita \\u201cTandem connectionist feature stream extraction for conventional GMM-HMM systems,\\u201d ICASSP, 2000, much earlier than 2012, which added also the bottleneck features. We exploit the DNN features in a much different way. First, we take the hidden layer of the DNN as the features, which are shown experimentally in our work to be much better than the top-layer posterior outputs as in the standard \\u201ctandem\\u201d method. Second, the hidden layer of the DNN is shown experimentally in our work to be much better features than the hidden layers below (similar to the bottleneck features). Third, rather than using the GMM-HMM as the separate sequence classifier, we use the RNN as the sequence classifier.\\n\\n3.\\tThe main motivation of the paper is to propose a method to train RNN in a principled. We just use sigmoid neuron as an example. It can also be extended to the ReLU case.\\n\\n4.\\tWe will add the reference suggested by the reviewer in the revised manuscript and also other revisions suggested by the reviewer.\"}", "{\"reply\": \"Thank you for your reading of our paper and useful comments. We will take your feedback into revised manuscript. Also, we tested the plain RNN training algorithm with traditional gradient clipping technique on TIMIT with the same DNN features. The best phone error rate on the test set is found to be between 19.05%-20.5% over a wide range of the threshold values where the best tuned clipping threshold is around 1.0 which corresponds to the error rate of 19.05%. This is higher than the 18.91% from our primal-dual method. Thus, using the new method presented in the paper, we do not need to tune the hyper-parameter of clipping threshold while obtaining lower errors. In addition, we also observe that the proposed primal-dual BPPTT converges faster than the traditional BPTT with gradient clipping. And an important observation from this comparison is that by imposing such a constraint on the recurrent weights, it does not restrict the performance of RNN at least on TIMIT dataset. We will test our algorithm on the dataset you suggested for the long-term dependency performance in future work. Again, we appreciate your feedback on our paper.\"}", "{\"title\": \"review of A Primal-Dual Method for Training Recurrent Neural Networks Constrained by the Echo-State Property\", \"review\": \"Summary:\\n------------\\n\\nThe paper introduces a primal-dual method for training RNNs as well as a few other structural changes to the base model. 
The algorithm is tested on the TIMIT dataset.\", \"comments\": \"--------------\\n\\n I think my main observation about the paper is that it misinterprets the exploding gradient problem/vanishing gradient problem as well as the echo state property.\\n\\nSimply put, comparing the norm of the recurrent weight with 1 (for tanh) just gives either a necessary condition for gradients to explode. That means that there is a large volume of parameter configuration that do not satisfy this condition and yet the gradients do not explode.\\nThis constraint, in some sense, over-restricts the capacity of the model. It is true that the empirical evidence shows that the model still performs well (arguably better than the unconstrained one) but this is just a data point (a single dataset) and is hard to draw conclusion such as the new method has `superior performance`. It is far from clear to me that this is true, I would argue that by excluding that large volume of possible values, with some positive probability this new learning algorithm is inferior on several tasks.\\nThe echo state property is also an approximation of what exactly we want from the model, as most of the analysis of recurrent networks. Basically the echo state property assumes that if **no input** is presented the, now, dynamical system will converge to a point attractor at the origin. In reality we care about the behavior of the model in the presence of input. This is mathematically much more difficult to analyze though there is some effort done e.g.Manjunath, G., Jaeger, H. (2013): Echo State Property Linked to an Input: Exploring a Fundamental Characteristic of Recurrent Neural Networks.\\n\\nFurther more, I would argue that the model trained with the primal-dual method will suffer a lot more from the vanishing gradient problem and it is potentially one of the crucial factor that makes the ARMA variant outperform the other variations (as it explicitly uses a time window)\\n\\n\\nOther comments\\n--------------------\\n(1) I feel wordings like 'superior performance', 'demonstrate the effectiveness' can be misleading. Providing the number 18.86% in the abstract, without a point of comparison, is also not very useful.\\n(2) I'm not sure I understand why stacked RNN suffer from an overfitting problem ? And would there be a reason, if they do suffer, for this overfitting to not be addressed by say weight noise or some other regularization ?\\n (3) Eq. (10). You do not want to take an average over the sequence length. That will bias you towards short sequences\\n(4) Most of the equation on page 5 are not very useful (showing the cost and gradients for softmax and cross-entropy vs linear units and square error). In general I feel there are more equation then necessary, making the text harder to read\\n(6) Minimizing the sum of each column is not the same as minimizing only the column with the maximal sum. You are putting pressure on certain columns even when they are not the maximal one.\\n(7) There are hardly any details about the experiments that you run.\"}", "{\"reply\": \"Just to let you know that an updated version incorporating your feedback has been posted on arxiv. Thanks again for your effort in reviewing the paper!\"}", "{\"title\": \"review of A Primal-Dual Method for Training Recurrent Neural Networks Constrained by the Echo-State Property\", \"review\": \"This work presents a method for training RNNs that achieves good results on TIMIT. The method applies\\na deep recurrent neural network similar to Graves et al. 
(ICASSP 2013) and achieves good results on TIMIT. The main\\nnovelty here is the introduction of a new training method which enforces a constraint of a certain\\neasily-computable matirx norm using lagrange multipliers. As a result, the network improves its\\nperformance from 19.05% to 18.86%. The work also introduces a number of RNN variants that get their\\ninformation from a fairly wide range of frames. \\n\\nThe main idea is related to previous analyses of the exploding gradient problem. The approach taken\\nby this work is to force the RNN's weights to be small at all times, thus ensuring that the RNN\\nnever has exploding gradinets. \\n\\nThere are several weaknesses in the paper. First, it is quite verbose, spending pages on standard\\ndefinitions of RNNs and derivations of their learning rules. Second, the paper chose to use\\nlagrange multipliers, with another lengthy derivation, but did not compare with the much simpler\\nprojected gradient descent, where we simply shrink any weights that become too large (i.e.,\\nif ||w||_1 is too large, scale it down until it is of the right size). And third, the improvement\\nover previous work is quite small.\"}", "{\"reply\": \"Just to let you know that an updated version incorporating your feedback has been posted on arxiv. Thanks again for your effort in reviewing the paper!\"}", "{\"reply\": \"Thanks again for your further comments. For equation 10, typically, we choose a fixed number of backpropagation steps T (which is typically used in BPTT), so that this 1/T will be absorbed into the step-size. That is, if we do gradient descent on a function J = 1/T J1, we will have w_t = w_{t-1} - mu*(grad J) = w_{t-1} - mu*(grad 1/T J1) = w_{t-1} - (mu/T) grad J1. In other words, we are effectively using a step-size of mu/T to J1 that is the sum instead of average. Of course, I agree with you that if T changes over different training sequences, then it will make a difference.\\n\\nRegrading the claims and presentation of the paper, we will incorporate the changes into the paper so that the contribution of the paper is re-stated as we discussed. We will update the paper arxiv soon. Also, in future work, we will do experiments on the synthetic data too to further check the behavior of the algorithm there.\"}", "{\"reply\": \"I am still confused about equation 10. I've checked the reference you pointed to, but I could not find the division by T in that paper (e.g. eq. 3 in that paper is just a sum). I do feel that if you are running your model over sequences of different length, you are weighting the error done at each step differently (according to the length of the given sequence) which is not exactly what you want.\\n\\nRegarding point 1. I guess I agree with what you are doing to do, and from that perspective is fine. I would tend to agree that there might be a sufficiently large chunk of problems on which long term correlations are irrelevant and you only need RNNs that can deal with short term very well (which in my opinion is what you get when training a model under the constraint that the echo state property is respected). But I'm not finding the paper exactly stating that. You can see phrases like : 'Below, we describe a more rigorous and effective approach to learning RNNs developed in this work' on page 5 bottom, just before you start introducing the new method. 
In the conclusion you say: 'Fourth, we show experimentally that\\nthe new training method motivated by rigorous optimization methodology performs better than the previous methods Mikolov et al. (2011); Pascanu et al. (2013) of learning RNNs using heuristical\\nrules of truncating gradients during the BPTT procedure.' To me this states that the new method is superior overall compared to other techniques. I'm saying this is probably not true, since you drastically reduce the family of models you are allowing yourself to visit during training. And more exactly you are reducing yourself to those kind of models that can not deal with long term information very well. I might be wrong but I do not see any evidence in the paper against this. E.g. could a model trained this well solve the synthetic datasets from 'On the difficulty of training RNNs' ? Can you get as good solutions as those there? If not, you should use a more mild language in your claims. If not for any other reason, just because it will probably confuse future readers, expecting some different from this method.\"}", "{\"reply\": \"Just to let you know that an updated version incorporating your feedback has been posted on arxiv. Thanks again for your effort in reviewing the paper!\"}", "{\"title\": \"review of A Primal-Dual Method for Training Recurrent Neural Networks Constrained by the Echo-State Property\", \"review\": \"Pro:\\n\\u2022\\tThe concept of using AR/ARMA for RNN training, and formulating this as a convex optimization problem, and interesting.\", \"cons\": \"\\u2022\\tA new idea (AR/ARMA) is introduced as yet another technique to deal with the vanish gradient problems when training RNNs. You touch in the related work on how this is different than other ideas in this space. I would have liked to see empirical comparisons in the experimental section, at least with a few of the ideas (i.e. Martens\\u2019 HF approach) to better understand the value of this technique.\\n\\u2022\\tARMA seems similar to the bidirectional RNN proposed by Graves (ICASSP 2013) and experiments or discussion comparing this should be done.\\n\\u2022\\tThe experimental results are not convincing. \\no\\tDNNs trained with filter-banks on TIMIT are around 20% (see Abdel-rahman Mohamed, George Dahl, Geoffrey Hinton, 'Acoustic Modeling using Deep Belief Networks'. Accepted for publication in IEEE Trans. on Audio, Speech and Language Processing) while your RNN results are around 28%\\no\\tIt seems that most of the gains you are getting are by using DNN-based features as input in to the RNN. The idea of extracting features from the DNN and then training another network is not new in speech recognition (see Z. T\\u00fcske, M. Sundermeyer, R. Schl\\u00fcter, and H. Ney. \\u201cContext-Dependent MLPs for LVCSR: TANDEM, Hybrid or Both?\\u201d, in Proc. Interspeech, September 2012). What happens if you train your DNN in the same way, how would this compare to the RNN?\\n\\u2022\\tReferences could be improved, citing the most relevant papers in the field rather. Will indicate this in my comments below.\", \"here_are_my_comments_per_section\": \"\\u2022\\tPage 1, section 1: When describing papers that cut recognition errors for speech, the main papers are Hinton 2012, Dahl 2011 which you have. In addition, pls include F. Seide\\u2019s DNN SWB paper from Interspeech 2011, B. Kingbury\\u2019s HF paper from Interspeech 2012 and T. Sainath\\u2019s paper from ICSASSP 2013. These 3 papers showed the biggest impacts in reducing WER across many LVCSR tasks. 
Pls remove Deng2013b.\\n\\u2022\\tPage 1, section 1: Its not clear from the intro how AR/ARMA ideas address the vanishing gradient issues with RNNs discussed in para 1. Pls elaborate on this more\\n\\u2022\\tSection 2, page 2: Oriyal Vinayls RNN paper should be cited as well along with Maas, Graves, work.\\n\\u2022\\tSection 2, page 2: in describing DNNs ability to extract high-level features, Yann Lecun has a paper on this and should be cited: Y.LeCun,\\u201cLearningInvariantFeatureHierarchies,\\u201dinEuropeanCon- ference on Computer Vision (ECCV). 2012, vol. 7583 of Lecture Notes in Computer Science, pp. 496\\u2013505, Springer.\\n\\u2022\\tSection 3.2, page 3: Why did you use sigmoid instead of ReLU ,which has been shown extensively to work much better on TIMIT\\n\\u2022\\tSection 3.2, page 3: you are move eq (3) as you don\\u2019t use it in the paper\\n\\u2022\\tSection 3.2, page 3: The ARMA model, which looks into the future, related to the biodirectinal RNN proposed by Graves (ICASSP 2013). The motiviation seems similar and thus differences should be clarified.\\n\\u2022\\tSection 3.3, page 4: most of this is review from other papers and does not need 2 pages of explanation. Just summarize the main high level points so more of the paper can be focused in Section 4 on your novel contributions\\n\\u2022\\tSection 4, page 6: I\\u2019m not sure why you say RNNs require relatively short memory of just 30 frames. Oriyal Vinyals\\u2019 RNN paper shows that 60 is more reasonable. In addition, graves uses an LSTM framework which allows for more long-range dependencies which he shows is better than an RNN.\\n\\u2022\\tSection 5, page 9: The TIMIT paper to be cited is Kai Fu Lee\\u2019s paper which describes the standard protocol, not (Hintin 2012 and Deng 2013a): K. F. Lee and H. W. Hon, \\u201cSpeaker-independent Phone Recognition \\nUsing Hidden Markov Models,\\u201d IEEE Transacations on Acoustics, \\nSpeech and Signal Processing, vol. 37, pp. 1641\\u20131648, 1989 \\n\\u2022\\tSection 5, page 10: I\\u2019m not convinced by the experimental results, as indicated above. Further comparison should be done with other RNN training methods, as well as with stronger DNN baselines.\"}", "{\"reply\": \"Thank you for your kind feedback on the paper. Please find our response to your main comments below.\\n\\n1.\\tThe Echo-state property is a sufficient condition (instead of necessary condition) for avoiding exploding gradient. That is, once it is satisfied, there would be no exploding gradient problem. In this paper, our main point is that with such a sufficient condition/constraint, we can train RNN in a principled way that avoids the exploding gradient naturally and performs well on the TIMIT dataset. We are not claiming that this is the only correct way to train RNN. And of course, there could be possibly better approach that is able to exploit the parameter space that goes beyond this constraint. What we are trying to convey here is that, with such constraint, it can be relatively easy to train the RNN in a principled way, and also attain a satisfactory performance level.\\n\\n2.\\tRegarding the question \\u201cEq. (10). You do not want to take an average over the sequence length. That will bias you towards short sequences \\u201c, there might be a misunderstanding here. 
The cost in (10), either written in 1/T sum_{t=1}^T J_t(y_t,d_t) or sum_{t=1}^T J_t(y_t,d_t) are used to measure how good the predicted labels {y_1, y_2, \\u2026, y_T} matches the true labels {d_1, d_2, \\u2026, d_T} over t=1,\\u2026,T. This cost is a standard formulation for training RNN in other literatures too (e.g., R. Pascanu, T. Mikolov, Y. Bengio, \\u201cOn the difficulty of training recurrent neural networks\\u201d, 2013). And T here means the number of back propagation steps through time.\\n\\n3.\\tRegarding the question \\u201cMinimizing the sum of each column is not the same as minimizing only the column with the maximal sum. You are putting pressure on certain columns even when they are not the maximal one.\\u201d, we apologize for for the possible confusion. We would like to clarify that we are not minimizing the maximum sum of the columns. Instead, we are imposing constraint on the maximum sum of the columns (||W||_{inf} < gamma) when minimizing the negative cross entropy between the predicted labels and the true labels. And another thing we would like to clarify is that this is equivalent to imposing constraint on the sum of each column. In other words, max{s_1, \\u2026, s_N} < r \\u21d4 s_1<r, s_2<r, \\u2026 s_N<r, where s_k is the absolute sum of the k-th column. To see this, first we note that the left hand side implies the right-hand side immediately since the max of {s_1, \\u2026 s_N} being less than r means each of them will be less than r. Next, we can also see the right-hand side means the left-hand side because if each of {s_1,\\u2026, s_N} is less than r, then the maximum of them will also be less than r. In other words, these two conditions are equivalent. Moreover, this technique is a standard approach that is widely used in optimization (e.g., S. Boyd and L. Vandenberghe \\u201cConvex Optimization\\u201d, page 150-151).\"}" ] }
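To make the constraint being debated above concrete (every column of the recurrent weight matrix having absolute sum at most some radius, which is equivalent to bounding the maximum absolute column sum), here is a minimal numpy sketch of the per-column soft-thresholding projection the first author alludes to when citing Parikh and Boyd. Note the paper itself enforces the constraint through a primal-dual (Lagrangian) update rather than an explicit projection, and this sketch uses the common sort-based threshold where the discussion mentions bisection; all names and values here are illustrative only.

```python
import numpy as np

def project_column_l1(v, r):
    """Euclidean projection of vector v onto the l1 ball of radius r
    (soft-thresholding; the threshold is found by the usual sort-based rule)."""
    if np.abs(v).sum() <= r:
        return v
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    ks = np.arange(1, len(u) + 1)
    k = np.nonzero(u - (css - r) / ks > 0)[0][-1]
    theta = (css[k] - r) / (k + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def project_recurrent_weights(W, r):
    """Project W so that every column's absolute sum is at most r,
    i.e. the maximum absolute column sum is bounded by r."""
    return np.column_stack([project_column_l1(W[:, j], r)
                            for j in range(W.shape[1])])

# Example: clip a random recurrent matrix to radius 0.9 after a gradient step.
rng = np.random.default_rng(0)
W = rng.standard_normal((50, 50)) * 0.3
W = project_recurrent_weights(W, 0.9)
assert np.abs(W).sum(axis=0).max() <= 0.9 + 1e-9
```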
kkUZ1FHlLaPAf
Learning Paired-associate Images with An Unsupervised Deep Learning Architecture
[ "Ti Wang", "Daniel L. Silver" ]
This paper presents an unsupervised multi-modal learning system that learns associative representation from two input modalities (channels) such that input on one channel will correctly generate the associated response at the other channel and vice versa. In this way, the system develops a kind of supervised classification model meant to simulate aspects of human associative memory. The system uses a deep learning architecture (DLA) composed of two input/output channels formed from stacked Restricted Boltzmann Machines (RBM) and an associative memory network that combines the two channels. The DLA is trained on pairs of MNIST handwritten digit images to develop hierarchical features and associative representations that are able to reconstruct one image given its paired-associate. Experiments show that the multi-modal learning system generates models that are as accurate as back-propagation networks but with the advantage of unsupervised learning from either paired or non-paired training examples.
[ "channels", "images", "learning system", "channel", "system", "dla", "unsupervised", "associative representation", "input modalities" ]
submitted, no decision
https://openreview.net/pdf?id=kkUZ1FHlLaPAf
https://openreview.net/forum?id=kkUZ1FHlLaPAf
ICLR.cc/2014/conference
2014
{ "note_id": [ "YP3gsIWOuRsJ5", "WP-vWodmvRHtG", "LLqZOlEyzAOfh", "DD_QMgXNwToHG", "cOEBOC2QB3OyI", "77VF4PBnF67gl", "VKHoV-M4TDq6l", "HHsj_oAEjGHvi" ], "note_type": [ "review", "review", "review", "review", "review", "review", "review", "review" ], "note_created": [ 1391379540000, 1388090820000, 1389415080000, 1391867760000, 1394937240000, 1390944660000, 1389389760000, 1388872680000 ], "note_signatures": [ [ "anonymous reviewer ded8" ], [ "Daniel Silver" ], [ "Daniel Silver" ], [ "anonymous reviewer c0c4" ], [ "Daniel Silver" ], [ "anonymous reviewer acbe" ], [ "Daniel Silver" ], [ "Yann LeCun" ] ], "structured_content_str": [ "{\"title\": \"review of Learning Paired-associate Images with An Unsupervised Deep Learning Architecture\", \"review\": \"Summary:\\n\\nThis paper presents a neural network that predicts a structured output. The authors proposed a new algorithm for training this structure. The new algorithm firstly pretrains the neural network as a stack of RBMs, and subsequently finetunes the model by untying the recognition and generation weights (which is as authors mention reminiscent of wake-sleep algorithm.)\", \"novelty\": \"Unfortunately, I don't see any novelty in the proposed approach. The exactly same neural network was proposed earlier by Ngiam et al. (2011). Also, it is very close to the multi-modal DBM proposed by Srivastava & Salakhutdinov (2012). The authors may argue that Ngiam et al. (2011) did not attempt to actually generate a missing modality, but precisely that was done by S & S (2012). Furthermore, I don't think the proposed wake-sleep-type algorithm should be better than finetuning the whole model (or each path) as an (denoising) autoencoder.\", \"pros\": \"(1) Representation learning from multiple modalities is important.\", \"cons\": \"(1) Experiments are weak.\\n(2) Explanation of the methods could be done better.\\n(3) The relationship to the previous research should be made more explicit and clear.\", \"detailed_comments\": [\"Sec 2. 1st paragraph: 'back-prop ANNs are ... not as good for reconstructing, or recalling a pattern' => I don't really understand this sentence. Does it mean that the backpropagation (or SGD with backprop) is not able to find a solution of a neural net even if the input and target were constructed explicitly to make the neural net reconstruct corruption or noisy input?\", \"Sec 2.1, the end of the 1st paragraph: I don't think the last sentence is correct. First of all, what is the energy state? Isn't it a simple scalar corresponding to the log of unnormalized probability? Then, if two orthogonal patterns were in data, shouldn't their probabilities (or energy states) be close to each other regardless of their orthogonality?\", \"'Weight is updated until the global energy E reduces below a threshold' What is the global energy? Do you mean the average of all training samples' energies?\", \"Sec. 2.1., the end of the 2nd paragraph: 'w_ij is equal to the probability of feature h_j given input v_i'. Why is it so? Shouldn't the probability of h_j given v_i require marginalizing out all v_j (j\", \"eq i), which probably is computational intractable and without any analytical form?\", \"Sec. 2.2, the end of the third paragraph: I believe many people consider convent as one of deep learning methods as well.\", \"Sec. 3.1, 2nd paragraph: 'RBM is unable to recall patterns when only half of the visible neurons are given correct pattern values' I completely disagree with this sentence. It may highly depend on data as well as the model size. 
On MNIST (which the authors used for their experiments), I believe reconstructing the missing half is not too difficult and can be done pretty well with a reasonably large and well-trained RBM.\", \"Sec. 3: What is noticeably missing in Sec. 3 is how the authors actually reconstruct the missing modality given the other modality. Is it simply a single forward pass from one modality to the other using the recognition, then generation weights? Does it involve sampling at each layer in between? If so, it's likely that p(x2 | x1) has multiple modes (x2 missing modality, m1 observed modality). How do you resolve among multiple possible reconstructions? If there's no sampling involved, why does the proposed model work better than a conventional autoencoder (two-way) trained with SGD and backprop? Is it possible that the problem of the conventional NN is due to learning difficulty only?\", \"Sec. 3.1, the last paragraph: I believe S&S (2012) did not do supervised finetuning for all experiments. For instance, for image retrieval, multimodal DBM does not require any discriminative finetuning.\", \"Sec. 4: In general, I'm not sure why the authors had to use only that small dataset. And, due to this small size dataset and the pretraining strategy used during the experiments, each parameter of the BP-ANN gets unfairly smaller number updates.\", \"Sec. 4.1, the last paragraph: why do you suspect that? I think it's simply that BP-ANN wasn't trained enough.\", \"Sec. 4.2: you should state the type of noise you used.\", \"Sec. 4.2, the last paragraph: 'DLAs attempt to probabilistically differentiate features from noises' I think I understand what the authors are trying to say, but I'm not entirely sure if that's correct or I'm not misunderstanding. Anyway, this sentence sounds somewhat weird (if not wrong).\", \"Sec. 4.2, the last paragraph: 'BP ANNs .. features ... are for the purpose of mapping and not reconstruction' I'm lost here. How does 'mapping from one modality to the other' differ from 'reconstructing the other modality from one modality'?\", \"Sec. 4.3, Results and Discussion: 'using non-paired examples to better develop .. associate learning system' Either this sentence has been mistyped, or I'm misunderstanding the Fig. 11 completely. It seems to me that adding non-paired examples does not help at all. Also, the remaining of the paragraph after this sentence is extremely difficult to understand, and I'd suggest to rephrase it.\"]}", "{\"review\": \"We just noticed that in the last figure, Figure 11, the odd->even value for model 4 is incorrect, it should be 89.0, not 90.88, and the error bars are approximately the same height as the other odd-->even stats. The average of 90.34 for model 4 and its error bars are correct. Will ensure correction is made in final version.\"}", "{\"review\": \"Ok .. Sorry.. apparently I did not complete the update on arXiv. Completed that today, and the new version should appear on Jan 14. Regardless, the only difference in the paper is as stated above.\"}", "{\"title\": \"review of Learning Paired-associate Images with An Unsupervised Deep Learning Architecture\", \"review\": \"The authors propose a an algorithm, in the setting of multimodal data, for learning to generate one modality given the other. The algorithm contains stack of RBM's for each modality, a join RBM on the top and then a predictor of joint probabilities from one modality. The experiments are too weak to demonstrate strength of the proposed approach.\", \"novelty\": \"Small. 
A standard multimodal deep belief net with an addition of a relatively obvious idea, which is anyway quite similar to previous ideas.\", \"quality\": \"Experiments are too weak.\", \"details\": \"First, the main complaint - the weakness of the experiments. Whether feedforward network Figure 6 can learn the task depends on how you optimise it. I think if you do a good optimisation then it would do it. In particular you can take your pre trained deep network Figure 4 and, go up the left and then down the right as your feedforward net - which is essentially the idea in 'Multimodal deep learning' (http://ai.stanford.edu/~ang/papers/icml11-MultimodalDeepLearning.pdf).\\n\\nThe task is also too simple - may be it would be at least good to pair a random digit from one class with a random digit from another. But even better (really necessary) it would be to do some different sets of data as in the above mentioned paper.\", \"other_details\": \"Page 1, paragraph 2 - I wouldn't say deep learning typically uses RBM's. (e.g. feedforward networks, auto encoders\\u2026)\\nPage 2, Background - I don't think we have an ability to recover *complete* information. Also 'many do not work in the same fashion as the human visual system' - and which do?\\u2026\\nPage 3, 4th paragraph - you should write the cost function between the two probabilities\\nPage 7 - What does it mean 'Features \\u2026 for the purpose of mapping and not reconstruction'\\nPage 8 - Results and Discussion - First you say that the results are not significant (within error bars) and then you draw a conclusion as if they were.\"}", "{\"review\": \"My thanks to the reviewers for their great comments. My apologies for not providing feedback sooner. New responsibilities had to be dealt with in Jan and Feb. This having been said, we are now moving forward with the advancement of this initial research, most notably the use of multiple mixed modalities (ear - audio in, voice - audio out, eye - image in, drawing - image out). One item I will clarify for the reviewers -- A method such as BP that can clearly do a great job fine-tuning the weights between one of the RBM stacks and another via the top layer, does not scale well to more than two modalities. This is the reason we use the back-fitting fine-tuning method - it trains the recognition weights local to each RBM stack and the top layer - so it does scale to multiple modalities.\"}", "{\"title\": \"review of Learning Paired-associate Images with An Unsupervised Deep Learning Architecture\", \"review\": \"This paper is trying to employ DBNs at the traditional task of associative memory. This is an interesting problem as the human brain is thought to contain an \\u201cassociation cortex\\u201d dedicated to combining sensory modalities. However, while the paper is written reasonably well, it did not introduce any significantly new learning method. The experimental section is weak and leaves this reviewer unconvinced of their conclusions.\\n\\nThe framework of having dual modalities has been proposed by the original DBN paper [8] (the reference is missing a third author). Contrastive wake-sleep algorithm is proposed for unsupervised learning in [8], which seems more principled than what this paper proposes in the paragraph starting with \\u201cTo fine-tune channel 1\\u2026\\u201d.\\n\\nIn the experimental section, the natural comparison is to an autoencoder (e.g. net in figure 6), which is also trained in an unsupervised manner. 
However, it is hard to believe that a network with 500-1000-500 hidden nodes can\\u2019t reconstruct better than what is shown in Fig. 7. last row. What kind of learning algorithm was used? CG or SGD, did the optimization converge? The authors also ref \\u201cHinton\\u2019s software\\u201d and it\\u2019s 1.15% error on MNIST. However, that is a totally different net with 10 1-of-k label units. It is unclear what the authors mean by using hinton\\u2019s software: was there no additional learning for your particular image pair association task been performed? If that is the case, then the results are believable but not a good baseline.\", \"several_suggestions_to_improve_the_paper\": \"compare with the contrastive wake-sleep algorithm and autoencoder trained with CG/SGD. Investigate more in depth on your proposed algorithm, is it approximating some objective? You should truly use multimodal data like images and speech, since it is so often mentioned in the introduction/background sections. It is a lot of work to combine audio and natural images are much higher dimensionality, but this would make the paper stronger.\\n\\nThe claim in section 2.2 that \\u201cThe mammalian brain is..\\u201d is a huge claim which is not proven. It is the current mainstream theory, but you should not treat it as a fact. Note that models like [14] do not even do learning except the last SVM layer.\\n\\nThe claim that RBM can\\u2019t recall patterns when half of is corrupted is not convincing and maybe should be qualified to a particular task/learning algorithm. There are papers on rbm and noise+occlusion which suggest otherwise. It is true that how RBM performs is dependent on how it is trained (e.g. using fast pcd is critical).\"}", "{\"review\": \"Thanks .. An update was made on Dec 30, 2013.\"}", "{\"review\": \"Daniel, you should update your paper right now. There is no limit on how many times you can update your paper.\"}" ] }
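The reviews in the record above repeatedly contrast the paper's deep learning architecture with a plain cross-modal feed-forward network trained end-to-end by backprop (the "net in figure 6" baseline). The sketch below shows one way such a baseline could be set up for the paired-digit task; it is an illustrative assumption, not the paper's DLA or its back-fitting fine-tuning, and the `paired_loader` yielding batches of flattened (input digit, associated digit) image pairs in [0, 1] is hypothetical. Layer sizes loosely follow the 500-1000-500 configuration mentioned in the reviews.

```python
import torch
import torch.nn as nn

class CrossModalNet(nn.Module):
    def __init__(self, dim=784, hidden=500, code=1000):
        super().__init__()
        # recognition-style path for the given digit and generation-style path
        # for its paired associate, loosely mirroring 500-1000-500 hidden sizes
        self.encode = nn.Sequential(nn.Linear(dim, hidden), nn.Sigmoid(),
                                    nn.Linear(hidden, code), nn.Sigmoid())
        self.decode = nn.Sequential(nn.Linear(code, hidden), nn.Sigmoid(),
                                    nn.Linear(hidden, dim), nn.Sigmoid())

    def forward(self, x_a):
        return self.decode(self.encode(x_a))

def train(model, paired_loader, epochs=10, lr=1e-3):
    # paired_loader (hypothetical) yields (x_a, x_b): flattened digit pairs in [0, 1]
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCELoss()  # pixel-wise reconstruction of the missing modality
    for _ in range(epochs):
        for x_a, x_b in paired_loader:
            opt.zero_grad()
            loss = loss_fn(model(x_a), x_b)
            loss.backward()
            opt.step()
    return model
```

A baseline of this kind is what the reviewers propose comparing against, since how well it is optimized determines whether the reported gap over backprop-trained networks is meaningful.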
NPFdalK3djNuI
A Generative Product-of-Filters Model of Audio
[ "Dawen Liang", "Mathew D. Hoffman", "Gautham Mysore" ]
We propose the product-of-filters (PoF) model, a generative model that decomposes audio spectra as sparse linear combinations of 'filters' in the log-spectral domain. PoF makes similar assumptions to those used in the classic homomorphic filtering approach to signal processing, but replaces hand-designed decompositions built of basic signal processing operations with a learned decomposition based on statistical inference. This paper formulates the PoF model and derives a mean-field method for posterior inference and a variational EM algorithm to estimate the model's free parameters. We demonstrate PoF's potential for audio processing on a bandwidth expansion task, and show that PoF can serve as an effective unsupervised feature extractor for a speaker identification task.
[ "model", "pof", "generative", "audio", "generative model", "audio spectra", "sparse linear combinations", "domain", "similar assumptions", "classic homomorphic" ]
submitted, no decision
https://openreview.net/pdf?id=NPFdalK3djNuI
https://openreview.net/forum?id=NPFdalK3djNuI
ICLR.cc/2014/conference
2014
{ "note_id": [ "V8YrFBBMrfzkS", "6CSfCJNrFwTBG", "ER-3R_wXBkE96", "hh_ljY71lXjL8", "yTaSy6E1v4TmH", "mmzBd1sABZxDo", "ik7rkJpxW08SP" ], "note_type": [ "review", "comment", "comment", "comment", "review", "review", "comment" ], "note_created": [ 1391481540000, 1392728160000, 1392759540000, 1392727860000, 1392759420000, 1391486280000, 1392727560000 ], "note_signatures": [ [ "anonymous reviewer 6e74" ], [ "Dawen Liang" ], [ "Matt Hoffman" ], [ "Dawen Liang" ], [ "Matt Hoffman" ], [ "anonymous reviewer 62eb" ], [ "Dawen Liang" ] ], "structured_content_str": [ "{\"title\": \"review of A Generative Product-of-Filters Model of Audio\", \"review\": \"Summary: This paper describes a simple generative model for audio spectra from a single sound source. It explains them as products of spectral bases, i.e., sums of log-spectral bases. The paper describes a mean field variational approximation for the posterior of the activations of each basis given an observed spectrum, and a variational EM algorithm for learning the model parameters. It applies the model to performing bandwidth extension for unknown talkers and frame-based speaker identification, outperforming reasonable baseline approaches in both.\", \"novelty_and_quality\": \"As far as I know, this method is novel. The background literature review correctly points to the connections with traditional methods of homomorphic filtering and non-negative matrix factorization. While it is not entirely clear how widely this method can be applied, especially with the high current computational cost of learning the model parameters compared to a traditional fixed basis like mel frequency cepstral coefficients (MFCCs), it does significantly outperform baseline systems on the two experimental tasks described. Specifically, real sounds only rarely involve a single source, but, as mentioned in the paper, this model could provide a prior on each source in a multi-source mixture.\", \"pros\": [\"Interesting model\", \"Shows good experimental results\", \"Well written\"], \"cons\": [\"Some mathematical details could be spelled out more explicitly\", \"Possibly limited applicability\"], \"minor_comments\": \"\", \"page_2\": \"'Another attractive feature is the symmetry between the excitation signal e[n] and the impulse response h[n] of the vocal-tract filter\\u2014convolution commutes.'\\n\\nPlease explain why this is an attractive feature of the model. It seems to be an additional nuisance degree of freedom in the model.\", \"page_4\": \"The definition of q(a_{tl}) uses a_{tl}, where everything else uses a_{lt}. Is this intentional?\", \"page_6\": \"'We can approximate this posterior using the variational inference algorithm from section 4.1'\\n\\nCan you show this more explicitly?\"}", "{\"reply\": \"We thank the reviewer for the helpful comments and suggestions. We will try to address some of these below: (There is a revised version on arXiv now.)\\n\\n***Bandwidth expansion experiment***\\n\\nWe respectfully disagree with the comment that 'The gains using the proposed model are however not very significant.' Compared with the best competing approach our PoF-based approach achieves a gain of more than half a point on Hu and Loizou's OVRL composite objective measure, going from a predicted subjective evaluation of 'poor-to-fair' to 'fair-to-good'. 
Admittedly the results on the STOI intelligibility metric are less impressive, but given that telephone-quality speech is already quite intelligible we feel that this metric is less meaningful than the OVRL metric.\\n\\n***Speaker identification experiment***\\n\\nIn our most recent submission we have expanded this experiment from the original 10-speaker task to a much more difficult 200-speaker task, and explored how PoFCs and MFCCs perform as more data becomes available from each speaker. In all cases PoFCs dramatically outperformed MFCCs.\\n\\n***Applicability to speech/phoneme recognition***\\n\\nThe reviewer asks 'Can phoneme recognition experiments be performed using TIMIT using the derived features?' This is indeed a natural question, and we ran a phoneme recognition experiment to answer it. We trained a deep neural network (using dropout and rectified linear nonlinearities) using PoFCs and another using the log-output of a melfilterbank. The network using PoFCs achieved a phone error rate (PER) of ~24%, and the network using the raw filterbank achieved a PER of ~21%, which is near the current state of the art on TIMIT.\\n\\nThis suggests that, although PoFCs do capture much of the information in the spectrum, the deep network works best with lower-level data. This is perhaps not surprising; the deep network learns features that are tuned specifically for this phoneme recognition problem, whereas PoFCs are trained without supervision. We think that PoFCs are more likely to be useful in settings where less data per class is available, such as in the speaker recognition problem described in section 5.2.\"}", "{\"reply\": \"Please disregard the comment about the new speaker ID experiment\\u2014we found a bug in our evaluation that invalidates those results.\"}", "{\"reply\": \"As for the computational cost:\\n\\nIn the interest of simplicity we developed learning and inference algorithms based on LBFGS, whose convergence is reliable but slow. We are currently exploring alternate algorithms that will bring down the computational cost and make the method more practical for industrial applications.\"}", "{\"review\": \"Please ignore the 'v3' version of the paper on the arXiv; we found a bug in the new speaker identification experiment that invalidates those new results. A new version reverting to the previous results has been submitted and will be posted shortly. We sincerely apologize for this mistake.\"}", "{\"title\": \"review of A Generative Product-of-Filters Model of Audio\", \"review\": \"Very interesting, well written paper describing a novel model for explaining speech using a set of filters trained in a generative fashion. The paper describes a \\u201cproduct of filters\\u201d model, proposes a mean-field method for posterior inference and a variational EM algorithm to estimate the model\\u2019s free parameters. The paper draws parallels with the proposed approach and homomorphic filtering methods, NMF and its variants for speech analysis.\\n\\nThe paper however does not have a comprehensive experimental framework to evaluate the usefulness of the proposed model. Bandwidth expansion is a natural candidate for generative models and is an example used with similar techniques in vision - for example the recently proposed sum-product networks [Poon and Domingos, 2011]. The gains using the proposed model are however not very significant. 
\\n\\nAlthough the authors do state that they are not attempting to build a state-of-art speaker identification system, the experiment is nevertheless performed in a limited setting. It is hence not possible to completely evaluate the usefulness of features derived using the proposed technique. Can the features derived using the proposed framework generalize to be useful for more realistic speaker identification experiments? Do they generalize well? A natural additional question to ask would be \\u2013 are these features useful in recognizing speech sounds since this is a generative model of speech? Can phoneme recognition experiments be performed using TIMIT using the derived features? Do the learnt bases represent classes of sound?\"}", "{\"reply\": \"We thank the reviewer for the helpful comments and suggestions. We will try to address some of these below. (There is a revised version on arXiv now.)\\n\\n***Page 2: 'Another attractive feature is the symmetry between the excitation signal e[n] and the impulse response h[n] of the vocal-tract filter\\u2014convolution commutes.'\\nPlease explain why this is an attractive feature of the model. It seems to be an additional nuisance degree of freedom in the model.***\\n\\nThe attractive aspect of this feature is that it allows us to avoid building separate excitation and vocal-tract models; although in a sense it does add a nuisance degree of freedom, the symmetry simplifies the model.\\n\\n***Page 4: The definition of q(a_{tl}) uses a_{tl}, where everything else uses a_{lt}. Is this intentional?***\\n\\nThanks for catching this---we have corrected it.\\n\\n***Page 6: 'The periodic \\u201cexcitation\\u201d filters tend to be used more rarely, ... the excitation signal generated by a speaker\\u2019s vocal folds.' \\nThis statement seems to assume that multiple excitation bases cannot be combined into a valid excitation, while multiple vocal tract shape bases can be combined to create a valid vocal tract shape. Can you prove or demonstrate that this is the case?***\\n\\nThere is no explicit anticorrelation that discourages two 'excitation'bases from appearing together, but the values of alpha in figure 2(b) imply that the model uses each of these bases rarely, and that the chances of seeing any two of them in the same spectrum are small. By contrast, the much larger alpha values in figure 2(a) imply that each of those 'vocal-tract' bases are at least a little bit active in almost every spectrum.\\n\\n***Page 6: 'We can approximate this posterior using the variational inference algorithm from section 4.1'\\nCan you show this more explicitly?***\\n\\nWe added some description on how this was done. We will spell this out more explicitly in a subsequent revision if there is still confusion.\\n\\n***Possible applicability***\\n\\nWe have a recently accepted ICASSP paper where we applied the PoF model to the problem of speech decoloration/dereverbertion. The idea, given the model, is very straightforward -- we keep all the speech filters fixed and learn an extra coloration filter which is always on. The experimental results indicate that the PoF can effectively capture and remove the coloration.\"}" ] }
wxobw18IYOxu4
Group-sparse Embeddings in Collective Matrix Factorization
[ "Arto Klami", "Guillaume Bouchard", "Abhishek Tripathi" ]
Collective matrix factorization (CMF) is a technique for simultaneously learning low-rank representations based on a collection of matrices with shared entities. A typical example is the joint modeling of user-item, item-property, and user-feature matrices in a recommender system. The key idea in CMF is that the embeddings are shared across the matrices, which enables transferring information between them. The existing solutions, however, break down when the individual matrices have low-rank structure not shared with others. In this work we present a novel CMF solution that allows each of the matrices to have a separate low-rank structure that is independent of the other matrices, as well as structures that are shared only by a subset of them. We compare MAP and variational Bayesian solutions based on alternating optimization algorithms and show that the model automatically infers the nature of each factor using group-wise sparsity. Our approach supports continuous, binary and count observations in a principled way and is efficient for sparse matrices involving missing data. We illustrate the solution on a number of examples, focusing in particular on an interesting use-case of augmented multi-view learning.
[ "matrices", "embeddings", "structure", "collective matrix factorization", "technique", "representations", "collection", "shared entities", "typical example" ]
submitted, no decision
https://openreview.net/pdf?id=wxobw18IYOxu4
https://openreview.net/forum?id=wxobw18IYOxu4
ICLR.cc/2014/conference
2014
{ "note_id": [ "njaTjD3NdxzcC", "Fy6TFRhAT_wUd", "Ki1mKRAQP7Cdb", "A8K4fdGEbvfo0" ], "note_type": [ "comment", "review", "review", "comment" ], "note_created": [ 1392735540000, 1391811840000, 1391730660000, 1392735600000 ], "note_signatures": [ [ "Arto Klami" ], [ "anonymous reviewer 75c5" ], [ "anonymous reviewer 9dec" ], [ "Arto Klami" ] ], "structured_content_str": [ "{\"reply\": \"This is a joint response for both Anonymous 9dec and 75c5, since both reviews address similar issues.\\n\\n\\nWe thank both reviewers for pointing out the typos, and in particular the sloppy phrase used to describe basics of VB approximation. We have submitted a revised version that fixes these mistakes, as well as addresses the two other issues described below. It should be out by Feb 19th.\\n\\n\\n1) Regarding complexity: While several recent advances were indeed needed to derive the model, the resulting algorithm is reasonably straightforward. To address this issue we now added more detailed description of the algorithm in the Supplementary material. Furthermore, we will later add a link to an open source implementation in R (the documentation still needs a bit of polishing).\\n\\n\\n2) Regarding the scale of the experiments: The artificial data experiments are small mainly to emphasize the effects of jointly modelling multiple matrices. Similarly to how e.g. multi-task learning helps most with small sample sizes, properly modeling the private factors is more important when (some of) the entity sets are small -- given enough data even incorrectly specified models often work reasonably well. Having said that, scalable algorithms are needed especially in settings where we have few examples in important views (i.e. views we want to predict) and many examples for auxiliary data matrices.\\n\\nEven though scalability has not been our main focus, the recommender system application in Section 7.3 briefly illustrates the efficiency. In the revised version we now me mention that both of the recommender system data sets are in the order of 1 million observations and the results were obtained in a few minutes on a laptop; this is comparable to the computation time of the competing convex CMF (CCMF) method for a single set of regularization parameters, but CCMF needs cross-validation over two such parameters do be used in practice.\\n\\nWe have not tried the algorithm on massive collections, but we expect it could be scaled up fairly nicely with a bit of implementation effort. The slowest part is the gradient computation that would parallelize easily, and switching to stochastic gradients would also be possible. Note also that the time complexity of the proposed VB algorithm is effectively the same as the corresponding MAP problem: the small overhead is only due to the computation of the parameters of the diagonal covariance matrices, the updates of the mean being similar to the updates of the MAP algorithm.\"}", "{\"title\": \"review of Group-sparse Embeddings in Collective Matrix Factorization\", \"review\": \"Collective matrix factorization (CMF) is a method for learning entity embeddings by jointly factorizing a collection of matrices, each of which describes a relationship between entities of two types. The set of rows/columns of a matrix corresponds to an entity type, while each row/column of the matrix corresponds to a different entity of the type. This approach assumes that all dimensions of the embeddings of entities of a particular type are relevant for modelling all the relationships they are involved in. 
This paper extends CMF by relaxing this assumption and allowing embeddings to have dimensions used for modelling only some of the relationships the entities are involved in. This is achieved by extending the model to include a separate precision for each embedding dimension of each type to encourage group-sparse solutions. The authors propose training the resulting models using a variational Bayes algorithm and show how to handle both Gaussian and non-Gaussian observations.\\n\\nThe paper is nicely written and makes a small but novel extension to CMF. The resulting approach is simple, and seems scalable and widely applicable. The experiments are fairly small-scale but are sufficient for illustrating the advantages of the method.\", \"corrections\": \"In the section dealing non-Gaussian observations, the pseudo-data is referred to as Y instead of Z_m.\\n\\nThe description of variational Bayes as 'minimizing the KL divergence between the observation probability and ...' is not quite right, as it seems to describe KL(P||Q) instead of KL(Q||P).\", \"section_8\": \"'unability' -> 'inability'\"}", "{\"title\": \"review of Group-sparse Embeddings in Collective Matrix Factorization\", \"review\": \"The manuscript articulates a problem with earlier solutions to the Collective Matrix Factorization (CMF) problem in multi-view learning formulations and proposes a novel solution to address the issue. The concern is that current CMF schemes ignore the potential for view-specific structure or noise by implicitly assuming that structure must be shared among all views. The authors solve this problem by putting group-sparse priors on the columns of the matrices. This allows private factors that can be specific to one or even a subset of the matrices. Also note that the use of variational Bayesian learning by the authors provides a large reduction in computational complexity relative to the MAP estimates used in some of the prior literature.\\n\\nI agree with the importance of the problem being addressed since, clearly, the need to accommodate view- or subset-specific structure is going to be important in many real-world problems. Also noted is the elimination of the need for tunable regularization parameters.\\n\\nThere are a couple of typos in the first paragraph of section 4.2 (top right of page 4). The manuscript is heavily dependent on several of its sources for implementation details of the complete algorithm. This isn't a criticism, since the authors should not repeat details available elsewhere, but I think it is important to understand that this is necessitated by the complexity of the method, and this complexity is a small drawback and potentially an area for future improvement.\\n\\nAnother issue is that it would be valuable to see the proposed scheme compared to a wider variety of alternatives in the experimental section (mostly for context that elucidates the importance of CMF itself and therefore their improvement of it for certain applications). However, given the scope of the paper in general, this is a minor point.\"}", "{\"reply\": \"See the reply for Anonymous 9dec -- we addressed also your remarks in a joint response.\"}" ] }
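To make the group-sparse CMF discussion above more concrete, here is a minimal MAP-flavoured sketch for two fully observed Gaussian matrices that share a "user" entity: alternating ridge updates for the embeddings plus a crude ARD-style update of per-(entity set, component) precisions, which is the mechanism that lets individual components be switched off and thereby become private or shared. All sizes, the synthetic data, and the simplified precision update are assumptions for illustration; this is not the authors' variational algorithm, and it ignores missing data and non-Gaussian likelihoods.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, n_feats, K = 100, 80, 30, 10
Y1 = rng.normal(size=(n_users, n_items))   # user-item matrix (toy data)
Y2 = rng.normal(size=(n_users, n_feats))   # user-feature matrix (toy data)

U = rng.normal(scale=0.1, size=(n_users, K))   # user embeddings, shared by Y1 and Y2
V = rng.normal(scale=0.1, size=(n_items, K))   # item embeddings (Y1 only)
W = rng.normal(scale=0.1, size=(n_feats, K))   # feature embeddings (Y2 only)
alpha_U, alpha_V, alpha_W = np.ones(K), np.ones(K), np.ones(K)

def ridge_update(targets, factors, alpha):
    # argmin_X  sum_m ||Y_m - X F_m^T||^2 + sum_k alpha_k ||x_k||^2  (closed form)
    A = sum(F.T @ F for F in factors) + np.diag(alpha)
    B = sum(Y @ F for Y, F in zip(targets, factors))
    return np.linalg.solve(A, B.T).T

for _ in range(50):
    U = ridge_update([Y1, Y2], [V, W], alpha_U)
    V = ridge_update([Y1.T], [U], alpha_V)
    W = ridge_update([Y2.T], [U], alpha_W)
    # crude ARD-style precision updates: components a given entity set does not
    # use get a large precision and are driven towards zero for that set
    alpha_U = n_users / (np.sum(U**2, axis=0) + 1e-6)
    alpha_V = n_items / (np.sum(V**2, axis=0) + 1e-6)
    alpha_W = n_feats / (np.sum(W**2, axis=0) + 1e-6)

print("per-component energy of V:", np.round(np.sum(V**2, axis=0), 3))
```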
EXqiEZhNias13
Continuous Learning: Engineering Super Features With Feature Algebras
[ "Michael Tetelman" ]
In this paper we consider the problem of searching a space of predictive models for a given training data set. We propose an iterative procedure for deriving a sequence of improving models and a corresponding sequence of sets of non-linear features on the original input space. After a finite number of iterations $N$, the non-linear features become $2^N$-degree polynomials on the original space. We show that in the limit of an infinite number of iterations the derived non-linear features must form an algebra, so that for any given input point the product of two features is a linear combination of features from the same feature space. Due to the convexity of each iteration and its ability to fall back to the solutions found in the previous iteration, the models in the sequence have increasing likelihood with each iteration, while the dimensionality of each model's parameter space is kept at a limited, controlled value.
[ "features", "continuous learning", "engineering super", "sequence", "models", "iterations", "iteration", "feature", "feature algebras", "problem" ]
submitted, no decision
https://openreview.net/pdf?id=EXqiEZhNias13
https://openreview.net/forum?id=EXqiEZhNias13
ICLR.cc/2014/conference
2014
{ "note_id": [ "gOfXgzVJmQgc2", "33wYiE_DWa1DA", "XrvOXmJEOQXE3", "7OTyOhG2uU7l6", "FmlK4JzwtCFMj", "n7hyv-DWypM1K", "kcguUL7qOvexy", "48VR5H0ILApmy", "f2gY2I9y3V2Jt", "ytLTwU0sTRwkv", "Auc96IjxzaAB5" ], "note_type": [ "review", "comment", "review", "comment", "review", "comment", "comment", "comment", "review", "comment", "comment" ], "note_created": [ 1391496960000, 1392269280000, 1391858760000, 1393370460000, 1391811180000, 1393474440000, 1392269880000, 1392269760000, 1392770400000, 1393501440000, 1393351380000 ], "note_signatures": [ [ "anonymous reviewer 93f6" ], [ "Michael Tetelman" ], [ "anonymous reviewer b2f7" ], [ "Michael Tetelman" ], [ "anonymous reviewer 1483" ], [ "Olivier Delalleau" ], [ "Michael Tetelman" ], [ "Michael Tetelman" ], [ "Michael Tetelman" ], [ "Michael Tetelman" ], [ "Olivier Delalleau" ] ], "structured_content_str": [ "{\"title\": \"review of Continuous Learning: Engineering Super Features With Feature Algebras\", \"review\": \"The idea presented in this paper consists in iteratively building new features\\nfrom products of so-called 'super features', obtained from a PCA in parameter\\nspace of multiple models trained on a subset of the data. The theoretical\\nanalysis is done with logistic regression, and no experiments are performed.\\n\\nIn its current form, this paper is definitely not ready for publication. The\\nfact that there is no experiment is already a pretty big downside, and the\\ntheory does not seem complete enough to me to justify it. In particular:\\n- There is no proof or even intuition for the convergence of the procedure. To\\n be fair, there is no claim that it will converge either, but what is the\\n point of Section 5 if this is not the case?\\n- The abstract mentions a guarantee of increasing likelihood which is not shown \\n in the paper.\\n- There is no analysis of the complexity of the algorithm (it seems costly, \\n especially since each iteration involves sampling multiple datasets).\\n- There is no discussion of extensions beyond logistic regression.\\n- There is no justification for using PCA in parameter space (which works best \\n with Gaussian-like unimodal distributions, but may fail in other situations:\\n how can we tell it makes sense for a specific application / model?)\\n\\nThat being said, this is an intriguing idea, and I am hoping the author will be able to investigate it more thoroughly so as to be able to present it in a more\\nfleshed out paper in the future.\\n\\nThere are several points that were unclear to me and may be worth mentioning in\", \"addition_to_the_above_high_level_comments\": \"- The equation below eq. 1 does not seem to be used anywhere.\\n- 'Values of regularization parameters must be found by maximizing Bayesian \\n integral in Equation 1.' => this integral is the likelihood -- optimizing\\n regularization parameters to maximize the likelihood seems wrong to me.\\n- '[eq. 1] is estimated by maximum likelihood approximation': I do not see how \\n MLA is a good approximation here.\\n- eq. 5 seems arbitrary to me, especially since the datasets are of different\\n sizes, so each L_w_s involves a different number of terms.\\n- It is not mentioned exactly how the 'sample data' step works (for instance in\\n bootstrap one typically samples t_max points with replacement, but it does\\n not seem to be the case here).\\n- I do not understand how G_alpha,beta can be non-zero in eq. 16, and still\\n lead to a linear combination.\\n- Not clear what are func and F in eq. 
19.\"}", "{\"reply\": \"I would like to thank the reviewer for the comments. My answers are given in the same order as reviewer\\u2019s items.\\n\\n1. I agree that not presenting experiments in the paper is a big downside. I will present experiments somewhere else.\\n2. I agree that convergence requires a special investigation. However, for a limited training data set and with constraints on number of parameters (PCA) I believe it it is reasonable to expect that the iterative process described in paper will converge to polynomials of some finite degree.\\nThis argument seems to be sufficient for the purpose of the paper. The formal proof will be published somewhere else.\\n3. Because each iteration consists of solving a series of convex problems that contain all previous solutions, when parameters for products of features are zero, new solutions must have higher likelihoods or be the same as already found solutions (excluding some degenerate cases when multiple solutions have same likelihood).\\n4. Cost of the algorithms is contained because \\na. for each iteration most of samples of solutions could be obtained from small subsets of training data - number of reads of training data is expected to have a highest cost;\\nb. number of iterations is limited because after N iterations features become 2^N degree polynomials, which is reasonable to be limited - please see the convergence argument above.\\n5. The ideas presented in the paper are not based on any specifics of logistic regression. \\nThe only important property is that models are presented by probabilities that are predefined functions of scalar products of parameters and features.\\nThe variety of model that has this property is much bigger set than a logistic regression case.\\n6. PCA is selected as a simple and reliable method of dimensionality reduction. Other methods could be used as well. \\n7. Regularization parameters _could_ be found by maximizing Bayes integral in Eq.1, because it is not just a likelihood - it is a probability of data for a given model and regularization parameters, that contains a _normalization_ _factor_ from prior probability of parameters (that is important!). That normalization factor cannot be estimated by MLA-type method and must be computed exactly or with high enough accuracy for any possible values of regularization parameters.\\nThen, maximizing probability of data by regularization parameters will produce regularization solutions that are equivalent to regularization parameters found by maximizing likelihood of a validation set.\\n8. The iterative procedure in paper does not depend on finding best parameter-solutions. It depends on obtaining samples in parameter space with high likelihood. MLA is a reasonable method to get these samples. There is no need for iterative process to have very good estimates for Bayesian integrals in Eq.1.\\n9. Eq.5 is an approximation. It is selected due its simplicity for practical computations. It may need a revision.\\n10. Sampling with replacement is a reasonable approach. The whole idea of iterative approach in the paper is to find a set of features that minimally dependent on selection of training set and representative for any sampled set from available training data.\\n11. 
I will clarify the equation for the algebra in a revised version of the paper.\", \"that_will_look_like_the_following\": \"\", \"iterative_process_will_converge_when_set_of_super_features\": \"F_a(x), a=1..NumFeatures, with bias super-feature F_0(x) will satisfy algebra equations:\\n\\nF_a(x) F_b(x) = sum_d C_a,b,d F_d(x) + F_0(x)G_a,b , where C_a,b,d and G_a,b are constants in regard to x.\\n\\n12. Eq.16 shows a property of feature algebra space: any function on feature space that could be represented by power series due to algebra is a linear function of features.\"}", "{\"title\": \"review of Continuous Learning: Engineering Super Features With Feature Algebras\", \"review\": \"> - A brief summary of the paper's contributions, in the context of prior work.\\n\\nThe way we represent data -- the features we use -- has proven essential to a wide variety of tasks in machine learning. Often, the trivial features we naturally get from the data perform poorly. One approach is to hand-engineer features, having humans carefully construct features based on their understanding of the data and the task. Another approach, taken in deep learning, is to learn features as part of the optimization process of a multi-layer model. This paper proposes an alternative approach in which 'super-features' are generated by an iterative exploration process.\\n\\nAs the reviewer understands it, this iterative process begins with random sigmoidial features. Features are evaluated on several subsets of the dataset and good features are selected. Principal component analysis is performed on the parameters of the good features in order to define a lower-dimensional space of good features. The next iteration proceeds on products of the principal components discovered in the previous step. In the limit, the discovered features form a space with nice algebraic properties.\\n\\n> - An assessment of novelty and quality.\\n\\nThe reviewer is not an expert but to the best of their knowledge this approach is novel.\", \"the_reviewer_is_concerned_that_this_paper_may_be_a_bit_rushed\": \"it suffers from many grammatical errors, and some parts of the paper are hard to follow. The reviewer thinks that a bit of further revision would greatly benefit this paper.\\n\\n> - A list of pros and cons (reasons to accept/reject)\", \"pros\": [\"The paper introduces some interesting, novel ideas. In particular, attempting to do dimensionality reduction in parameter space seems like a really interesting strategy. (I'm concerned that PCA may not be a very good way to do this, but the idea is very interesting.)\"], \"cons\": [\"The paper does not do any experiments to test the efficacy of the proposed approach. Given how radically different the proposed approach is from things that have been previously tried, it seems really difficult to have any significant confidence in it without experimental results. (Not having experimental results for this would seem more appropriate if this were a workshop track submission.)\", \"The paper suffers from many grammatical issues. 
These really need to be fixed in a final version of this paper.\", \"The paper is quite hard to follow at some points.\", \"Testing this approach on a standard dataset would also provide an opportunity for the author to walk the reader through a concrete application of this approach, in addition to providing experimental validation of it.\"]}", "{\"reply\": \"Please see the latest version of the paper, where I added some clarifications about finding the optimal regularization and the regularized solution w*. This is not a major result of the paper and I will publish detailed analysis of the regularization method in a separate publication somewhere else.\\n\\nHere I am going to give a few points on the subject.\\n1. P(y|x,w) is a model definition. For example, linear regression, logistic regression or some other probability distribution that depends on data (y,x) and parameters w.\\n2. P0(w|r) is a prior probability distribution of parameters that depends also on regularization parameters r. It does not depend on data.\\nFor example, for L2 regularization P0(w|r)=exp(-r*R(w))/NormFactor(r),\\nwhere R(w)=sum(w^2) and NormFactor(r)=Integral_by_w[exp(-r*R(w))].\\n3. Log-likelihood depends on both w and r, it is given by following expression \\nL(w,r) = sum_by_training_set[ln(P(y|x,w))]-r*R(w)-ln(NormFactor(r));\\nThen, MLA solution is obtained by maximizing L(w,r) by w with fixed r. As stated in the paper, the optimal regularization r and the corresponding solution w* is found by maximizing L(w,r) w.r.t. both w and r.\\n4. The solution found in the approach above in (3) will avoid over-fitting. Also, it is possible to show that this method is equivalent to finding the optimal regularization by using the cross-validation, which is maximizing a likelihood of validation data set by r on solutions w(r) obtained by maximizing the likelihood in (3) above by w at a given r.\"}", "{\"title\": \"review of Continuous Learning: Engineering Super Features With Feature Algebras\", \"review\": \"This paper presents an iterative way of defining increasingly complex feature spaces for prediction model selection.\\nIt is argued that in the limit, this sequence of features results in a feature algebra, meaning that products of features are linear combinations of features within the feature space. \\n\\nThe mathematical ideas could be interested if explained in more clarity. \\nThe nonlinear super features look like an interesting way of reasoning of feature compositions, however, \\n\\n* I am not sure about the usefulness of the construction. \\n\\n* Version 1 of the paper is quite unpolished.\", \"minor\": \"In eq. (4) `maxarg' should be `argmax'. \\nIn eq. (9) use `langle' and `\\nangle' instead of `<' and `>'. \\nEq. (6) seems not to be used in Section 3.\"}", "{\"reply\": \"It seems to me that in your example arbitrarily large values of your criterion (in #3) could be obtained with w=0 and r going to infinity. But I agree this is a minor point and it's not worth spending more time discussing it...\"}", "{\"reply\": \"I would like to thank the reviewer for commenting the paper.\\n\\n1. I will publish corrected version of the paper that will address grammatical and other issues asap.\\n2. I agree that experiments must be done, however, at this point it is not within scope of the paper and will be published somewhere else.\\n3. I agree that PCA is not the best way to reduce dimension of parameter space and some other method is worth to consider. 
However, PCA may work for many cases when there is a well defined global maximum of likelihood in parameter space.\"}", "{\"reply\": \"I would like to thank the reviewer for commenting the paper. My answers are given in same order as reviewers notes.\\n\\n1. I believe the method for constructing features that form an algebra could be useful for finding computable efficient representations for a given data, especially for images and other data that have natural symmetries.\\n\\n2. I will publish corrected version of the paper that will address grammatical and other issues asap.\"}", "{\"review\": \"The revised version of the paper is available in arxiv: at http://arxiv.org/abs/1312.5398\"}", "{\"reply\": \"To correctly consider r going to infinity you have to look at the contribution to integral from the point w=0. For example for L2 case P0(w=0|r) ->inf when r->inf, nevertheless Integral_by_w[P0(w|r)]=1, because single point w=0 does not contribute to the integral.\\n\\nSame is true for the Bayesian integral with training data. For the case of L2 regularization, simply rescale w->w/sqrt(r), then P0(w|r)dw->P0(v|1)dv will not depend on r at all and the whole likelihood cannot be arbitrary large when r goes to infinity. - We can always rescale w because w is an integration variable.\"}", "{\"reply\": \"Thanks for the clarifications. Just one comment on point 7: maybe I'm reading it wrong (and the paper should be clearer), but my understanding of eq.1 is that P(w) is the prior on parameters. The regularization parameters define this P(w), and if you optimize eq.1 w.r.t. P, you could very easily overfit (e.g. concentrating all probability mass on the w* that maximizes the likelihood of training data).\"}" ] }
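For readability, the regularization objective spelled out in points 2 and 3 of the author's reply above (the L2 case) can be typeset as follows; this is only a restatement of the formulas already given in the thread, with $Z(r)$ standing for the prior's normalization factor, NormFactor(r).

```latex
% Restatement of the objective from the reply above (L2 regularizer R(w) = sum_d w_d^2).
\begin{align*}
P_0(w \mid r) &= \frac{\exp\{-r\,R(w)\}}{Z(r)},
&
Z(r) &= \int \exp\{-r\,R(w)\}\,\mathrm{d}w,
\\
L(w, r) &= \sum_{(x,y)\in\mathcal{D}} \ln P(y \mid x, w) \;-\; r\,R(w) \;-\; \ln Z(r),
&
(w^{*}, r^{*}) &= \operatorname*{arg\,max}_{w,\,r}\; L(w, r).
\end{align*}
```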
8KokDTctkA8e4
Learning generative models with visual attention
[ "Charlie Tang", "Nitish Srivastava", "Ruslan Salakhutdinov" ]
Attention has long been proposed by psychologists as important for effectively dealing with the enormous amount of sensory input that reaches the neocortex. Inspired by visual attention models in computational neuroscience and by the need for deep generative models to learn on object-centric data, we describe a framework for generative learning using attentional mechanisms. The attentional mechanism propagates signals from a region of interest in a scene to higher-layer areas of canonical representation, where generative modeling takes place. By ignoring background clutter, the generative model can concentrate its resources on modeling objects of interest. Our model is a proper graphical model in which the 2D similarity transformation from computer vision is part of the top-down process. A ConvNet is used to provide good initial guesses during posterior inference, which is based on Hamiltonian Monte Carlo. After learning on face images, we demonstrate that our model can robustly attend to the face regions of novel test subjects. Most importantly, our model can learn generative models of new faces from a novel dataset of large images in which the locations of the faces are not known.
[ "generative models", "model", "visual attention", "visual attention attention", "psychologists", "important", "neocortex", "visual attention models", "computational neuroscience" ]
submitted, no decision
https://openreview.net/pdf?id=8KokDTctkA8e4
https://openreview.net/forum?id=8KokDTctkA8e4
ICLR.cc/2014/conference
2014
{ "note_id": [ "tiYHiyBGHPtuf", "ufIDfM96DXGE3", "Fe6EF0Bh0rb1D", "waohvbXH4ZvO1", "e9ZBYBHFViYsc", "fRoBOpqe7dfVP", "VtHoVZBCozVLx" ], "note_type": [ "review", "review", "comment", "review", "comment", "review", "review" ], "note_created": [ 1391872080000, 1392797040000, 1392796800000, 1391466600000, 1392796620000, 1391560620000, 1395068820000 ], "note_signatures": [ [ "anonymous reviewer 61e2" ], [ "Charlie Tang" ], [ "Charlie Tang" ], [ "anonymous reviewer de65" ], [ "Charlie Tang" ], [ "anonymous reviewer eb8f" ], [ "Nando de Freitas" ] ], "structured_content_str": [ "{\"title\": \"review of Learning generative models with visual attention\", \"review\": \"The proposed solution is a conditional model (conditioned on a large input image I), which augments the typical RBM with two additional sets of random variables: transformations parameters $u$ and a 'steerable' visible layer $x$ (a patch of I, whose location is defined by $u$). An additional term in the energy function ensures that low-energy configurations are given to settings of $u$, for which $x$ is close to $v$ in an L2 sense. Inferring the transformation parameters in this model is thus analogous to the task of objection detection. To shortcut this difficult optimization process the authors propose to use Hybrid Monte Carlo (HMC), but where the states $u$ are initialized via a convolutional neural network (CNN).\", \"i_find_the_general_idea_behind_the_paper_compelling\": \"the problems of attention and scaling up of generative models are important ones which warrant further attention. While a solid step in this direction, the experimental section does not convincingly show evidence in favor of the model, in particular, it lacks proper baselines.\\n\\nSection 6.1: How do state of the art object detection / tracking algorithms do on this task ? Normalized cross-correlation and L2 template matching do not seem like appropriate baselines.\\n\\nSection 6.2 aims to show that the 'novelty of our model is that it can learn on new large images of faces without label information'. Unfortunately, the authors provide little evidence in favor of this. After training a DBN and the approximate inference module on Caltech, the authors report a successfull detection rate of 951/1418, unfortunately without any context or comparisons. How does this compare against a standard Viola-Jones detector ? How would the model perform by simply clamping 'v' to the average face image ? I would not expect the proposed generative model to outperform a dedicated face detection module, but the baseline seems necessary to provide context. Furthermore, how are the samples in Fig. 6b evidence that the DBN learnt the Multie-PIE dataset ? Are the samples 'more' qualitatively similar to Multi-PIE than Caltech ? Without being familiar with Multi-Pie, this is not at all obvious. Samples in 6b appear similar (but worse) to 6a. A quantitative analysis seems necessary.\\n\\nThe experiments of Section 6.3 is interesting at a high-level, but its execution seems flawed. It appears that the authors clamped 'v' to the very patch (i.e. cropped face image from the test set) which they intend to detect (?). This seems like an all too trivial task. To highlight the benefits of having a proper generative model, v could have been clamped to e.g. the face of a different individual but of the same sex as the 'target'. 
A more convincing application might be with a classification RBM whereby clamping label units results in 'attending' to the corresponding areas of the image.\\n\\nThat being said, I do look forward to the next revision of the paper.\", \"other_points\": [\"A threshold parameter is used for detection in Section 6.2. How was this parameter chosen ? Precision/Recall seems like the only relevant measure here.\", \"I found Section 3 to be particularly confusing, in large part due to the notation which obfuscates the relationship between $x$ and $u$. From the energy function, one could not be blamed for thinking that the model is a simple 3-layer DBM (with constant term f(u), to be rolled into the partition function).\", \"I also found the description of the transformation parameters and warp w a bit confusing. For instance, 'used to rotate, scale [...] the canonical v' is a bit misleading, as v never actually undergoes any transformation. At a high-level, it seems simpler to think of u as selecting a patch x of I (where the probability of u is proportional to the L2 distance between x and v) ? One possible suggestion to help with clarity would be to delay the description of the warp transform to Section 4, and describe the model in terms of a generic transform T(I, u) ?\", \"Overloaded notation for p (both pixel coordinate and momentum variable)\", \"Contradictory statements about HMC and local minima.\", \"lots of typos\"], \"pros\": [\"novelty of model and potential for applications\", \"approximate inference scheme (which continues the trend of using function approximation for approximate inference)\"], \"cons\": [\"weak experimental evidence (lack of proper baselines)\", \"clarity of presentation\"]}", "{\"review\": \"We thank the reviewer for the thoughtful review, and we look forward to improving the manuscript in the future version.\\n\\nThe main motivation from this paper is not using ConvNet as a STOA object detector, but to allow for learning generative models of objects of interest in large unlabelled images. A good face detector would probably do just as well on localizing faces. However, our attentional inference process is a function of v, meaning that we can attend to whatever the top-down generative model 'has in mind'. Section 6.3 demonstrates that given the same input image but with different v, our model attends to the correct subject. \\n\\nWhile an off-the-shelf VJ face detector would do very well, it is unclear how a face detector would perform if it were only trained on the Caltech faces (450 faces). Our model performs slightly worse when clamping 'v' to be the mean face throughout the inference process. Note that we are already initializing v with the mean face.\\n\\nQualitatively, the samples in Fig. 6b do look more like the CMU multipie faces while samples in Fig. 6a look more like faces from the Caltech faces dataset. \\n\\nSection 6.3: It is not a trivial task because the ConvNet takes as its input v and the big image and then predict the next gaze. The ConvNet is learned to shift either left or right depending on what v is. This is not the same as a template matcher with v as the template. We agree that having the 'same-sex' targets would be an interesting experiment.\\n\\nThe threshold was chosen by hand, high enough to have a small number of false positives, and low enough to have reasonable recall (951/1418). This threshold was used to filter out falsely localized gazes on the novel dataset. 
\\n\\nIt is not a simple 3 layer DBM (at the top of page 4) because x is a function of u. It could be thought of as a 3 layer DBM with high-order multiplicative interactions between u, v, and x.\\n\\nPlease see the comment to Reviewer eb85 regarding the overloaded notation and the contradictory statements about HMC and local minima.\"}", "{\"reply\": \"We thank the reviewer for the thoughtful review, and we look forward to improving\\nthe manuscript in the future version.\\n\\nThe proposed model in this paper is a promising machine learning model because it allows for learning generative models of objects of interest in large images, which has not been done previously as far as we know. Specifically, without attention, it is simply computationally too expensive to learn good generative models based on large images.\\n\\nThe reviewer is correct in pointing out that we did not want 'gaze' to be taken literally. The model is only inspired by [18], while the main focus is to allow efficient learning of generative models of objects of interest given large unlabelled training images.\\n\\nIt is regrettable that p was used twice, we will correct this. The reviewer is also correct that we glossed over the description of the inference procedure in DBNs due to space limitations. We used approximate inference in the DBN by using only the first-layer RBM to infer h1, followed by using the 2nd layer RBM to infer h2. This is the same inference procedure used for the greedy layer-wise stacking of RBMs.\\n\\nWe used a standard ConvNet architecture with C layers followed by max-pooling S layers. First C layer had 16 5x5 filters looking at 72x72 image using ReLU hidden units followed by max-pooling and another round of a C layer and a max-pooling layer. This resulted in 16 filter maps with dimension of 20x20. A separate stream for the smaller canonical view used a C layer with 16 5x5 filters operating on the canonical 24x24 face patches, giving 16 filter maps with dimension of 20x20. The two sets of filter maps are combined by element-wise multiplication. Two fully connected layers followed with 1024 ReLU hidden units, followed by the final output of 4 predictions.\\n\\nIteratively predicting transformations is novel w.r.t. training generative models, especially for converging to the correct pose of the object of interest. \\n\\nHowever, there are methods in literature for shifting to areas of interest in the context of object detection, e.g. [R1].\\n\\nHMC is used for fine-tuning the window position and making our inference probabilistically correct. The bulk of the work is performed by the ConvNet. Using HMC alone for inference in this paper will get stuck in local optima. The sentence 'Resampling the momentum variable ... momentum to jump out of local minima' is regarding the general theory of why momentum variables are resampled in the HMC algorithm. We will clarify this.\\n\\nSection 6.1: yes the full model is run here: The DBN had two layers with 1024 hidden units for the first layer and 200 hidden units for the 2nd layer. We do not know the STOA performance for this dataset, but it is considered an easy dataset for face detection. We want to emphasize that our framework is not aimed at achieving STOA for object detection. Our aim is to allow for learning generative models of objects of interest in large unlabelled images. Depending on what the generative model has in mind in v, no new detector for v needs to be trained. This is demonstrated in the 'Ambiguity' experimental section. 
Note that u is initialized to be centered with a random jitter of 30 pixels and randomly scaled from 0.5 to 1.5 the size (see footnote 1). \\n\\nWhile images have faces roughly centered, this is not cheating as can be seen from the initial yellow box in figure 4, left panel.\\n\\nThe DBN, modeling the Caltech faces, is finetuned with the additional CMU multipie faces. Learning was performed with greedy layer-wise training using FPCD. Fig. 6 qualitatively shows that additional faces are learned from the CMU dataset.\\n\\nWhen combining two RBMs (in a specific way) you can either have a DBN or a DBM, depending on whether you want to interpret the resulting model as a directed or an undirected model.\", \"figure_4\": \"the artifact comes from mirroring the pixels at the borders to pad the image.\\n\\n[R1] Searching for objects driven by context. Alexe et al. NIPS 2012.\"}", "{\"title\": \"review of Learning generative models with visual attention\", \"review\": \"Summary\\nThis paper proposed a probabilistic framework for learning generative models where the 2D Similarity transformation is a part of the top-down process and a ConvNet is used to initialize the posterior inference. \\nThough this paper is clearly interesting and important for both deep learning and computer vision community, it could potentially be improved. The main problem is that the current submission does not include all of necessary details (please see the below), especially for the experimental section. \\n\\nPros\\n-- well-written and organized\\n-- an unified probabilistic graphical model framework for generation and detection \\n-- interesting experimental results \\nCons\", \"the_current_submission_lacks_of_some_important_details\": \"The details of the ConvNet and its training process;\\n How many steps of Gibbs samples executed in step 3 in fig.3? For a Gaussian RBM trained on face image the mixing of its markov chain could be extremely slow; \\n What do you mean 'training DBN by FPCD' in page 7? Do you mean wake sleep algorithm?\\n How do you determine the threshold for logP(x|u,v)? I believe this threshold could be crucial for learning without gaze labels.\\nThere are not any baseline for comparison in the section of experiments. Which part of the framework is more important? The DBN or the convNet? At least you could try to change DBN to a Gaussian mixture model (and/or change the ConvNet to a randomly guess). \\nIs the Monte Carlo EM algorithm described in section 5 valid? Is there any guarantee? It can be seen in section 6.2 that the E-step could fail to localize faces. How could you prevent the DBN from learning those false faces.\\nTo be honest, I am not convinced that a RBM (or DBN) trained on face images can be such a good generative model that always give higher free energy for non-face images. \\n\\nMinor comments\\nLast paragraph , Page 2: Combining two energy functions of RBMs forms a DBM.\\nFig. 3: What do you mean about the arrows above the ConvNet in step 2 and step 4?\\nSection 6.2: please explain how you do inference in a DBN.\"}", "{\"reply\": \"We thank the reviewer for the thoughtful review, and we look forward to improving\\nthe manuscript in the future version.\\n\\nDue to the limited space, some details were left out. We will try to address them in the next version. The training of ConvNet is standard and did not involve any special 'tricks'. We used SGD with minibatch size of 128 samples. We used a standard ConvNet architecture with C layers followed by max-pooling S layers. 
First C layer had 16 5x5 filters looking at the 72x72 image using ReLU hidden units followed by max-pooling and another round of a C layer and a max-pooling layer. This resulted in 16 filter maps with dimension of 20x20. A separate stream for the smaller canonical view used a C layer with 16 5x5 filters operating on the canonical 24x24 face patches, giving 16 filter maps with dimension of 20x20. The two sets of filter maps are combined by element-wise multiplication. Two fully connected layers followed with 1024 ReLU hidden units, followed by the final output of 4 predictions. \\n\\nIn step 3 of fig. 3, one alternating Gibbs update is performed. \\n\\nWhile learning a Gaussian RBM model might be slow, step 3 in fig. 3 performs inference after learning. In fact, 50 Gibbs steps would actually adversely affect inference, since the Markov chain would drift to a different face as it samples from the distribution over all of the faces that the GRBM model has learned.\", \"page_7\": \"we used Fast PCD to train each layer of the DBN model in a greedy layer-wise fashion. No finetuning of the entire network was performed.\\n\\nThe threshold was chosen by hand, high enough for a small number of false positives, and low enough to have reasonable recall (951/1418). This threshold was used to filter out falsely localized gazes on the novel dataset. \\n\\nThe ConvNet part of the framework is more important than the generative model. Choosing GMMs instead of DBNs would perform equally well. The novelty of our framework is that it allows for learning generative models of objects of interest in large images.\\n\\nThe EM algorithm in section 5 is valid as long as the approximate posterior is unbiased. We can not guarantee that our localization procedure will always work. The role of the threshold is to not learn on false faces if we fail to localize.\\n\\nLast paragraph of page 7 states that $ log p(x(u)|u,v)$ (a Gaussian distribution) is thresholded, which is a function of $ v $. Essentially, we compare Euclidean distance between a generic face $ v $ and the localized image patch.\", \"addressing_your_minor_comments\": \"\", \"bottom_of_page_2\": \"when combining two RBMs you can either have a DBN or a DBM, depending on if you want to interpret the resulting model as a hybrid directed/undirected (DBN) or a fully undirected model (DBM).\\n\\nIn fig 3, the ConvNet takes two inputs: a small cropped face (which is above it) as well as a larger image (below it) and outputs a prediction that is [x, y, r, s].\\n\\nIn section 6.2, we perform approximate inference in the DBN by using the first-layer RBM to infer h1, followed by using the 2nd layer RBM to infer h2. This is the same inference procedure used for the greedy layer-wise stacking of RBMs.\"}", "{\"title\": \"review of Learning generative models with visual attention\", \"review\": \"The authors present an attentional generative model, inspired by the routing circuits model of Olshausen et al. (ref. [14]). Similar to some earlier work, attentional aspects result from the model essentially being misspecified to only represent a single object, treating everything else in the image as noise implicitly (besides the cited [18], Chikkerur et al, 2009, is another relevant example). The model consists of a Deep Belief Net (DBN) that models individual objects (here: face patches) in a canonical reference frame, and a transformation operation that scales/rotates/translates the object to be positioned in a larger image. 
Inference iteratively updates both the internal representations and hypothesized positioning in the image, and is based on Hamiltonian Monte Carlo (HMC) as well as a convolutional net (ConvNet), which makes initial guesses.\", \"pros\": \"I find the topic interesting, and the approach original and creative.\", \"cons\": \"The paper suffers from quality and clarity issues. Moreover, this is not primarily a biological model, so the question is whether the proposed approach is promising for the machine learning application. As with other attention models, I find the evidence lacking that attention is really needed here to solve the task or that the approach works better than other, perhaps simpler alternatives.\", \"details\": \"First of all, the paper has a number of typos and grammar issues that should be fixed (three examples right in the abstract: 'enormous sensory stimulus available in the neocortex' is an awkward formulation; 'we describe for [a?] generative learning framework', 'signals from [a] region of interest').\\n\\nI understand that biological attention this is not the primary focus of the paper, but I would have enjoyed a bit more discussion of the notion that attention corresponds to transforming input into an object centered frame, other than citing ref [14]. As far as I know, this is a speculative proposal and it would be nice to discuss whether there is recent evidence or theories supporting it. Similarly, the authors could have explained better how their framework is to be interpreted in terms of biological attention, if at all. For example, they appear to contrast their model to the 'covert' one of [18], and similarly the term 'gaze variables' suggests an interpretation in terms of overt eye movements. But note that the Olshausen model was supposed to model covert processes, i.e. internal routing of information with fixed retinal input. I also take it that we're not supposed to take the 'gaze' interpretation too literally, seeing as arbitrary scaling and rotations are involved...\\n\\nThe description of the model could be made a bit clearer. For example, the full graphical model could be displayed in Fig 1, rather than 'hiding' parts in the black box. {p} appears to be introduced twice (Section 3, paragraphs 3 and 5), and later p is used as momentum variable in HMC. I also get the impression that some parts of the model are not explained perhaps because they further complicate the model. In particular, the figure and detailed description only cover inference in a RBM. However, a DBN is what is actually used. Presumably the the hybrid directed/undirected DBN further complicates inference? \\n\\nSimilarly, the architecture and training parameters of the ConvNet are not described at all, even though the latter seems to be what does the main work when it comes to localizing the faces. Speaking of which, I found iteratively predicting the transformations with the ConvNet interesting--it would be good if the authors could comment on whether this is a novel contribution or whether there are related approaches. I'm less convinced by the performance of the HMC, which mostly seems to fine-tune the window position. Note that the authors motivate the HMC in Section 3 by writing that it helps 'with jumping out of local maximum' [sic], but then justify the ConvNet in Section 4 by writing that HMC tends to get stuck in local optima...\\n\\nAlso Section 6.1, is the full model run (including the DBN) here? If so, what are the model parameters? 
The detection performance is only compared to two, presumably baseline methods. What is state-of-the-art on this dataset? Also, footnote 1 says that u was initialized to be centered. Depending on the dataset, isn't that potentially cheating?\\n\\nSection 6.2: the authors state that the novelty of their approach is demonstrated here. The authors essentially first train their model with labels, and then use this first model as a face detector on a second, unlabeled dataset to localize the faces and train a second DBN. Here it was not clear to me whether they train the second DBN from scratch or merely fine-tune the first DBN pretrained with labels (they say they train the second DBN with FPCD. FPCD is for RBMs, so are they referring to layer-wise pretraining?). Either way, the main issue here is that the authors don't actually quantify the performance of the second model, so it is not clear if anything is gained with this approach, over just taking the first model that was only trained on the labeled data.\\n\\nLastly, I found Section 6.2, 'Inference with ambiguity', the most interesting bit, as it actually demonstrates the interaction between the canonical face model and the localization. Unfortunately, this part is very short. Perhaps the authors can expand this in future work.\", \"leftovers\": [\"Section 2: 'If a second level RBM is used to model the activities of the hidden units of the first layer GRBM, we can combine their energy functions to form a Deep Belief Net (DBN) [2]'. Doesn't simply combining the energies give a deep Boltzmann machine rather than a DBN?\", \"Section 4.1: 'the spatial frequency of the natural image signals can form many local minima': this should be expressed better.\", \"Section 4.1: '(e.g. 72\\u00d772) and v (e.g. 24\\u00d724)' here and elsewhere: why the 'e.g.'? Just say that this is what you are using.\", \"Figure 3: the caption should clarify that this is not the full inference process, only the initial part.\", \"Figure 4: there seem to be artifacts in the images?\", \"Figure 5: the fonts are too small to be readable.\"]}", "{\"review\": \"If visual attention with deep nets interests you, see also:\\n\\nLearning where to attend with deep architectures for image tracking\\nMisha Denil\\u201a Loris Bazzani\\u201a Hugo Larochelle and Nando de Freitas\\nIn Neural Computation. Vol. 24. No. 8. Pages 2151\\u20132184. 2012.\\nDOI (10.1162/NECO_a_00312)\"}" ] }
qqUMpzcNswxen
Deep Convolutional Ranking for Multilabel Image Annotation
[ "Sergey Ioffe", "Alexander Toshev", "Yangqing Jia", "Thomas Leung", "Yunchao Gong" ]
Multilabel image annotation is one of the most important challenges in computer vision, with many real-world applications. While existing work usually uses conventional visual features for multilabel annotation, recent deep convolutional features show potential to significantly boost performance. In this work, we propose to leverage the advantage of such features and analyze the key components that lead to better performance. Specifically, we show that a significant performance gain can be obtained by combining convolutional architectures with an approximate top-$k$ ranking objective function, as such objectives naturally fit the multilabel tagging problem. Our method outperforms conventional visual features by about $10%$ on the publicly available NUS-WIDE dataset, obtaining the best reported performance in the literature.
[ "work", "conventional visual features", "deep convolutional ranking", "important challenges", "computer vision", "many", "applications", "multilabel annotation" ]
submitted, no decision
https://openreview.net/pdf?id=qqUMpzcNswxen
https://openreview.net/forum?id=qqUMpzcNswxen
ICLR.cc/2014/conference
2014
{ "note_id": [ "2oaEotjZYbogy", "H1MIH9BzIJ1qm", "kkIykb7My3s0n", "nqJonBRKZmikW", "BHP9Bpdr8SHE-", "ddxgh_GILedj-", "U_9J_msdzlEuM" ], "note_type": [ "comment", "review", "review", "review", "comment", "comment", "review" ], "note_created": [ 1392622080000, 1391404320000, 1392110100000, 1391404260000, 1392621300000, 1392621780000, 1390308720000 ], "note_signatures": [ [ "Yunchao Gong" ], [ "anonymous reviewer 3761" ], [ "anonymous reviewer 2486" ], [ "anonymous reviewer 3761" ], [ "Yunchao Gong" ], [ "Yunchao Gong" ], [ "anonymous reviewer 0cae" ] ], "structured_content_str": [ "{\"reply\": \"Thanks for the comments.\\n\\n'Comment 1 about tag dictionary size'\\n\\nThe NUS-WIDE dataset is the largest publicly available annotated multilabel dataset we have access to. We agree with the reviewer that the power of Wsabie is not fully explored for this dictionary size, however, our goal is to show that the weighted ranking formulation can effectively improve multilabel annotation accuracy. The reason we use WARP is because it is easy to implement, and potentially scales well to large dictionary size.\\n\\n\\n'Comment 2 about reuse ImageNet model'\\n\\nAs mentioned in our response to reviewer 2, we have tried to initialize the model with ImageNet pretrained model, and have further obtained around 2% improvement for all methods. However, our goal is to evaluate which loss is the best for multilabel prediction problems, so we directly trained the model from scratch to provide the cleanest experimental setting.\"}", "{\"review\": [\"Are there some labels more important than others, or shouldn\\u2019t we employ taxonomic distances ?\", \"How make model to decide on number of output labels ?\", \"It would be nice to have experiments comparing it to the network pretrained on imagenet.\"]}", "{\"title\": \"review of Deep Convolutional Ranking for Multilabel Image Annotation\", \"review\": \"*Summary*\\nThis paper proposes to use convolutional networks for image annotation. Great care is given to the selection of an appropriate loss function as well as the comparison with reasonable baselines over the NUS/Flickr dataset. The paper reads well, gives enough context and references to related work. It\\nreports improvement with respect to the state of the art. In my opinion, this is a good paper, with the only drawback that the evaluation is conducted over a single dataset, with a vocabulary of only 80\\n tags, which is small compared to realistic application.\\n\\n *Detailed review*\\n I would like to clarify only two points regarding (i) small tag vocabulary in NUS, (ii) relationship with imageNET classification. \\n\\n (i) the vocabulary of NUS/Flickr is only 80 different tags. This is very small compare to web annotation or even personal photo gallery annotation. In particular, the fact that your network has 80 outputs make the evaluation of the output score of every tag for every forward/backward step very\\n inexpensive (compared to evaluating the rest of the network). This is very different from the initial conditions in which the WARP loss was introduced. Loss functions which does not rely on sampling can perfectly be used and might be better. I notably think at T. Joachims, A Support Vector Method\\n for Multivariate Performance Measures, Proceedings of the International Conference on Machine Learning (ICML), 2005 or Ranking with ordered weighted pairwise classification from Usunier et al 2009. 
More fundamentally, I feel that focussing on 80 tags hides most of the interesting challenges in real tasks: reasonable 10k vocabularies implies greater perplexity and therefore require greater performance for the CNN. They also suggest giving greater importance on tag coocurences and language modeling to understand unlikely predictions like ocean and lake tag in the same image from your example.\\n\\n (ii) I appreciate that you highlight the difference between annotation and classification, and that you want a model trained from scratch for fair comparisons (Section 2.1). However, the CNN trained over ImageNET of [20] or subsequent work has spurred hopes for a universal vision machine. If large\\n CNNs trained over 1k and 20k imagenet were available to you, it might be interesting to evaluate how a NUS model initialized from those would perform. This would be an additional result which would not replace the network trained from scratch but rather analyze the reusability of the\\n imageNET network and give perspective on the importance of the imageNET breakthrough.\\n\\n *Comments along the text*\\n Page 2. 'parametric model might not be sufficient to capture the complex distribution of the data' this sentence should be removed. Parametric model can model complex distribution for non linear problems. Use a different wording to introduce that nearest neighbor approaches are competitive.\\n Page 3. 'staircase weight-decay' I am not familiar with this name, which is rather explicit though. You might want to sprincke references over neural net specific terms to allow other ML and core vision people to read your paper. E.g. references after momentum, asynchronous SGD, staircase weight\\n decay might help. \\n Page 3 'posterior probability of an image x_i and class j' the wording is wrong, it should read posterior of class j given image x_i.\\n Page 5 'weight kNN' should read 'weighted kNN'\"}", "{\"title\": \"review of Deep Convolutional Ranking for Multilabel Image Annotation\", \"review\": [\"A brief summary of the paper's contributions, in the context of prior work.\", \"Paper considers several loss functions for multiclass label annotation.\", \"An assessment of novelty and quality.\", \"They have done good job, and ran experiments on the proper, large scale, however work is not very novel (or creative).\", \"A list of pros and cons (reasons to accept/reject).\"], \"pros\": [\"Gives reasonable advice, which loss function use for multi class image annotation.\"], \"cons\": \"\"}", "{\"reply\": \"Thank you for the comments!\\n\\nSince most previous work on this dataset use a smaller subset of this whole dataset (such as NUS-light), or use their own training/testing split, directly comparing the numbers seem to be hard. However, we included a baseline recognition system [11] which is published in IJCV 2013. This baseline is based on a combination of 9 different visual features, and can be considered to be a quite strong baseline.\"}", "{\"reply\": \"'Are there some labels more important than others, or shouldn\\u2019t we employ taxonomic distances'\\n\\nThe standard evaluation protocol described in [25] is used in our work, and we have evaluated different methods by 5 different protocols (which is more comprehensive than [25]). The overall precision and overall recall emphasis on frequent tags, and per-class recall and per-class precision emphasis on infrequent tags. So we believe the evaluation is thorough. 
The point raised by the reviewer is definitely interesting, however we believe it is orthogonal to this paper.\\n\\n\\n'How make model to decide on number of output labels ?'\\n\\nWe follow the standard practice in most previous works [25,14,26] to fix the number of output labels for each image to 3 or 5.\\n\\n\\n 'It would be nice to have experiments comparing it to the network pretrained on imagenet.'\\n\\nWe have tested it before and found using pretrained weights can further improve the performance for around 2% for all methods. However, our goal is to perform a clear comparison between different loss functions for multilabel annotation, and want to use the simplest experimental setting, so we did not includ the pretrained results.\"}", "{\"title\": \"review of Deep Convolutional Ranking for Multilabel Image Annotation\", \"review\": \"This paper proposes to use deep convolutional neural network (DCNN)combined with ranking training criteria to attack the multi-label image annotation problem. DCNN is now widely used in image classification (annotation) problems. Applying it to multi-label image annotation problem is a natural extension of prior arts. The combination of DCNN with the ranking training criteria to solve the multi-label annotation problem, however, is new and is the main contribution of the paper.\\n\\nThe authors evaluated the proposed approach on the NUS-WIDE dataset, which is considered the largest multi-label image dataset available. They compared the proposed approach with baselines that use manually designed image features and showed that the proposed approach outperforms the baseline by 10%. They claim that this is mostly due to the features learned from DCNN. They also compared several different ranking criteria and demonstrated that the weighted Approximate Ranking (WARP) criterion performs the best.\\n\\nWhile their results are not surprising given the recent success of DCNN on image classification tasks, this paper does show a novel usage of the DCNN on the multi-label image annotation problem. \\n\\nIf the paper is to be improved, I would suggest to include published results on the same task as part of the baselines. This allows readers to understand better the position of the proposed approach.\"}" ] }
NNP_NfOK_ENK4
An Architecture for Distinguishing between Predictors and Inhibitors in Reinforcement Learning
[ "Patrick C. Connor", "Thomas P. Trappenberg" ]
Reinforcement learning treats each input, feature, or stimulus as having a positive or negative reward value. Some stimuli, however, negate or inhibit the values of certain other predictors (excitors) when presented with them, but are otherwise neutral. We show that both linear and non-linear value-function approximators assign inhibitory features a strong value with the opposite valence of the predictors they inhibit (i.e., inhibitor = -excitor). In one circumstance, this gives a correct prediction (i.e., excitor + inhibitor = neutral outcome). Importantly, however, value-function approximators incorrectly predict that when the inhibitor is presented alone, a negative or oppositely valenced outcome will follow, whereas the inhibitor alone is actually followed by a neutral outcome. Essentially, we show that having reward value as a direct predictive target can make inhibitors indistinguishable from excitors that predict the oppositely valenced outcome. We show that this problem can be easily avoided if the reinforcement learning problem is broken into 1) a supervised learning module that predicts the positive appearance of primary reinforcements and 2) a reinforcement learning module which sums their agent-defined values.
[ "predictors", "reinforcement", "inhibitor", "architecture", "inhibitors", "values", "excitors", "approximators", "neutral outcome", "valenced outcome" ]
submitted, no decision
https://openreview.net/pdf?id=NNP_NfOK_ENK4
https://openreview.net/forum?id=NNP_NfOK_ENK4
ICLR.cc/2014/conference
2014
{ "note_id": [ "nnW2J9gBITnGA", "UUcR5oVeRuZb4", "TvLXT3OmycZnz", "lQb2ljSA2gMmE", "yyQCyIciZgy_J", "9hrpQySeBbhGc", "kkTqd3NaFyZaP", "YTciTSQEOoj4l", "7hdD9NYlqQhbJ", "bbRiuo3neGbJT" ], "note_type": [ "review", "comment", "comment", "review", "review", "comment", "review", "review", "review", "review" ], "note_created": [ 1392151320000, 1394239920000, 1394239800000, 1389919920000, 1390610280000, 1394240040000, 1391818740000, 1394239680000, 1391931300000, 1391444040000 ], "note_signatures": [ [ "Patrick Connor" ], [ "Patrick Connor" ], [ "Patrick Connor" ], [ "David Krueger" ], [ "Patrick Connor" ], [ "Patrick Connor" ], [ "anonymous reviewer 1a31" ], [ "Patrick Connor" ], [ "anonymous reviewer 9fb5" ], [ "anonymous reviewer 9906" ] ], "structured_content_str": [ "{\"review\": \"A sincere thank you to all of the anonymous reviewers, who have collectively raised several significant issues. While, I believe that some of these issues could be addressed with little impact to the paper, a couple of them would require substantial enough additions that we have chosen to withdraw the paper at this time. Your feedback has been very insightful and is appreciated.\\n\\nTake care,\\n - Patrick Connor (and Thomas Trappenberg)\"}", "{\"reply\": \"Thank you for your feedback. In response to your comments, we offer the following:\\n 1) I am unaware of the similar cases you speak of, where positive and negative contributions are divided and used with standard predictors. Unfortunately, I have likely not responded in time to request references, though if possible, a reference or two would be much appreciated. We are enhancing our toy simulation based on your feedback and that of another reviewer (see response to the third anonymous review below). Based on a suggestion from another reviewer, we will be replacing the MLE of a rectified linear function with a simple neuron having a logistic function output. This will avoid making the derivation, which might distract readers from the main point, which is to highlight a problem we expect reinforcement learners will begin experiencing when providing high-level object representations of the world as input to a VFA where both significantly rewarding and punishing outcomes occur.\\n 2) Your feedback on blocking when using different USs was quite insightful. Based on the reference you provided I also discovered a few other relevant papers. It would seem that there is significant empirical evidence (but see Blaisdell, Denniston, and Miller, L&M 28, 268-279 from 1997) to indeed suggest that changing the US in the second phase of a blocking procedure does not eliminate blocking. The implication is that breaking up a reward prediction into separate predictions of individual rewarding outcomes is not the right thing to do. However, in all of the experiments I found, the different USs within each are of the same valence (either both rewards or both punishments) to avoid counterconditioning effects. A solution that better suits the evidence in these experiments and still makes our point is to break up the reward value prediction into two separate predictors instead of many: a predictor for all rewards and a predictor for all punishments, where each predictor is rectified. This would also seem a little easier to justify biologically, since we no longer need error signals for predictions of individual outcomes but rather only for two -- one for rewards and one for punishments. 
In short, we plan to change our two-stage approach to group rewarding and punishing outcomes in the first stage instead of predicting individual outcomes and merely subtracting the punishment prediction from the reward prediction in the second stage. This prevents our ability to allow satiety to directly influence the output value, however, and means that we will have to remove such discussion.\"}", "{\"reply\": \"Thank you for your feedback. We welcome the requested references and offer responses below to the specific issues raised in the review. Briefly, we did not mean to focus on our specific approach to predicting future states. Rather, we tried to show a specific problem faced by conventional value-function approximators (which learn to predict reward value rather than state) and how it can be avoided by incorporating state prediction in a straightforward way. The simplicity of our simulations is deliberate, to clearly demonstrate the problem, showing that even non-linear value-function approximators cannot avoid it entirely. From two other reviews of this paper, we have chosen to adjust our model away from state-prediction in general, more clearly demonstrating our main point, which I will discuss further below. Nevertheless, I would like to address your comments specifically.\\n\\nFirst, it seems appropriate to briefly describe the contents of several papers mentioned in the review and how they relate to the present work. Unfortunately, I was not able to retrieve the two Schmidhuber papers that refer to the 'vector-valued adaptive critic' mentioned in the review. I was, however, kindly pointed to another paper by Schmidhuber discussing the subject:\\n\\nJ. Schmidhuber. Networks adjusting networks. In J. Kindermann and A. Linden, editors, Proceedings of `Distributed Adaptive Neural Information Processing', St.Augustin, 24.-25.5. 1989, pages 197-208. Oldenbourg, 1990. Extended version: TR FKI-125-90 (revised), Institut f\\u00fcr Informatik, TUM.\\n\\nThe typical critic (i.e., a value-function approximator) learns to predict a single reward value. In this Schmidhuber paper, a critic is presented that learns to predict multiple reward/punishment types and the associated prediction errors are backpropagated through a controller module to improve action selection. Applied to a pole-balancing task, the critic's predictive targets reflect different types of pain, or ways in which the task can fail. Schmidhuber found that using these multiple predictive targets allowed the controller to balance the pole with substantially fewer training episodes. In Nguyen and Widrow (1989), the world state is represented by six variables reflecting position and orientation information of a truck backing up. In their paper, a model is trained to predict the next state values with the ultimate goal of informing a steering controller that will minimize the final error (difference between the ideal final position and orientation of the truck and its location at the end of a series of backup cycles). In Schmidhuber and Huber (1991), the state is represented by pixel intensity sums organized to coarsely emulate a fovea. Here, a model is trained to predict the next state, that is, the pixel sums after a simulated eye movement. Also, in this paper, the model is used to inform a controller and help direct movement effectively. How do these three papers relate to the present work? 
Neural networks such as those used in these papers to predict the future state could also be used in our approach as the first stage of our two-stage architecture, albeit with some minor changes. In particular, the vector-valued adaptive critic learns to predict the same kinds of targets as our first stage.\\n\\nBut, the main point of the paper is not really about using a particular approach to predicting future states. To clarify this, it may be helpful to highlight that our work assumes that the world state provided as input is represented as a vector of features, where each feature represents the degree of presence, or salience of a particular real-world object or stimulus. This representation comes from the idea that cortical neurons become active in the presence of specific stimuli and to a lesser degree for others within a neighbourhood of similarity. It also suits the way classical conditioning modeling conventionally represents stimuli. A crucial aspect of this representation is that the features have a positive value. This is a little different from the value-vector adaptive critic in the first paper, which had linear output units (no logistic activation function). The importance of this becomes apparent when the reinforcement learning task, as in our simulations involves predicting both rewarding and punishing types of targets.\\n\\nThe primary novelty or contribution of our work is that the two-stage architecture which incorporates a state-prediction instead of a reward-prediction avoids a problem value-function approximators have in the world or state representation we have assumed. Here it is: the conventional 'state in, value out' value-function approximator or critic can mistake a feature that is strongly predictive of the omission of a reward for a punishment when it is presented alone, whereas the omission of an unexpected reward is really indicative of a neutral outcome (see paper text and previous comments for details). Similarly, a feature predictive of omission of punishment can be mistaken to predict a rewarding outcome. In contrast to value prediction, the architecture in the paper (in its present form) learns to predict specific rewarding or punishing outcomes (a portion of the total state). The difference between this and the vector-valued adaptive critic is that our approach truncates predictions below zero, recognizing that rewarding or punishing outcomes may occur or not but never anti-occur (e.g., some food or no food, but not less than zero food). Perhaps the vector-valued adaptive critic could be given logistic output units instead of linear ones to provide a similar truncation, but even so would only serve to function similarly. The bottom line is that our work demonstrates with simple models that value-function approximators have this problem and important architectural features (truncated prediction of motivational outcomes rather than reward-prediction) that can resolve it. \\n\\nI hope this clarifies the focus of the paper. Although we do presently introduce a linear state predictor that truncates negative predictions (soon to be replaced by a simple summation node with logistic activation function), it is mostly meant to assist in getting to our main point and could be replaced by other state-prediction techniques (and should be replaced if highly non-linear relationships are to be learned). 
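To make this concrete, a minimal numerical sketch follows (ordinary least squares stands in for the estimators used in the paper, and the two-feature excitor/inhibitor coding is assumed purely for illustration):

import numpy as np

# Feature 0 is an excitor P that predicts food; feature 1 is an inhibitor N.
X = np.array([[1.0, 0.0],    # P alone -> food    (reward 1)
              [1.0, 1.0]])   # P and N -> no food (reward 0)
r = np.array([1.0, 0.0])

w, *_ = np.linalg.lstsq(X, r, rcond=None)
n_alone = np.array([0.0, 1.0])

# Direct value prediction: the inhibitor alone looks like a punisher.
print(w)              # approximately [ 1., -1.]
print(n_alone @ w)    # approximately -1, although the true outcome is neutral

# Predicting the occurrence of the primary reinforcer instead, and truncating
# below zero (food can be present or absent, never negatively present),
# before applying its agent-defined value:
food_value = 1.0
print(food_value * max(0.0, n_alone @ w))   # 0 -> correctly neutral for N alone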
Again, the main point is to highlight a problem seen in straightforward value-function approximation and offer a simple solution that involves truncated state prediction rather than reward prediction. Our simplistic simulations (soon to be made more general) were designed to demonstrate this clearly.\"}", "{\"review\": \"I believe this paper mis-characterizes the Reinforcement Learning (RL) problem and the problem it purports to solve is, in fact, not a problem at all in a general reinforcement learning context.\\n\\nIn particular, the first sentence of the abstract states: \\n'Reinforcement learning treats each input, feature, or stimulus as having a positive or negative reward value', but this is simply not true in general for reinforcement learning.\\n\\nIn the most general setting, the reward at each step t+1 (r_{t+1}) is calculated as a function of the state at time t, the state at time t+1, and the action at time t. So r_{t+1} = f(s_t,a_t,a_{t+1}). Even if we specify the problem somewhat so that reward is only a function of the state (r_{t+1} = f(s_{t+1}), this is still a general enough framework to incorporate arbitrary interactions between different variables that compose the state. \\n\\nIn contrast, this paper assumes linear interactions between at least some subset of variables that specify the state.\\n\\nSuccinctly, it is not each input, feature, or stimulus that has a positive or negative reward, rather, it is the totality of the environment's state that has an associated reward. \\n\\nI am not aware of common practices in RL, so it may be that a linearity assumption such as the authors have made is commonly used in practice. And it may be that their approach to weakening this assumption could be useful in practice, or theoretically. But I find the evidence presented extremely unconvincing on this point. Primarily this is because there is no presentation of other approaches to weakening the imposed linearity constraint.\"}", "{\"review\": \"Hi, David. It's wonderful to have this venue for receiving feedback and being able to discuss our work. I think I understand your feedback and that some of the early statements of our paper have unfortunately distracted from its main theme. We agree with you that Reinforcement Learning in general is not confined to attributing a positive or negative value to each contributing feature of a state, such that the state's apparent value is always the linear sum of their effects. Rather, it encompasses more than this. We did not intend the received meaning, though we can see the value of making minor textual changes to clear this up.\\n\\nThe problem we find, though, is that given data which suggests linear contributions of individual features (the 'partial' dataset in our simulations), value function approximators (VFAs), whether linear (LR-P results in the paper) or non-linear (SV-P results), treat the features as linear, which comes as no surprise. Essentially we want to highlight a real world case where making this 'linear assumption' is actually inappropriate, since it will add to prediction errors. \\n\\nWhereas features/predictors may be associated with either reward, punishment or neither (whether linearly as individuals or in non-linear combinations as in XOR), features can also be associated with cancelling these otherwise predicted outcomes. That is, some features can predict outcomes and others can cancel those predictions. 
We show that linear and non-linear VFAs will see canceling features as having the opposite effect (say, value=-1) on the state-value as do the features which predict the associated outcomes. Then, when a predictor (value=1) and a cancelling feature (value=-1) are presented together, a neutral outcome (1-1=0) is expected. But what happens when these canceling features are presented alone, having no prediction to cancel? According to the VFAs, they are not neutral but still have their opposing effect if they have not been trained on this case (i.e., value = 0 - 1 = -1). This mistaken sense of opposing value adds to the prediction error. Ideally, one would have the 'full' dataset, which tells us the non-linear nature of such cancelling features, but we are unlikely to have it all (or most?) of the time.\\n\\nWe bring all of this up in the paper because we see a simple way to avoid the problem when only the 'partial' (i.e., 'linear') dataset is available. We change the representation or the prediction target from reward-value to the outcomes of interest (e.g. food, shock, etc.). Then, when a canceling feature of an outcome is presented alone, it's opposing (-1) contribution can be truncated (i.e., -1 -> 0) since the outcome will never be less than 0 (e.g., there will never be less than zero shock).\\n\\nIt is true that we do not show evidence for other ways to correct for or weaken the 'linear assumption' here, because we have not seen any previous acknowledgement of this issue, and thus no attempts at solving it as of yet. I wonder if this might be because having enough data (i.e., the 'full' dataset) makes the problem moot, just as having enough supervised data (i.e., an exhaustive mapping) can make the use of a deep learning representation moot, for example. Instead, I think the most important evidence we show is that even non-linear VFAs are subject to treating the partial, linear data in a linear fashion, which is inappropriate in this case.\\n\\nIn summary, we recognize the need to do some rephrasing in the early part of the paper regarding the linearity of feature contributions to state-value. Yet, we focus on showing that when the data does suggest linear contributions, they sometimes represent a specific non-linear case that VFAs fail to recognize but that can be easily accommodated without error by changing the prediction target from reward-value to outcome. We are the first to acknowledge this as far as we know, and thus offer the only existing way to avoid accruing the associated VFA prediction errors. \\n\\nTake care,\\n - Patrick\"}", "{\"reply\": [\"Thank you for your feedback. In response to your suggestions, we intend to make the following changes:\", \"reworking the abstract to clarify the problem and our contribution.\", \"expanding the number of non-linear function approximators used in simulation to demonstrate the generality of the problem (e.g., MLP and CART)\", \"the idea to use a neural network with a logistic output is a good one and we will incorporate this rather than derive the rLMS model, which distracts from the main point made in the paper\", \"also, the idea to simply break the reward prediction into positive and negative parts rather than predict all of the different rewarding and punishing outcomes makes a great deal of sense -- it will still eliminate the confusion (e.g., between reward predictors and omission of punishment predictors) and concurrently address one of the classical conditioning related concerns raised in another review. 
This will move the paper away from the state-prediction focus, though some of this will still be relevant to discuss and relate to what is being done.\", \"we will enhance the simulations to contain real values instead of binary values and vary the percentage of data points in the training set which expose the non-linear nature of the problem. This makes the simulation a little more general, allowing us more data points to work with and the ability to do a few stats. This will also give a graph showing how the different approaches perform as we move from the ``partial'' (linear relationships only) training data set toward a ``full'' data set.\"]}", "{\"title\": \"review of An Architecture for Distinguishing between Predictors and Inhibitors in Reinforcement Learning\", \"review\": \"This paper proposes to break value prediction into prediction of a specific outcome (e.g., food) and a value for that outcome allows reinforcement learning to make the correct predictions for inhibitors -- as opposed to standard RL which predicts negative value for a reward inhibitor. In practice, this is implemented by separately predicting positive (reward) and negative (punishment) values using rectified predictors. Tests on a toy problem with 16 training points show that this setting indeed avoids the usual prediction of opposite valence for an inhibitor.\\n\\nInhibitors are a difficulty for RL and present an interesting puzzle, however this paper fails to make a good case:\\n\\n1) There is no real technical contribution here: separating positive from negative values while using standard predictors has been done often (e.g. in computer vision); the toy test is exceedingly simple.\\nThe bulk of the technical part of the paper is a lengthy explanation of simple maximum likelihood estimation of a thresholded linear function.\\n\\n2) The paper is motivated as 'tak[ing] a cue from biological systems', but there are problems with this too:\\n- 'we train function approximators to predict specific primary reinforcements (e.g., food, charging station, etc.).' This makes the job of an inhibitor easier indeed, but predicts that a function approximator should not be able to produce trans-reinforcer blocking, where a predictor trained on one specific primary reinforcement (e.g., shock) blocks conditioning to another reinforcement (e.g., loud noise). See:\\nBakal, C. W., Johnson, R. D. & Rescorla, R. A. The effect of change in US quality on the blocking effect. Pavlov. J. Biol. Sci. 9, 97\\u2013103 (1974).\\n- 'we might take a cue from biological systems, where it seems that the dynamic revaluation of a primary reinforcer due to satiety is second nature': while this is true of primary reinforcers, this is not always true of secondary reinforcers, that RL needs to train as well. There is a lot of literarture on sensitivity to outcome devaluation, e.g.:\\u00a0\\nMotivational control of goal-directed action\\nAnthony Dickinson, Bernard Balleine\\nAnimal Learning & Behavior\\nMarch 1994, Volume 22, Issue 1, pp 1-18\\n\\nIn its current form, the paper does not seem to offer enough for acceptance, in terms of either technical novelty or insight into natural behavior.\"}", "{\"review\": \"The reviewers brought up very good comments so that we previously considered withdrawing the paper to have more time to work on it. We now feel we that we are able to address the reviewer's concerns, especially given your recent indication of acceptance. Therefore, we now retract our statement of withdrawal. 
Below we answer each of the reviews and will soon incorporate the associated changes into our paper. We apologize for the lateness of our responses, but have only come to this decision recently.\", \"to_summarize_the_most_significant_proposed_changes\": [\"Adjusting the abstract and opening paragraphs to avoid implying that reinforcement learning does something in general that it does not do (see David's review and my response above)\", \"Replace the partial state-prediction of the first stage of our architecture with a reward-prediction and punishment prediction learners, which are rectified.\", \"Enhancing the simulations to include real values instead of binary only and to vary the number of data points that expose the non-linear nature of the problem. Also, we will add two non-linear VFAs (MLP and CART) to show that the problem with VFAs is not specific to support vector regression.\", \"Strengthening the discussion by providing greater distinction between the present work and previous work (as noted in the first anonymous review) and address the classical conditioning comments of the second anonymous review.\", \"This represents a significant change in the substance of the paper, but the ultimate conclusions do not change. The changes will result in focusing more squarely on our main point and strengthening the support for it through more generalized simulations.\", \"Take care,\", \"Patrick Connor (and Thomas Trappenberg)\"]}", "{\"title\": \"review of An Architecture for Distinguishing between Predictors and Inhibitors in Reinforcement Learning\", \"review\": \"Summary\\nThis paper proposes a value function approximation architecture which explicitly models inhibition effects for reinforcement learning. In this context, inhibition refers to a stimulus that eliminates a reward when presented along with a stimulus which usually produces a reward. For example, the stimulus pair \\u2018NP\\u2019 does not produce a reward whereas \\u2018P\\u2019 alone does, and \\u2018N\\u2019 alone produces a reward of 0. The authors address the issue of value function approximators not correctly modeling \\u2018N\\u2019 in isolation having reward 0, as opposed to a negative reward. The authors propose decomposing the value approximation problem into two halves, one to estimate negative rewards and another to estimate positive rewards. The outputs of these functions are then combined to form a final value estimate. The authors derive a linear regression function approximator which rectifies its output to properly fit into the proposed two-stage scheme. The method is evaluated on a toy dataset of 16 examples which illustrates the problem.\\n\\nReview\\nThe abstract is very hard to follow. Please rewrite it to more clearly describe the problem and your contribution.\\n\\nThe problem of confusing negative reward stimuli with inhibitors does not seem inherent to the reinforcement learning problem. Instead it seems to be a property of the chosen value function approximation algorithm. The authors should discuss, and perhaps test, whether a more expressive class of value function approximators exhibit this property. Given the conference, a neural network function approximator could be appropriate. Additionally, why not decompose the estimator into positive / negative reward estimates but use neural nets to approximate each half? They can naturally output rectified rewards without additional derivation. \\n\\nThe derivation equations are easy to follow, but their motivation is unclear. 
Please add more text discussing why it is necessary to make such derivations instead of choosing a different estimator\\n\\nThe experimental evaluation leaves much to be desired. The toy example is nice as a control experiment, but far from comprehensive. I would like to see a larger problem in which your approach performs better or discovers inhibition structure other value function approximators can not discover.\\n\\nKey points\\n+ Proposed approach of decomposing value function approximation seems interesting and could lead to approximating functions which are easier to understand or debug\\n- Unclear whether a more expressive class of value function approximators would fail to learn about inhibition stimuli automatically\\n- Toy dataset evaluation\\n- Writing is okay but does not clearly introduce the problem and motivate the approach\"}", "{\"title\": \"review of An Architecture for Distinguishing between Predictors and Inhibitors in Reinforcement Learning\", \"review\": \"An Architecture for Distinguishing between Predictors and Inhibitors in Reinforcement Learning\\nPatrick C. Connor, Thomas P. Trappenberg\", \"summary\": \"The authors discuss benefits of reinforcement learners with a supervised module that predicts primary reinforcements and a reinforcement learner which sums their values.\\n\\nQuite a few references to relevant previous work seem to be missing, e.g., see below.\", \"p_2\": \"'The notion of predicting future observations or states is not new to RL [9, 10].'\\n\\nThese references are of 2004 and 2005. Two much older references from 1990 on this:\\n\\nR. S. Sutton. First Results with DYNA, an Integrated Architecture for Learning, Planning and Reacting. Proceedings of the AAAI Spring Symposium on Planning in Uncertain, Unpredictable, or Changing Environments, 1990.\\n\\nJ. Schmidhuber. An on-line algorithm for dynamic reinforcement learning and planning in reactive environments. In Proc. IEEE/INNS International Joint Conference on Neural Networks, San Diego, volume 2, pages 253-258, 1990.\", \"p_6\": \"'the architecture we propose is not trained using a scalar reward prediction error, but rather a vector of state feature-speci\\ufb01c prediction errors'\", \"so_it_is_like_the_vector_valued_adaptive_critic_for_multi_dimensional_reinforcement_signals_of_the_old_system_from_1990_below___please_discuss_the_differences\": \"J. Schmidhuber. Recurrent networks adjusted by adaptive critics. In Proc. IEEE/INNS International Joint Conference on Neural Networks, Washington, D. C., volume 1, pages 719\\u2013722, 1990\\n\\nJ. Schmidhuber. Additional remarks on G. Lukes\\u2019 review of Schmidhuber\\u2019s paper \\u2018Recurrent networks adjusted by adaptive critics\\u2019. Neural Network Reviews, 4(1), 1990.\\n\\nBelow other old predictors of state feature-speci\\ufb01c prediction errors. RL is based on training a recurrent system that combines the world model and the actor. Which are the relative advantages or drawbacks of the approach of the authors?:\\n\\nN. Nguyen and B. Widrow. The truck backer-upper: An example of self learning in neural networks. Proceedings of the International Joint Conference on Neural Networks, 357-363, IEEE Press, Piscataway, NU, 1989\\n\\nJ. Schmidhuber and R. Huber. Learning to generate artificial fovea trajectories for target detection. 
International Journal of Neural Systems, 2(1 & 2):135-141, 1991\", \"experiments\": \"this is a very simple RL task.\", \"general_recommendation\": \"It is not quite clear to this reviewer how this work goes beyond the previous work from 20 years ago mentioned above. At the very least, the authors should make the differences very clear.\"}" ] }
6rEnMF1okeiBO
Low-Rank Approximations for Conditional Feedforward Computation in Deep Neural Networks
[ "Andrew Davis", "Itamar Arel" ]
Scalability properties of deep neural networks raise key research questions, particularly as the problems considered become larger and more challenging. This paper expands on the idea of conditional computation introduced by Bengio et al., where the nodes of a deep network are augmented by a set of gating units that determine when a node should be calculated. By factorizing the weight matrix into a low-rank approximation, an estimate of the sign of the pre-nonlinearity activation can be obtained efficiently. For networks using rectified-linear hidden units, this implies that the computation of a hidden unit with an estimated negative pre-nonlinearity can be omitted altogether, as its value will become zero when the nonlinearity is applied. For sparse neural networks, this can result in considerable speed gains. Experimental results using the MNIST and SVHN data sets with a fully-connected deep neural network demonstrate the performance robustness of the proposed scheme with respect to the error introduced by the conditional computation process.
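For orientation, a minimal numpy sketch of the sign-estimation idea described in this abstract (the rank, layer sizes, and the use of a plain truncated SVD here are illustrative assumptions rather than the paper's exact procedure):

import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((512, 256))    # hidden units x inputs
b = rng.standard_normal(512)
x = rng.standard_normal(256)

# A rank-k factorization of W gives a cheap estimate of each pre-activation's
# sign: roughly k*(n_in + n_out) multiplies instead of n_in*n_out.
k = 16
U, s, Vt = np.linalg.svd(W, full_matrices=False)
approx_pre = U[:, :k] @ (s[:k] * (Vt[:k, :] @ x)) + b
active = approx_pre > 0

# Compute exact pre-activations only where the estimate is positive; units
# estimated negative are skipped, since the rectifier would zero them anyway.
h = np.zeros(W.shape[0])
h[active] = np.maximum(0.0, W[active] @ x + b[active])

full = np.maximum(0.0, W @ x + b)
print(active.mean(), np.abs(h - full).max())   # fraction computed, error introduced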
[ "approximations", "conditional feedforward computation", "deep neural networks", "key research questions", "problems", "become larger", "challenging", "idea", "conditional computation" ]
submitted, no decision
https://openreview.net/pdf?id=6rEnMF1okeiBO
https://openreview.net/forum?id=6rEnMF1okeiBO
ICLR.cc/2014/conference
2014
{ "note_id": [ "5LNqFhzCG2FPM", "_HtzHMbQzfHQd", "XahgXeP_L0ZYU", "_AcWXVswj-XvN", "wMWZGo27UK-XQ" ], "note_type": [ "review", "review", "comment", "comment", "review" ], "note_created": [ 1392081000000, 1390272720000, 1393472940000, 1393544220000, 1390968540000 ], "note_signatures": [ [ "anonymous reviewer b1e7" ], [ "anonymous reviewer 6a1b" ], [ "anonymous reviewer 6a1b" ], [ "Andrew Davis" ], [ "Andrew Davis" ] ], "structured_content_str": [ "{\"title\": \"review of Low-Rank Approximations for Conditional Feedforward Computation in Deep Neural Networks\", \"review\": \"The authors investigate a proposed method of speeding up computation in a feed-forward ReLU network by predicting the sign of the presynaptic activations with a low-rank predictor obtained by SVD on the weights.\", \"novelty\": \"medium\", \"quality\": \"low\", \"pros\": [\"Investigates a problem of substantial interest to the community, that of scaling up neural nets through what Yoshua Bengio has dubbed \\u201cconditional computation\\u201d\", \"Experimental procedures are well documented, software cited\"], \"cons\": [\"misinterprets the goals articulated by Bengio on conditional computation: namely, to explicitly learn which computations to do/which parts of an architecture to activate, rather than simply identify ways of speeding up existing architectures through predictive computation. In that respect it could still be an interesting line of inquiry but not really \\u201cconditional computation\\u201d related.\", \"Provides no empirical benchmarking: I strongly suspect an empirical investigation of the proposed speedup would reveal that the cache unfriendliness of the non-uniform memory access would result in a significant slowdown even for large, relatively sparse network.\", \"The commentary contains several instances of speculation that is not qualified as such (see below)\", \"The experimental baselines are questionable, uninteresting, and applying the same sort of fully connected architecture to only two tasks, one of which has not been widely studied with fully-connected networks and thus hard to judge. Especially in the first layer it seems like the efficacy of the low rank predictor may depend heavily on the input distribution.\"], \"detailed_comments\": [\"Sec 2.1:\", \"The speculation that parameter redundancy leads to activation redundancy is very questionable. Consider two filters consisting of oriented, localized edge filters, one appearing exclusively in the upper left quadrant of the receptive field and the other appearing in the lower right, with 3 of 4 quadrants having zero weights in each filter. The two have extremely redundant parameters (only a few bits are needed to describe one given knowledge of the other), but their activations are not redundant in the least.\", \"Sec 2.2:\", \"Both of the datasets on which you run experiments have had extremely competitive (state of the art, in the case of SVHN) performance documented with maxout activations, where activities are completely non-sparse. The importance of representational sparsity is thus far less clear than you seem to suggest.\", \"Sec 3.1:\", \"Notational comment: sigma() is typically reserved for the logistic function, or at least a sigmoidal function such as tanh. Its use to represent the ReLU activation is somewhat jarring.\", \"You can probably assume that a reader of this paper understands how to perform matrix multiplication. 
The exposition in terms of dot products is unnecessary.\", \"Regarding considerable speed gains, I find this dubious given the degree to which optimized matrix-matrix multiplication libraries (the BLAS, etc.) leverage the properties of today\\u2019s CPU and GPU hardware. In sparse linear algebra applications where sheer memory requirements are not the limiting factor (i.e. making explicitly representing the sparse matrix infeasible), sparsity well in excess of 99% is typically necessary for specialized sparse matrix multiplication routines to beat BLAS on a sparse problem represented densely. While you may be able to claim an asymptotic speedup, it\\u2019s unclear whether this means anything in practice for a wide class of problems of interest. Even if things are sparse enough for there to be a savings, it\\u2019s unclear whether this would be negated by the necessity of performing the low-rank computation as well.\", \"It would be interesting and comforting to see some analysis (theoretical or empirical) of the probability of the low rank approximation making a mistake (and each kind of mistake, as mistakenly flagging a filter response as having a positive value results in no actual error being made in the final analysis, as it will be computed and then presumably thresholded to zero, whereas not computing one that should\\u2019ve been positive will affect the representation passed up to the next layer). If you assume bounded norms of the weight vectors I\\u2019m fairly sure you could say something theoretically as a function of the rank of the approximation.\", \"I\\u2019d also be interested in how the activation estimator\\u2019s efficacy differs for the first layer and subsequent layers, given that the latter have a more sparse input distribution, and whether preprocessing helps or not. Similarly, how does error accumulate through this process when you use it at multiple layers?\", \"Sec 3.4:\", \"multiplying an asymptotic O(...) expression by a constant makes no sense. O() notation is explicitly defined to absorb all multiplicative constants. What you should do instead is either a) make use of a numerical methods textbook pseudocode implementation of SVD to bound the number of theoretical flops, or b) drop the O(...), add a multiplicative constant (which you could bound) and add O(...)\\u2019s for the lower-order terms.\", \"Sec 4.1 & 4.2\", \"Generally, the lack of comparison to the literature for your baselines is troubling. I don\\u2019t know what the state of the art is for a permutation-invariant method (such as a fully connected network) on SVHN, but your nets seem critically underresourced. I know that layer sizes for an MNIST network achieving close to 1% typically features layer sizes at least double those of what you report, and it seems like you are underfitting significantly on SVHN. The whole point of conditional computation is to save computation time in very large models, but you restrict your investigation to extremely impoverished architectures.\", \"Your MNIST results are significantly worse with what is the permutation-invariant state of the art standard nowadays and there\\u2019s really no excuse for it, an error of 1.05-1.06% is easily reproducible. 
At the very least it would frame the error rates in terms of results with which people are familiar.\", \"The architectures of the activation estimators seem arbitrary and it is not clear how they were chosen, or why a wider variety were not explored.\", \"In addition to the error rates for various activation estimator architectures, you should make clear the theoretical speedup provided by this architecture, and the measured empirical speedup.\"], \"section_5\": [\"The extension to the convolutional setting is not obvious to me at all. Sure, you can write out the convolution as a matrix multiply with a structured sparse matrix but the SVD of that is going to be dense and gigantic.\", \"There\\u2019s no mention of the possibility of learning the low-rank approximation, which seems like the natural thing to do, especially as concerns test-time speed-up rather than train-time speedup. I also consider it critical to compare against the case where there is no full rank weight matrix, just this low rank version being used as the actual weight matrix, to show that maintaining and carefully multiplying by the full rank weight matrix actually is in some respect a necessary ingredient. If you can get away with simply learning factored weight matrices (i.e. introducing bottleneck linear layers) then this scheme is needlessly complicated.\"]}", "{\"title\": \"review of Low-Rank Approximations for Conditional Feedforward Computation in Deep Neural Networks\", \"review\": \"Summary of contributions: Proposes a specific means to achieve Bengio's goal of 'conditional computation'. The specific mechanism is to use the sign of approximate activations to predict which activations are zero, so that the full-rank values of rectified linear units don't need to be computed for units that are probably 0.\", \"novelty\": \"moderate\", \"quality\": \"low\", \"pros\": \"-Demonstrates that the sign approximation method slightly improves test error.\", \"cons\": \"-The main purpose of the work is to reduce computational cost, but computational cost is never quantified\\n-It's not obvious that predicting which units are zero will lead to the ability to reduce computation cost\\n-If computational cost can in fact be reduced, it's not clear that the cost of the predictor is less than the cost that its use could remove\\n-The baselines that are improved upon are not competitive\\n-It's not clear that the hyper parameters were chosen in a way that makes the comparisons fair\\n-Some of the commentary is misleading or questionable\", \"detailed_comments\": \"pg 1\\nI don't think you can necessarily conclude that the computational requirements to train a large net decreased between the publication of [12] and the publication of [4]. The purpose of [12] was largely to demonstrate how many machines their system could use. At the ICML poster session where [12] was first presented, I heard someone else ask Quoc Le if the number of machines was actually necessary to get the results they had, and he said the high number of machines was not necessary, they just wanted to claim credit for networking that many machines to train a neural net. It's likely that most of the savings in [4] relative to [12] are just due to the fact that most of the computation in [12] was wasted.\\n\\npg 2\\n\\nSection 2.1\\n\\nI don't think you can conclude that redundancy of parameters implies redundancy of activations. 
For example, two units with the same weight vector have very redundant parameters, but changing just their bias unit can give them very different activations. Presumably nets would not benefit much from increased size if the units were truly redundant.\\n\\nSimply being able to predict which units in a net are going to have zero activation doesn't necessarily mean that you will get computational savings. You need the cost of predicting which units are zero to be cheap enough that on average running both the predictor net and the predicted subset of the main net is cheaper than running just the main net. You also need some mechanism for cheaply running subsets of the main net. Doing sparse matrix operations with an arbitrary, dynamically updated sparsity mask is not significantly more efficient than just doing dense operations, to my knowledge.\\n\\nI think you're missing the fundamental idea of conditional computation, at least as described in [2] and [3]. Bengio is not advocating figuring out cheaper ways of computing exactly the same function we already compute. He's advocating figuring out how to compute only some of the features that are most relevant to processing a particular input. What you're doing is more of a software engineering optimization where you figure out which features are zero. Bengio is advocating something more ambitious, figuring out which features are unnecessary even if they are nonzero. Another issue with your approach is that you're trying to individually predict which features are nonzero. This means you need to be able to have the predictor make an explicit decision for each of them individually, so your runtime is going to have asymptotic cost which is at least linear in the number of units. Bengio is advocating making predictions about entire groups of units at the same time. If done right, this has the chance of having cost only logarithmic in the number of features in the model, or, to phrase it more excitingly, being able to run a model that has exponentially more features than runtime and memory requirements.\\n\\nSection 2.2\\nI'm not sure sparsity of the representation is the correct explanation for the performance of the rectifier nets in [7]. The sparse rectifier nets in [7] have sparse activations, but they have equally sparse gradients. i.e., the gradient through a unit is zero for an example if and only if the activation is zero for that unit. This paper ( http://arxiv.org/pdf/1302.4389v4.pdf ) shows that similar nets can perform well if they have sparse gradients but non-sparse activations. See especially Fig 2. of that paper. So sparse gradients may be more important than sparse activations.\\n\\n\\npg 3\\n\\nYour presentation is confusing because you refer the reader to fig 3.1 before defining S, but S is used in the caption of this figure.\\n\\nUsually one does not use italic letters when denoting the 'sgn' function, since italic letters are used for algebraic variables.\\n\\npg 4\\n\\nSection 3.3\\n\\nI don't understand the paragraph on dropout and sparsity. The most problematic sentence is 'During training, the sparsity of the network is likely less than p for each minibatch.' Does 'more sparsity' mean fewer active units? So less sparsity means more active units? If we set p to zero, this implies that every unit is likely active. But that is the default SGD training algorithm, in which we certainly know that not all the units are active. Also, are you counting units zeroed by dropout as contributing to the sparsity? 
What about units that are zeroed by their input being negative? Why should their be any straightforward and simple relation between the dropout p and the number of units zeroed by their inputs being negative?\\n\\n\\nSection 3.4\\n\\n1 is probably a pretty high value for the initial biases. Was dropout applied to the input as well as the hidden layers? If so, p = 0.5 is probably too high for use on the input.\\n\\n\\npg 5\\n\\nTable 4.1: Where did these hyper parameters come from? Why should the same hyper parameters be used for all conditions? How can we know that your 'control' curve in Fig 4.1 is a good baseline to improve over? If you hand-tuned your hyper parameters to work well with the S predictor net active, then it's not very surprising the control would perform worse. I'm not saying you intentionally did this, but without a description of your methodology for obtaining the hyperparameters, it seems likely that you implicitly did this while trying to get your new method to work.\\n\\nIt seems like your experiments are bypassing the most important part of the paper, which is to demonstrate a speedup. I don't see anything about timing experiments here. I suspect you can't actually get a speedup by this approach due to the difficulty of leveraging a speedup from sparsity if the sparsity pattern is dynamic rather than fixed, also due to the difficulty of leveraging sparsity on GPU if there is not a regular structure to the sparsity pattern. Moreover, as far as I can tell, you don't quantify the cost of frequently running SVD during training.\\n\\nAre your nets actually large enough to be interesting? The bigger of the two has about three million parameters and doesn't seem to be able to fit the SVHN dataset all that well. Conditional computation is mostly interesting for its ability to push the frontier of network capacity, so I'm not sure the scenario you've experimented on is really interesting from the perspective of evaluating a conditional computation method.\\n\\npg 6\\n\\nSimilar criticism for MNIST, except here I'm confident that your baseline control system is not good. With dropout you should be able to get down to < 1% test error.\"}", "{\"reply\": \"I don't think the revised paper really addresses my main concern, which is that it's extremely difficult to get a speedup in practice from the proposed method. I think you'll have quite a lot of difficulty doing so with cuda-convnet, since GPUs generally do not handle applications with a lot of branching well.\\n\\nWhile I understand that you're trying to see how much your method degrades neural net performance rather than trying to get a new state of the art result, I think it is important to work with state of the art methods and large models. When starting with a small or underperforming baseline, you might obtain a result showing that your method doesn't harm performance, merely because the base performance was poor and easily captured by a simpler model family.\\n\\nThe paper by Leonard et al is quite a bit different than what you are doing. You are trying to learn which values are zero after using a standard training procedure. Leonard et al are training a net to intentionally *make more of the activations zero*. Their procedure explicitly groups the zeros into contiguous blocks, so that the indexing logic required to ignore the zeros is cheap. Also, the secondary network used to determine which blocks should be computed is significantly smaller than the primary classification network. 
In your case, the secondary network has the same number of outputs as the primary network has hidden units. Despite all of these advantages over your approach, Leonard et al still didn't demonstrate a speedup, so I think your approach will really be fighting an uphill battle to get one.\\n\\nGetting 1.05% or below on MNIST is not hard with dropout. Grad students from both Nando de Freitas and Yoshua Bengio's lab have replicated that result without any trouble.\"}", "{\"reply\": \"6a1b,\\n\\nThanks for the reply -- always good to have constructive feedback.\\n\\n> I don't think the revised paper really addresses my main concern, which is that it's extremely difficult to get a speedup in practice from the proposed method. I think you'll have quite a lot of difficulty doing so with cuda-convnet, since GPUs generally do not handle applications with a lot of branching well.\\n\\nYou are correct -- for generic matrix multiplication, obtaining a speedup will be difficult. Even for matrix multiplications with highly structured outputs (eg., multiplying an upper-triangular matrix by another upper-triangular matrix will yield another upper-tri matrix, so we don't need to calculate the lower triangular entries), the formulation is difficult and doesn't quite perform at 2x (in the case of tri * tri) even when implemented efficiently.\\n\\nThe bulk of the computation for CNNs occur in the first few layers. When plugging in values for the layer-wise sparsity and low-rank approximations, full-sparsity on the output layers for an Imagenet-sized network hardly impacts the relative change in FLOPs. Therefore, we are focusing mainly on implementing the conditional computation for CNNs at this point.\\n\\nWhile it is true that GPUs are not generally efficient when following arbitrary branches for different threads (warp divergence), NVIDIA GPUs can happily follow branches, as long as every thread in the warp follows the branch. We are working on a CUDA kernel that works on 8x8 filters (64 pixels), so when we skip a step in the convolution for a particular image and particular filter, precisely two warps follow the same execution path. There is some overhead in formulating the convolution in this way (we have to use more local memory and we have to do reductions, which wastes threads as we proceed through the reduction tree), but the path ahead looks promising. We plan to extend this idea to 4x4 filters and 12x12 filters, where we can skip pairs of 4x4 filters or 12x12 filters at a time. (ie., two 4x4 filters, or 32 pixels, are calculated together in one warp. If we have to calculate one, then we calculate both. If we do not have to calculate either, skip the computation. There is a similar scheme for 12x12, but we only 'waste' 16 pixels worth of calculation if we have to calcuate the step in the convolution for either filter, because the 'boundary' between the two filters spills over a warp).\\n\\n> While I understand that you're trying to see how much your method degrades neural net performance rather than trying to get a new state of the art result, I think it is important to work with state of the art methods and large models. When starting with a small or underperforming baseline, you might obtain a result showing that your method doesn't harm performance, merely because the base performance was poor and easily captured by a simpler model family.\\n\\nThis is an excellent point. 
We look forward to the results when applying this approach to larger networks.\\n\\n> The paper by Leonard et al is quite a bit different than what you are doing. You are trying to learn which values are zero after using a standard training procedure.\\n\\nNot necessarily -- in this paper, the conditional computation is applied during the entirity of training.\\n\\n> Leonard et al are training a net to intentionally *make more of the activations zero*. Their procedure explicitly groups the zeros into contiguous blocks, so that the indexing logic required to ignore the zeros is cheap.\\n\\nCuriously, I'm not seeing anything in this reference talking about encouraging the zeros of the hidden activation to be contiguous. Perhaps you have another reference that I am not aware of?\\n\\n> Also, the secondary network used to determine which blocks should be computed is significantly smaller than the primary classification network. In your case, the secondary network has the same number of outputs as the primary network has hidden units. Despite all of these advantages over your approach, Leonard et al still didn't demonstrate a speedup, so I think your approach will really be fighting an uphill battle to get one.\\n\\n'Experiments are performed with a gater of 400 hidden units and 2000 output units. The main path also has 2000 hidden units. A sparsity constraint is imposed on the 2000 gater output units such that each is non-zero 10% of the time, on average.' I see that the gater and the hidden units are always of the same size. Are we looking at the same paper?\\n\\n> Getting 1.05% or below on MNIST is not hard with dropout. Grad students from both Nando de Freitas and Yoshua Bengio's lab have replicated that result without any trouble.\\n\\nBy adding noise to the input layer (setting pixels to zero with p=0.2), I've been able to bring MNIST down to ~1%. Initial results show similar behavior when perturbed with an increasingly harsh bottleneck on the gating layer. I would like to revise the paper with these results, but I am not sure if we are past the deadline for resubmission. \\n\\nI've gotten SVHN down to ~7% by adding noise to the input, but I suspect the results will always be very far from state-of-the-art with a permutation invariant network.\"}", "{\"review\": \"-Demonstrates that the sign approximation method slightly improves test error.\\nI'm not sure where you're seeing this -- on the test set, the sign approximation always increased the test error (as a function of how poor the approximation was). One should expect the method to make test error slightly worse as a trade-off for faster computation. This approach does not try to improve generalization -- it is forming the basis of a method to trade-off between speed and accuracy.\\n\\n-The main purpose of the work...\\n-It's not obvious...\\n-If computational cost can in fact be reduced...\\nThese three points have been addressed in a new revision of the manuscript. There are many parameters that determine the theoretical speedup (size of each layer, how approximately the weight matrix can be calculated, how sparse the activations are), so the answer (that is, the overall speedup) depends on these factors.\\n\\n-The baselines that are improved upon are not competitive\\nThe baselines are not intended to be competitive to state-of-the-art methods. 
The emphasis is on investigating how the activation estimation degrades the training of the neural network.\\n\\n-It's not clear that the hyper parameters were chosen in a way that makes the comparisons fair\\nWe've added a description of how the hyperparameters were chosen to the manuscript.\\n\\n-I don't think you can necessarily conclude...\\nInteresting, I did not know this -- I was going off of the introduction in Coates et al's paper.\\n\\n-I don't think you can conclude...\\nI disagree. If the biases are part of the weight matrix (an okay thing to do), then we observe the following:\\n\\na in R^{N \\times d}, W in R^{d \\times h}, U in R^{d \\times k}, V in R^{k \\times h}:\\n\\nUV = hat{W}\", \"assuming\": \"rank(hat{W}) = k\\nrank(a) = d\", \"then\": \"rank(ahat{W}) le min(rank(a), rank(hat{W})) = min(d, k)\\n\\nBecause k < d (otherwise, we are wasting time with the conditional computation, because the activation estimator would take longer than the regular feed-forward), the rank of the resulting activations can be fairly well represented by a linear projection of lower-rank, indicating redundancy in the same sense that the weights are redundant. Redundant does not necessarily mean useless -- the extra dimensionality gives us some more 'wiggle-room' for discrimination. The key observation is that we may be able to 'get away with' a very low-rank representation of 'W' if we are only interested in recovering the sign of 'a'.\\n\\n-Simply being able to predict...\\nAgreed -- the activation estimator is designed to be efficient, but it is certainly possible to be slower overall if a bad choice is made for the rank, or if the activations are not sufficiently sparse. There are likely smarter and faster ways of doing the conditional computation that have not been investigated. We've added a section describing the number of FLOPS required for a regular neural net and a neural net augmented by this conditional computation scheme.\\n\\n-You also need some mechanism...\\nI agree. This part will require the most effort to implement well. It is hard to beat off-the-shelf BLAS libraries, but the proposed matrix-multiplication would not be a sparse-sparse (ie., memory-bound) operation. It would be dense-dense, with some additional control structure to allow certain dot-products to be skipped. This will surely add some overhead, and it is hard to speculate exactly how much. Using the equations in Section 3.4 (the new section on the FLOPS required), there could be as much as a 5-20x speedup in feed-forward for an Imagenet-size network, depending on sparsity and rank. As such, we feel it is worth investigating this special matrix multiplication.\\n\\n-I think you're missing the fundamental idea...\\nWe disagree. Bengio, Leonard, and Courville put a preprint on arXiv (http://arxiv.org/abs/1308.3432, citation [3]) using a similar scheme, but differing in execution -- an auxiliary network off to the side that decides which neurons need to be calculated, which is trained by backpropagation (with a sparsity constraint). This approach is still at least linear in the number of hidden neurons. Implementing a network that is exponential in the number of features will likely take an incremental effort, and we feel that this approach is a good starting point.\\n\\n-I'm not sure sparsity of the representation...\\nThanks for pointing this out -- indeed, the activations of maxout networks are dense, making the statement in the paper about sparsity too strong. 
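To make the 'recover the sign from a low-rank projection' argument above concrete, here is a minimal numpy sketch of the kind of estimator we have in mind; the layer sizes, the rank k and the random data are illustrative placeholders rather than the settings used in the paper:
```python
import numpy as np

rng = np.random.default_rng(0)
d, h, k = 784, 1000, 50             # input dim, hidden units, rank of the estimator (assumed)
W = 0.01 * rng.standard_normal((d, h))
b = np.ones(h)                      # biases started at a high value, as discussed above

# Recomputed only once in a while during training; this is where the SVD cost goes.
U, s, Vt = np.linalg.svd(W, full_matrices=False)
Uk = U[:, :k] * s[:k]               # d x k
Vk = Vt[:k, :]                      # k x h

def conditional_relu_layer(x):
    """ReLU layer that only evaluates columns the low-rank estimator predicts to be positive."""
    guess = x @ Uk @ Vk + b         # costs O(d*k + k*h) instead of O(d*h)
    active = guess > 0              # predicted sign of each pre-activation
    out = np.zeros(h)
    # An optimised kernel would skip the inactive dot products entirely;
    # the fancy indexing here is only meant to show the logic.
    out[active] = np.maximum(0.0, x @ W[:, active] + b[active])
    return out

x = rng.standard_normal(d)
exact = np.maximum(0.0, x @ W + b)
print(np.mean((conditional_relu_layer(x) > 0) == (exact > 0)))  # sign agreement of the estimator
```
The whole point of the rank argument is that k can be much smaller than d while the mask produced by the estimator still agrees with the exact signs for most units.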
We only meant to say that sparse representations work well in a lot of cases, so we should take advantage of them.\\n\\n-Your presentation is confusing...\\nThank you for pointing this out, I have fixed this.\\n\\n-I don't understand the paragraph on dropout...\\nI have revised the wording here, apologies for the confusion. There is no obvious connection to the dropout rate and the eventual sparsity of the network, in the sense of 'given a dropout rate p, the sparsity of the activations of the geometric-mean network will be q'. It is simply a side-effect that a neural net trained with dropout tends to be more sparse than a network that is trained without.\\n\\n-1 is probably a pretty high value for the initial biases.\\nWe used a suggestion in the Imagenet paper -- set the biases to a high value so the neurons are mostly active for the initial part of training. We had good initial results with it, so we did not explore other values when it came to tuning hyperparameters.\\n\\n-Was dropout applied to the input...\\nWe were not using dropout on the input layer (is this commonly done outside of the context of denoising autoencoders? I'd be interested in seeing a reference for this.) Thanks for pointing this out, I've clarified this in the new manuscript.\\n\\n-Where did these hyper parameters come from?...\\nI've updated the manuscript to detail the hyperparameter selection (done in the usual way -- split the training set into a training and a validation set, try several sets of hyperparameters, select the set of hyperparameters that yields the best validation set.) The hyperparameters were chosen using the baseline network and were not used to guide the performance of the predictor network. You may be misunderstanding the point of this work, which is to investigate the degradation of performance as the activation estimator becomes more approximate -- we are not trying to improve the performance of neural nets with the activation estimator. The interesting part is how approximate we can make it before performance suffers noticeably.\\n\\n-...your experiments are bypassing the most important part of the paper, which is to demonstrate a speedup...\\nThis is true, as the conditional matrix multiplication has not yet been implemented in an efficient way. The implementation of the conditional matrix multiplication will certainly have some overhead, but we do not expect it to be as great as the overhead seen in sparse-sparse operations. There are two reasons for this:\\n1) We are still ultimately doing a dense-dense matrix multiplication, so we don't need to spend any time creating sparse matrices.\\n2) We will still be able to take advantage of things like locality, caching, vector operations, (all of which are mostly off the plate with sparse operations) so we expect the operation to still be mostly compute-bound (as opposed to sparse operations which are memory-bound). We are beginning to work on a CPU as well as a GPU implementation.\\nA conditional matrix multiplication could also allow us to only calculate the backpropagation through the required (ie., non-zero) neurons, which could lead to similar speed gains in the backpropagation phase.\\n\\n-Moreover, as far as I can tell, you don't quantify the cost of frequently running SVD during training.\\nThe cost of the SVD is fairly insignificant because it is evaluated so infrequently (without impacting the accuracy of the estimator too greatly). On my machine, evaluating an SVD takes ~0.30s for a 1000x784 matrix. 
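To put that SVD cost in context against the per-layer arithmetic it is meant to save, a back-of-the-envelope FLOP count in the spirit of the new Section 3.4 could look as follows; the layer size, rank and sparsity are assumed values for illustration, not measurements:
```python
# Rough FLOP count for one fully-connected layer, with and without the sign estimator.
d, h = 4096, 4096        # input and output dimension of the layer (assumed)
k = 256                  # rank of the low-rank estimator (assumed)
rho = 0.15               # assumed fraction of units predicted to be active

dense_flops = 2 * d * h                      # ordinary feed-forward
estimator_flops = 2 * d * k + 2 * k * h      # x @ U_k, then (x @ U_k) @ V_k
masked_flops = 2 * d * int(rho * h)          # dot products for the predicted-active units only
conditional_flops = estimator_flops + masked_flops

print("hypothetical speedup: %.1fx" % (dense_flops / conditional_flops))
```
Whether a real kernel can realize anything close to this ratio is, of course, exactly the implementation question discussed in this thread.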
Each epoch for the SVHN net took ~300s. We've found some interesting ways to speed up successive SVD calculations with warm start methods (faster factorization given a closer starting point to the final factorization), but we have not experimented with them yet. Admittedly, the SVD is a pretty heavy operation, and we would like to move away from it, especially considering that there are many properties of the SVD that aren't crucial for this approach to work.\\n\\n-...The bigger of the two has about three million parameters and doesn't seem to be able to fit the SVHN dataset all that well...\\nThe codebase was confined to using a fully-connected net, so it is not surprising that the SVHN results were far from state-of-the-art. The results are comparable to the stacked auto-encoder results reported in 'Reading Digits in Natural Images with Unsupervised Feature Learning' (Netzer, et al).\\n\\n-Are your nets actually large enough...\\nGood question. deeplearntoolbox turned out to be a slightly inappropriate codebase (a very slow convnet implementation, no GPU support) for the purposes of this work, and I am currently modifying cuda-convnet to work with conditional computation. We look forward to applying this method to large convolutional networks trained on more difficult datasets, such as Imagenet.\\n\\n-...I'm confident that your baseline control system is not good...\\nIn Nitish Srivastava's MS thesis, he reports 1.25% for a fully-connected relu network with dropout, and 1.05% for a fully-connected relu network with dropout and a max norm constraint. It is very difficult to get less than 1% on the permutation-invariant task without resorting to things like elastic deformations. While the reported ~1.4% in this work is far from these results, I believe the 0.4% difference in performance can be attributed to a less-than-optimal hyperparameter search, or perhaps some subtle difference in implementation, eg., learning rate scheduling, momentum scheduling, weight initialization scheme, etc. Because the purpose of this work is to investigate the effects of the approximation on the performance of the network, less time was spent on finding hyperparameters that would yield state-of-the-art performance on MNIST.\"}" ] }
vz8AumxkAfz5U
Revisiting Natural Gradient for Deep Networks
[ "Razvan Pascanu", "Yoshua Bengio" ]
The aim of this paper is three-fold. First we show that Hessian-Free (Martens, 2010) and Krylov Subspace Descent (Vinyals and Povey, 2012) can be described as implementations of natural gradient descent due to their use of the extended Gauss-Newton approximation of the Hessian. Secondly we re-derive natural gradient from basic principles, contrasting the difference between two versions of the algorithm found in the neural network literature, as well as highlighting a few differences between natural gradient and typical second order methods. Lastly we show empirically that natural gradient can be robust to overfitting and particularly it can be robust to the order in which the training data is presented to the model.
[ "natural gradient", "deep networks", "robust", "aim", "first", "martens", "subspace descent", "vinyals", "povey", "implementations" ]
submitted, no decision
https://openreview.net/pdf?id=vz8AumxkAfz5U
https://openreview.net/forum?id=vz8AumxkAfz5U
ICLR.cc/2014/conference
2014
{ "note_id": [ "hTJQT_z-3SR-I", "TfmU-xyHU5T9k", "OZaGO59P2dOPM", "kj-njtQse09Fg", "wKQ2KPGos6Xxb", "GoROGWWS59GLu", "eEvhY_ufacEkA", "3NcexfRXwZ3XL", "8-oWcSCRpb8Za" ], "note_type": [ "review", "comment", "comment", "comment", "review", "review", "review", "review", "review" ], "note_created": [ 1392357120000, 1392751440000, 1392751380000, 1392751560000, 1390039800000, 1390330860000, 1392242400000, 1390504860000, 1390441380000 ], "note_signatures": [ [ "anonymous reviewer a88c" ], [ "Razvan Pascanu" ], [ "Razvan Pascanu" ], [ "Razvan Pascanu" ], [ "Daniel Povey" ], [ "Razvan Pascanu" ], [ "anonymous reviewer 8445" ], [ "Razvan Pascanu" ], [ "anonymous reviewer 98fe" ] ], "structured_content_str": [ "{\"title\": \"review of Revisiting Natural Gradient for Deep Networks\", \"review\": \"This paper looks at some connections between the natural gradient and updates based on the Gauss-Newton matrix which have appeared in various recent optimization algorithms for deep networks. Some of these connections appear to be already mapped out, but the paper presents a unified picture and fills in additional details. There are some interesting insights about how this interpretation can allow unlabelled data to be used in truncated Newton style methods like Hessian-free optimization (HF).\\n\\nLater in the paper connections between Krylov Subspace Descent (KSD) and natural conjugate gradient algorithms, and a new algorithm is proposed which tries to combine the supposed advantages of several of these approaches. This algorithm, which is a potentially interesting twist on HF that involves taking the previous update and the new proposed update (which will be an approximation of the natural gradient) and jointly optimizing over both of these. However, I'm not convinced these is a better thing to do than just initialize CG from using previous iterations. While the experiments did demonstrate a distinct advantage to doing it this way in the setting involving no preconditioning, I remain skeptical. \\n\\nOverall I think this is a solid paper, and should be accepted. However, I really want the authors to address my various concerns before I can strongly recommend it.\", \"detailed_comments\": \"\", \"page_2\": \"What do you mean by 'the change induced in our model is constant'? Natural gradient descent type algorithms can use dynamically changing step sizes. Did you mean to say, 'some given value'?\\n\\nMoreover, this statement doesn't distinguish natural gradient descent from gradient descent. Gradient descent can also be thought of as (approximately) minimizing L w.r.t. some metric of change. It may happen to be a parametrization-dependent metric, but the statement is still valid.\\n\\nMoreover, eqn. 2 isn't even what happens with algorithms that use the natural gradient. They don't minimize L w.r.t. to some target change in KL, since this is typically intractable. Instead, they use an approximation. I know later on you talk about how this approximation is realized, but the point is that algorithms that use the natural gradient are not *defined* in terms of eqn. 2. They are *defined* in terms of the iteration which can be interpreted as an approximation of this.\", \"page_4\": \"I would suggest you call it 'Hessian-free optimization'. It sounds weird to call it just 'Hessian-free', despite the lack of O in the commonly used acronym.\", \"page_6\": \"I'm a bit confused about what is going on in section 6. 
In particular, what is being said that is particular to the Fisher information matrix interpretation of G? Any relationship of KSD to kind of non-linear CG type algorithm seems to be a completely different issue (and one which is not really explored in this section anyway).\", \"page_7\": \"You talk about using the same data to estimate the gradient and curvature/metric matrix introducing a bias.\\n\\nHowever, this is going to be true for *any* choice of partial data for these computation. In particular, your suggestion of taking independent samples for both will NOT make the estimate unbiased. This is because the inverse of an unbiased estimator is not unbiased estimator of the inverse, and you need to take inverses of the matrix. Thus this doesn't address the biasedness problem, it just gives a different biased result.\\n\\nMoreover, there are intuitive and technical reasons not to use different data for these estimates. See the article 'Training Deep and Recurrent Neural Networks with Hessian-Free Optimization', section 12.1.\\n\\nDespite the potential issues for optimization, I think it is nonetheless an interesting observation that unlabelled data can be used to compute G in situations where there is a lot more of it than labelled data. But keep in mind that, in the end, the optimization is driven by the gradient and not by G, and as long as the objective function is convex, this can't possibly help the overfitting problem. Insofar as it seems to help for non-convex objectives, it is a bit mysterious. It would be good to understand what is going on here, although I suspect that a theoretical account of this phenomenon would be nearly impossible to give due to the complexity of all of the pieces involved.\", \"page_8\": \"Why even run these variance experiments with these different chunks/segments at all? Why not just take multiple runs with some fixed dataset and observe the variance? What does this cross-validation style chunking actually add to the experiment?\", \"figure_2\": \"In figure 8, what is being plotted on the x-axis? Is that the segment index? Why should this quantity be monotonic in the index, which seems completely arbitrary due to the presumably random assignment of cases to segments?\", \"page_9\": \"Typo: 'Eucladian'\", \"page_10\": \"If the baseline NGD is basically just HF, then it seems to be quite severely underperforming here. In particular, an error of about 0.8 after about 10^4.5 s = 9 hours is far far too slow for that dataset given that you are using a GPU. Even if measured in iterations, 316 is way too many. On the other hand, the red curve is in the figures is consistent with what I've seen from the standard HF method.\\n\\nPerhaps this is due to lack of preconditioning. However, why not use this in the experiments? This is very easy to implement, and would make your results more directly comparable to previous ones.\\n\\nI suppose the fact that NCG-L is performing as well as it is without preconditioning is impressive, but something doesn't sit right with me about this. I would be really interested to hear what happens if you incorporate preconditioning into your scheme.\"}", "{\"reply\": \"Thank you for your comments. We have carefully looked at each one and merged them into the new version of the paper. The new version of the paper (version 7) is online. Allow us to answer your concerns.\\n\\nWith respect to the weak experimental section, we understand the reviewer's concern. 
Indeed we tried not to make big claims about the new proposed algorithm (NCG) or even to suggest that it is better than a properly implemented and fine-tuned Hessian-Free Optimization. \\n\\nWe regard our work as a discussion on the theme of natural gradient. We believe that we provide an interesting new perspective on some optimization algorithms that were recently proposed for deep learning. We believe that understanding HF or KSD as natural gradient algorithms can be useful, as it offers new ways of extending or analysing these algorithms. Specifically, e.g., natural gradient is a first order method (w.r.t. the function that we try to minimize), meaning that we can now think of how to also incorporate second order information into the algorithm. This is something that has been done successfully for TONGA. Our proposed algorithm is in the paper as an example of how this can be done. Specifically, the role of section 9 is to show that our theoretical analysis is not useless because, for example, it suggests algorithms such as this one. We do not claim that our way of introducing second order information in natural gradient is optimal, or that there are no issues with the algorithm we proposed. We are actually intending to explore alternative ways of introducing second order information to natural gradient, and believe that our proposed algorithm might be quite inefficient at doing so. However, as it stands, it does provide proof that our theoretical analysis is not useless.\\n\\nWe believe that disentangling TONGA from Amari's original natural gradient is also a useful contribution. While in general these two algorithms are probably not confused with each other, we have seen instances (within the deep learning literature) where they are. For this reason we opted, e.g., to use the name TONGA for the algorithm proposed by Le Roux and Bengio (even though originally it was also called natural gradient).\\n\\nLastly we provide some hypotheses about additional properties (beyond the list given in section 3) which we think make natural gradient a very promising learning algorithm in general. We have (as you pointed out) validated these hypotheses on a single dataset. However, even only as hypotheses, we think such observations are interesting. There is also evidence in the literature pointing to the validity of some of our hypotheses. For example, in the KSD paper it is mentioned that using a different minibatch for computing the gradient from the one used for computing the metric or running BFGS is useful. The paper however does not use the same explanation for this phenomenon as the one we are providing.\\n\\nWe believe that, as it stands, given all the notational fixes and details proposed by the reviewers, our work can be useful to provide a few starting points for further work investigating such algorithms, by giving a new perspective and trying to connect many papers in the literature that do not seem particularly aware of each other.\", \"some_more_detailed_replies_of_your_comments_are_as_follows\": [\"* Regarding the paper by Mizutani and Demmel. Thank you for the citation, we added it to our reference list. We were, unfortunately, not aware of this work. \\n\\nIndeed our paper is deep learning centric. We do try to cite, as much as we are aware of, the work done on this subject in other fields.
Thanks for helping us provide better coverage of the literature.\\n\\nThis is one reason we opted to add Deep Learning to the title, such that the work is not confused with a generic review of natural gradient (or related algorithms). We are aware that these algorithms have a long history and that they are being successfully applied in different subfields, for example natural gradient in reinforcement learning. We have tried to do as good a literature review as we could, and in our opinion the paper does a decent (though probably not perfect) job at citing work that is not specific to deep learning. \\n\\nThe paper we cite is of particular interest to us because of the way the Krylov subspace is altered. We try to explore this observation to provide an approach that incorporates second order information alongside the natural gradient (which relies on the manifold curvature).\", \"* Regarding the reason why using the same examples for the metric (and gradient) could hurt learning (when the minibatch is relatively small), we have extended our comments to better emphasize our understanding of this phenomenon. \\n\\nOur intuition is that using the same training examples for both the Fisher matrix and gradient (when the batch is not large enough to perfectly match the statistics of the dataset) can result in overfitting the current minibatch. Think, for example, of some particular deformation of most of the examples in a minibatch that appears correlated with the class that one has to predict (though on a global scale this specific deformation is not predictive of the output classes). The gradient on the minibatch will point towards also learning this feature. Normally SGD is not affected by such artifacts because one takes only a small step in the direction of the gradient and, in time, this step is cancelled by updates from other minibatches. However, natural gradient can take large steps, because it rescales the gradient according to the metric. Specifically, if, for those examples that we have, the metric agrees that moving in this direction results in a steady change of the probability density function, it will take a large step. This means the model will waste capacity learning this particular feature, which will cause it to make mistakes on subsequent training examples. \", \"* Regarding Figure 1. We are currently in the process of re-running these experiments. We agree that the choice of minibatch size seems arbitrary and confusing. The reason was that, when using unlabeled data, we wanted to reduce the noise introduced by unlabeled examples (which were inherently noisier than the training examples) by increasing the minibatch size. We kept this larger minibatch size also when using different labeled examples.\", \"* We fixed the label of Figure 1 (top/bottom was replaced with left/right)\\n\\nRegarding Figure 2, we extended our explanation of the results and of the hypothesis we want to test. We also switched to log scale. \\n\\nThe goal of our experiment is not to show that natural gradient has lower relative variance of early examples compared to later examples. The goal of the experiment is to show overall lower variance. That means that, regardless of the order of inputs, natural gradient tends to behave more consistently, while one might get very different models when using SGD.\"]}", "{\"reply\": [\"Thank you again for all the comments. We have been through all of them and incorporated them into the new version of the article. The article is online (ver 7).
We believe that the text is clearer now and meets ICLR quality standards.\", \"Let us go again through your points and address them.\", \"We corrected the phrase 'density probability function'\", \"Regarding p_\\theta(z) and what it means for DNNs. The concept of natural gradient is not necessarily tied to DNNs. In section 2 we introduce the generic idea, both in the generative (unconditional) case and the conditional case (which includes DNNs). Compared to the standard approach of explaining natural gradient we try to avoid relying on the manifold description, but rather present the algorithm in a different light. We use a constraint optimization at each step to find the right descent direction. While we believe this explanation is quite useful and a good contribution of the paper, it is not really novel. [1] briefly talks about it and [2] as well. We also described this idea in [3]. Section 2.1 relies specifically on DNNs.\", \"[1] Amari, S., 'Natural gradient works efficiently in learning', Neural Computations, 1998\", \"[2] Heskes, T. 'On natural learning and pruning in multilayered perceptrons', Neural Computation 2000\", \"[3] Desjardins, G., Pascanu, R., Courville, A. and Bengio, Y 'Metric-Free Natural Gradient for Joint-Training of Boltzmann Machines', ICLR 2013}\", \"We have included the definition of the Fisher Information Matrix (equation 1 in the new version of the paper)\", \"We have extended the first paragraph of section 2.1 into 2 paragraphs detailing what we mean by the probabilistic interpretation of a deep model. We have also pointed to chapters in Bishop were similar interpretations are introduced.\", \"We corrected the phrase 'computed by model'\", \"We have added a description of the Levenberg-Marquardt heuristic that we mention in Equation 15, now Equation 16. The heuristic comes from a trust region approach where we check how much we trust our approximation of the function and decide how much we need to damp the metric or Hessian.\", \"Corrected the typos 'betweent', 'Eucladian', 'batch model', 'Riebiere'\", \"Regarding how expensive it is to invert the metric (versus the other computations involved in a natural gradient descent step). We have not actually timed separately how long it takes to invert the matrix versus everything else. However, with no doubt, most of the time is spent in inverting the matrix. From empirical observations, for natural gradient descent, inverting the matrix can be ( substantially so) more than 90% of the time spent in each step. Regarding the number of steps, this varies from iteration to iteration, with a maximum of 250 steps. MinRes runs as long as the error we have at solving the linear system is larger then some value which is part of our hyper-parameters (set to 5e-4 in all our experiments). We clarified these points in the text.\", \"Corrected 'be obtain' as well as the notation sum_i^o and the phrase 'certain extend'\", \"We made a note on the abuse of notation in Eq.27 (now eq. 28). Indeed\", \"rac{1}{mathbf{y}} refers to applying the division element-wise. We also expressed the metric for the softmax activation in the same form as the other, namely J [something] J^T. We edited equations 28 and 29.\", \"As in the main paper, we have extended in the appendix as well what is the Levenberg-Marquardt heuristic that we try to mimic.\"]}", "{\"reply\": [\"Thank you for your detailed comments. We went through all of them and changed the paper accordingly. The new version of the paper (version 7) is online. 
Let us answer your comments and show how we addressed your concerns.\", \"Regarding the notation on page 2, namely the use of R^P \\to (R^N \\to [0, inf)) and p_\\theta(z) = F(\\theta). We have explicitly spelled out the family F (instead of just providing its domain and codomain). The notation we provided is typical for higher order functions (for example when currying a function http://en.wikipedia.org/wiki/Currying). We agree that it is an uncommon notation (or an abuse of notation) and decided to remove it as suggested. The notation $p_\\theta(z) = F(\\theta)$ was fixed to the suggested form. Yes that is what we meant.\", \"Regarding how a matrix-valued function of theta defines a metric on the space, we added a few sentences about it. However, we want to emphasize that we only briefly mention the manifold interpretation of natural gradient to make the link with Amari's work. We believe that the definition of the algorithm given in Eq. 2 (now Eq. 3) is actually more intuitive for the target audience of this work, and this is why we chose this way of explaining it.\", \"Regarding our choice of using row vectors for gradients. There is no globally accepted convention with regard to the representation of gradient, either as column vectors or row vectors. A bit of search revealed that is not even a cultural distinction. http://www.cs.huji.ac.il/~csip/tirgul3_derivatives.pdf defines gradient as being a row vectors. In the computer graphics community the gradient is used as a row vector as well, e.g. http://chrishecker.com/Column_vs_row_vectors. There are many other examples. http://en.wikipedia.org/wiki/Matrix_calculus also mentions this issue. I could not find any information stating what is the accepted convention for machine learning (or even statistics). If there is one, and it is in terms of column vectors rather then row vectors we're happy to change the notation. Due to my personal background, row vectors seems a more intuitive choice.\", \"Regarding the use of the term natural gradient to denote the algorithm, we have fixed it replacing it everywhere with natural gradient descent. Similarly Hessian-Free was replaced with Hessian-Free Optimization. Thank you for pointing this out.\", \"We replaced the phrase 'moving in the direction' with 'moving some given distance in the direction' as suggested.\", \"Regarding the phrase 'the change induced in our model is constant'. Yes, we mean to say it is some given value. Specifically, we mean constant with respect to the constrained optimization that we solve for that given step, but it does not have to be globally constant (between iterations of natural gradient). We changed the text to the proposed formulation\", \"We also agree that the sentence is not sufficiently strong. We reformulated to `such that the change induced in our model (in the KL sense)...`. We hope that with this formulation readers will become aware that we refer to the KL constraint in equation 2 (now 3).\", \"Regarding the fact that eqn 2 (now eqn 3) is not an exact depiction of what the algorithm does. We understand your point of view and tried to incorporated in the current text. We wrote 'Specifically we look for $Delta \\theta$ that minimizes a first order Taylor expansion of $mathcal{L}$ when the second order Taylor series of the KL-divergece between $p_{\\theta}$ .. '. 
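To make that constrained formulation concrete, here is a small self-contained numpy sketch of the resulting update for a toy softmax model; forming the Fisher matrix explicitly and solving the damped system with a dense solver are simplifications for the sake of illustration (in the paper the metric is never built explicitly and the system is only solved approximately, with MinRes):
```python
import numpy as np

rng = np.random.default_rng(0)
N, D, C = 256, 20, 5                      # minibatch size, input dim, classes (toy values)
X = rng.standard_normal((N, D))
y = rng.integers(0, C, size=N)
W = 0.01 * rng.standard_normal((D, C))    # toy softmax-regression "network"

def softmax(Z):
    Z = Z - Z.max(axis=1, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

def nll_grad(W, X, y):
    """Gradient of the average negative log-likelihood w.r.t. the parameters."""
    P = softmax(X @ W)
    P[np.arange(len(y)), y] -= 1.0
    return (X.T @ P / len(y)).ravel()

def fisher(W, X):
    """Monte Carlo Fisher: targets are sampled from the model itself, so only the
    inputs are needed, which is why unlabeled data can be used for the metric."""
    P = softmax(X @ W)
    F = np.zeros((W.size, W.size))
    for x, p in zip(X, P):
        t = rng.choice(C, p=p)            # draw a label from the model's own distribution
        p_t = p.copy()
        p_t[t] -= 1.0
        g = np.outer(x, p_t).ravel()      # per-example gradient of -log p(t | x)
        F += np.outer(g, g)
    return F / len(X)

grad = nll_grad(W, X, y)                  # gradient on the (labeled) minibatch
F = fisher(W, X)                          # metric; a different or unlabeled batch could be used here
damping = 1e-2                            # plays the role of the damping discussed in the replies
step = np.linalg.solve(F + damping * np.eye(W.size), grad)
W = W - 0.1 * step.reshape(W.shape)       # one natural gradient descent step
```
Replacing the explicit solve with a truncated MinRes run driven by metric-vector products is, roughly, how the large-scale version described in these replies proceeds.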
We do want to emphasize that viewing equation 2 (now 3) as what natural gradient descent is attempting to do is a valid justification of the algorithm, in the same sense that BFGS, truncated Newton, etc are trying to take a Newton step.\", \"Regarding the term 'empirical Fisher'. You are right that this matrix used by Le Roux et al. is called empirical Fisher matrix. We believe this term to most likely create confusion and we purposely refrained from using it. We did this for the same reason we used the word TONGA to denote the algorithm introduced by Le Roux and others, while the original name of the algorithm was natural gradient. We believe this algorithm to be distinct from the one proposed by Amari and hope that our paper makes this observation clear.\", \"Page 6, eqn 16. You are right, and we change the text to say it approximates the heuristic used by Martens. We also spelled out the underlying assumption for these two heuristics to be the same (namely, as you pointed out, that CG converges). Thank you for making this point.\", \"We have fixed the citation for 'Structural damping'. Thank you.\", \"Regarding the fix coefficient of 1 for structural damping. We meant that by trying to put a constraint on the 2nd order Taylor approximation of the KL wrt the joint $p(t,h |x)$, one gets the structural damping term, with a fixed coefficient of 1. This does not refer to the original paper (and structural damping) but more to a shortcoming of trying to get this regularization in the natural gradient framework via using the KL wrt to the joint. In the next few lines of the text, we provide, however, another approach of constructing structural damping for natural gradient that lets you use any coefficient you want. Specifically we use multiple constraints in the constrained optimization that we solve at each iteration step.\", \"Regarding warm restart (or initialization of CG with the previous solution). We can see your viewpoint. We tried to formulate the paragraph differently to not downplay the meaning of warm restart. Now it does mean something different in the case of KSD as compared to HF's warm restart. Specifically by using the previous direction as an initialization of CG we change the order and the conjugate directions that we visit, but CG solves the same problem (namely inverting the metric matrix). By adding the extra term to the Krylov subspace one can decompose what the algorithm is doing as shown in equation 19 (now 20), meaning that we do something similar to nonlinear conjugate gradient, taking directions locally conjugate to the **Hessian** at that point. The Hessian is different from the metric matrix. We believe that there is a difference between these approaches, which might make KSD take advantage (to some degree) of the curvature of the error.\", \"Regarding section 6 and what it tries to do. Very possibly an abuse of language, but when we refer to KSD we are referring to the particular algorithm (up to implementation details) introduced in Vinyals and Povey. In this sense G is the Fisher information matrix. If we are to interpret the matrix that is being inverted as the Hessian, the section does not make as much sense anymore. What we are hinting to is that because KSD relies on the Fisher matrix, it does not actually rely on second order information of the error (but only on first order gradients). 
This is true, as the Gauss Newton is mostly an outer product of gradients (with the exception of the partial Hessian in the middle which is usually diagonal and does not carry information about the error). From this perspective one could expect to gain something by also looking at the Hessian of the error. This point was put forward for example by Le Roux et al. In this section we argue that we could see KSD as doing something of this nature, because it could be understood as nonlinear conjugate gradient (which looks at locally conjugate directions with respect to the Hessian), just that instead of using the gradients we use the natural gradients.\", \"We have re-worked this section to make these things apparent.\", \"Regarding page 7, where we talk about a bias introduced by using the same samples for the metric and gradient. We reformulated the text of this section, and avoided using the word bias. What we are hinting towards is that one can limit (to a certain extend) the amount of overfitting of the current minibatch that could happen at each step. This is the reformulation of the paragraph:\", \"Our intuition is that we are seeing the model overfitting, at each step, the current training minibatch. At each step we compute the gradient and the metric on the same example. There can be, within this minibatch, some direction in which we could overfit some specific property of these examples. Because we only look at how the model changes at these points when computing the metric, all gradients will agree with this direction. Specifically, the model believes that moving in this direction will lead to steady change in the KL (the KL looks at the covariance between the gradients from the output layer to the parameters, which will be low in the picked direction) and the model will believe it can take a large step, learning this specific feature. However it is not useful for generalization, nor is it useful for the other training examples (e.g. if the particular deformation is not actually correlated with the classes it needs to predict). This means that on subsequent training examples the model will under-perform, resulting in a worse overall error.\", \"On the other hand, if we use a different minibatch for the gradient and for the metric, it is less likely that the same particular feature to be present in both the set of examples used for the gradient and those used for the metric. So either the gradient will not point in said direction (as the feature is not present in the gradient), or, if the feature is present in the gradient, it may not be present in the examples used for computing the metric. That would lead to a larger variance, and hence the model is less likely to take a large step in this direction.\", \"Regarding section 12.1 of 'Training Deep and Recurrent Neural Networks with Hessian-Free Optimization'.The technical result in section 12.1 makes the underlying assumption that the minibatch we use to estimate these two quantities is large and matches perfectly the statistics of the data. In some sense this is a standard view when doing optimization. That is, optimizing $M(delta)$ to its minimum, based on the current minibatch is the optimum strategy.\", \"We argue that in a stochastic regime (where we think our intuition applies) and specifically in a stochastic regime with moderate sized minibatches, this is far from true. 
The minimum of $M$ might lead to learning artifacts of the current minibatch that will hurt both generalization and even the global training error.\\n\\nIn particular, if we rephrase it in a similar wording, the gradient can be seen as the linear reward. It might be that, from the point of view of those examples, the quadratic penalty is small for a certain reward, because moving in the direction $d$ seems to be beneficial for minimizing the error on the specific set of examples. However, globally, because the Taylor series is not reliable (in particular because we have not considered a sufficient amount of data), the direction $d$ should have a large penalty. Our intuition is that the likelihood of obtaining the same directions $d$ with large linear reward and low quadratic penalty when using different batches of data is lower, due to the argument above.\", \"* The main argument brought against using different batches of data is that the metric might be singular. This is the reason why we used the wording 'moderately sized minibatch'. The likelihood of dealing with a singular value (given the maximal number of steps one takes during CG) decreases quite a bit with a moderately sized minibatch. Additionally, the fact that there might exist a direction d such that $g^Td < 0$ and $d^TBd = 0$ can be understood as stating that we do not know how the model changes (in the KL-sense) when we move along d. In practice we address this in two ways. The first one is damping, which is a prior on how much the probability density function changes (in the KL sense) if you move in any direction. The second approach we used is to rely on MinRes, which is supposed to be able to handle singular matrices and return the minimal-norm solution of the system of linear equations. Overall we have not seen the algorithm suffer from this, and similar observations were reported by Vinyals and Povey.\", \"* Regarding the observation that unlabeled data cannot help with overfitting for a convex problem, that is probably true. Generically speaking, the standard optimization paradigm might be misleading here, as this is an effect specific to learning (and not to optimization).\", \"* We believe that the best way of building intuition about what is going on is to think of the algorithm as trying to move in the parameter space while enforcing a certain amount of change to happen in the behavior of the model (a certain change in the KL divergence of the probability density function). As pointed out in the text, the constraint helps to not get stuck, e.g., near saddle points (i.e. some change has to happen in the behavior of the model), and also forces learning to temper itself. Now the ability of doing this highly depends on how well the model can measure how much it changes. We can use unlabeled data to get a better measure of how fast the density function changes everywhere. While not exactly correct, you could think of it as measuring how fast the error increases or decreases at points that are different from those in the training set, and hence the model is able to better temper itself.\", \"* Because the problem is non-convex, the direction of the gradient can change quite a bit, and not jumping over large parameter regions might have a big impact on the final performance. For example, we might be able (due to this tempering) to follow narrow valleys over which (without unlabeled data) we would otherwise not be able to proceed towards better solutions.\", \"* Regarding the question about the pseudocode on page 8.
Here are some\"], \"answers\": [\"Where are these 'newly sampled examples' coming from?\", \"The dataset is larger than the total number of examples we need for all the runs (the dataset is obtained by adding distortions and hence has theoretically an infinite amount of samples). So no sample is reused.\", \"What is the purpose of the first split into 'two large chunks'?\", \"If we would have considered the original data and split it directly into 10 we would have not had enough generated examples to carry out the experiment. Specifically, while the original dataset was obtained by adding distortions to the NIST dataset, the original dataset comes with a set of 800M sampled datapoints. We relied on these and did not attempt to generate more data. Because we wanted to look at our model after a sufficient amount of learning, we decided to have the fix second chunk of data that we go through after we reordered the first half. We wanted to simulate online learning for this experiment and not revisit examples during training.\", \"Are the 'heldout examples' from the segment that was 'replaced'?\", \"No. The heldout examples are never used for training in any of the runs, and the same heldout examples are used for all the different runs\", \"By 'output of the trained model' you mean the output of the network or the objective functions scores for different cases?\", \"The output of the network. We believe using the score given by the network to the different classes would not result in a very different graph, though we have not run that experiment.\", \"Note that we've changed the text to make this observation more apparent.\", \"Regarding the x-axis of Figure 2. The x-axis is the index of the segment that was replaced. The curve is monotonic because early examples always have a larger influence on how the model behaves compared to older examples. One could think of the early stages of learning as picking the basin of attraction of the local minima in which the model will end. As such, this part will cause the most variance in the output of the model. The same observation was done by Erhan et al. when running a very similar experiment (as noted in the paper, we used the same protocol, and just increased the dataset size).\", \"Regarding running NGD in batch mode explaining Fig 2. No. Both NGD and SGD used a minibatch of the same size (512 examples). To account for the possibility of the metric being singular, we also used damping for NGD. Since both method processed the same amount of data per step, we believe that the comparison is fair.\", \"Regarding how to choose parameters for the experiment to be fair. We have done roughly what the reviewer is suggesting. By amount of time we used number of steps, and both methods reached roughly the same error of 49.8% after the same amount of steps as indicated in the text and on the plot.\", \"Overall we have rephrased this section of the paper. We state the hypothesis we want to test before explaining the experiment. We also emphasis that we talk about the overall variance, and moved over some of the details from the appendix (like minibatch size for both algorithms).\", \"Regarding just simply running several trials with the same data and use this to measure overall variance. The question we are trying to answer is how does the order of the training examples affect training (not the initialization of the parameters). 
Specifically we used the same seed for any random number generator through the code, to make sure that we initialize the model in the same way. The only difference between the runs was using different data examples from the same distribution. The purpose of this experiment is to investigate an worrisome observation made earlier by Erhan et al. regarding the importance of the order of the training examples for SGD. Specifically we believe that, by apparently being more robust, NGD may suffer less when run on a large non perfectly homogeneous dataset, by making the question of which examples it needs to see first less important. SGD might be affected more, resulting in models with different behaviors depending to how the data is showed to the model\", \"Regarding NCG being a 2nd order method or not. It is a matter of semantics and we agree with the reviewer that from this perspective nonlinear conjugate gradient might not be a 2nd-order method. However, the premise of nonlinear conjugate gradient is to move along directions that are locally conjugate with respect to the Hessian and in some sense to rely on the curvature information for speeding up convergence. We do rely on this to argue that our modification to NGD, natural conjugate gradient, takes into account, to some extend, the curvature information of the error by moving in directions that are locally conjugate the Hessian of the error. In this sense we do incorporate, to some degree, second order information alongside the KL curvature given by the natural gradient direction. We do not intend to claim that there might not be a more proper way of injecting second error information of the error in the algorithm.\", \"Regarding the wording 'real conjugate direction'. We do a fair amount of language abuse in this section and rectified this in our edits. We mean that even locally the directions might not be conjugate to the curvature of the function if we are to adopt the manifold perspective. This is because, if we consider the Riemannian manifold structure, the previous natural gradient direction and the new one are, in all likelihood, not compatible. That is because the Fisher matrix at step $\\theta_{t-1}$ is different from the Fisher at $\\theta_t$ and so the two vector belong to different spaces. One would have to transport one of the vectors in the space of the other in order to be able to compare the two vectors (and compute the dot product between them).\", \"Regarding gaining some kind of conjugacy to the immediately previous update direction w.t.y. to the current value of G. We are not trying to be conjugate to the metric matrix, but rather to the curvature of the error (when the parameters are assumed to respect the underlying Riemannian structure). In some sense the intuition that you are putting forward is very close to what we are doing.\", \"To make a bit more clear the difference between what we are trying to do and normal nonlinear conjugate gradient we put forward the following observations. Arguably, even if the error is quadratic, unless the metric is constant through out the underlying Riemannian space, the directions we will pick will still not be conjugate to each other globally. That is because we are not looking at the curvature of the error as a function of the parameter $\\theta$, but the error as a function of the density probability function $p_\\theta(t|x)$ and therefore this curvature will not be constant for different values of \\theta. 
Just think that we identify $\\theta$ with the density probability function that it represents, and now we contracting or extending the R^P space of theta to mimic how the KL between of this functions changes. This means that while the error is quadratic in terms of $\\theta$, it might not be so in terms of the functions $p_\\theta(t|x)$. Of course what we are stating here is a generic argument. Given the specific model we use, and specific form of $p_\\theta(t|x)$, assuming the error is quadratic might imply the metric to be constant everywhere. We are just arguing that in general one could construct examples where this is not true.\", \"About the proof done on page 9 being obvious. We do not think we prove something that is counter-intuitive and therefore in some sense not obvious. We do however think that going through the mechanics of this little sketch of a proof can be helpful. But we agree, a proof can be provided only in terms of these kind of high level arguments.\", \"We fixed the phrase 'we can show' to 'As we will show below'\", \"Fix the Typo 'Eucledian'\", \"We are unsure in what sense the equations on top of page 10 are not consistent.\", \"Fixed the typo 'batch model'\", \"Regarding the hyper-parameters used. Most of the details are in the appendix, because we felt the paper was already much longer than the suggested length. We use a maximum of 250 iterations and stop CG (MinRes actually) when the error is below 5e-4. The damping coefficient was initially set to 3., and adapted based on the Levenberg-Marquardt heuristic (the natural gradient version proposed in eqn 16/17).\", \"Regarding our timings. Indeed our code is slower than Martens' Hessian-Free code. There can be several issues of why this is so. We already pointed out we do not use preconditioning, but this is not the only difference between our code and the one provided by Martens. We rely on MinRes instead of linear CG and we have not altered the stopping criterion on MinRes (as Martens did for CG to not waste computations). We do not backtrack through the steps of Minres in the case when moving in a certain direction results in an increase of the error. Probably there are some other fine-grained differences (for e.g. our heuristic only matches the one used by HF when MinRes would converge) that could impact negatively our code. MinRes itself is, in some sense, more expensive than CG. We would like to finally make a note, that to get 0.8 it actually takes 3.5h not 9h. Recovering these values directly from the graph might be harder due to the logscale. The NGD run was stopped after 5h when it had an error of 0.65. Being that all algorithms share the same pipeline (and MinRes) we believe that our benchmark is in some sense valid.\", \"Regarding not using preconditioning. There is no reason for not using preconditioning. We do intend to run those experiments as well. We also do not exclude NGD to come on top once all these enhancements are added to all algorithms. However we do believe that technically, NCG, by moving on directions that are locally conjugate the Hessian of the error with respect to the functional manifold, has an advantage over NGD. This advantage might be lost by either the approximation that go into the implementation, or might not be helpful for the specific problem we are addressing. However we see our proposed algorithm at a first attempt at integrating some second order information in natural gradient. There are many other choices, and we are currently exploring some of these choices. 
Specifically, we are investigating what kind of information does such a Hessian matrix carry, information that might not be present in the Gauss-Newton. The answer of such question could help designing the right experiment.\", \"We hope to be able to provide such results soon, within the next couple of weeks. It is an interesting question that we do want to answer. Specifically, as future work, we are interested in understanding what kind of information is present in a Hessian that is not present in the Gauss-Newton (or Fisher) matrix and when is this information useful. We believe there might be certain kinds of plateau in the error surface that do not translate in plateau of the KL-divergence.\"]}", "{\"review\": \"Thanks for noticing our paper!\", \"noticed_a_couple_of_typos\": \"eucladian -> euclidean, model->mode\"}", "{\"review\": \"Thank you for pointing out the typos. I really liked your KSD paper :).\"}", "{\"title\": \"review of Revisiting Natural Gradient for Deep Networks\", \"review\": \"This paper explores the link between natural gradient and techniques such as Hessian-free or Krylov subspace descent. It then proposes a novel way of computing the new search direction and experimentally shows its value.\\n\\nI appreciate getting additional interpretations of these learning techniques and this is a definite value of the paper. The experiments, on the other hand, are weak and bring little value.\", \"additional_comments\": [\"To my knowledge, Krylov subspace descent is much older than 2012. It is for instance mentioned in Numerical Optimization (Nocedal) or 'Iterative scaled trust-region learning in Krylov subspaces via Peralmutter's implicit sparse Hessian-vector multiply' (Mizutani and Demmel). As it is, the literature looks very deep learning centric.\", \"I appreciate the comment about using different examples for the computation of the Fisher information matrix and the gradient. However, it would be interesting to explore why using the same examples also hurts the training error. This also goes against the fact that using other examples from the training set yields a lower training error than using examples from the unlabeled set.\", \"Figure 1: why do you use a minibatch of size 256 in the first case and a mainibatch of size 384 in the second? This introduces an unnecessary discrepancy.\", \"Figure 1: 'top' and 'bottom' should be 'left' and 'right'.\", \"Figure 2 is not very convincing as the red curve seems to drop really low, much lower than the blue one. Thus, you prove that NGD introduces less variance than MSGD, but you do not prove that the ratio of the importance of each example is different in both cases. Could you use a log scale instead?\", \"Finally, the experimental section is weak. The only 'deep' part, which appears prominently in the title, consists of an experiment ran on one deep model. This is a bit short to provide a convincing argument about the proposed method.\"], \"pros\": \"nice interpretation of natural gradient and connections with existing algorithm\", \"cons\": \"Weak experimental section\\nThe paper could be better written\\nIt feels a bit too 'deep' centric to me, whether in the references or the title despite its little influence on the actual paper.\"}", "{\"review\": \"Thank you for the review. All your points are well taken. The only thing I can say in my defense is that I was worried of the paper being already too long, but you are right. It is important to make the paper clear about what it tries to say. 
Needless to say I will fix all the typos pointed out (thank you for spotting them).\\n\\nI will also try to add more on the effect of the minibatch size and number of iterations for inverting G. What I can say for now, based on sporadic observations that I've made is that the number of iterations is not that important as long as it is larger than some threshold. The threshold does depend on both the data and model that you use (basically on the error surface). \\n\\n\\nRegarding the minibatch size, my claim is that as long as you have a learning rate in front of your gradient, and you are willing to damp your metric (some of these things might result in slower convergence) you can go to much smaller minibatches. For the NNCG algorithm, going to smaller minibatches seems to hurt. \\n\\nMy understanding of this behaviour is as follows. NNCG (or even NGD with a line search) tries to move at each step as much as it can to minimize the cost on that minibatch. If the minibatch is not sufficiently large, what happens is that you end up overfitting the current minibatch, resulting in overall worse training error and validation error. Doing the linesearch on a different minibatch than the one used to compute the gradient helps to some degree (makes this overfitting to happen less, as the gradient might not point exactly in the directions required to overfit the minibatch), it is important to have a sufficiently large minibatch to have the same statistics as the whole training set (if we only care about optimization). Some of these ideas are also presented in section 7, but I agree that they are in a very condense manner. I will try to expand on this idea, also providing more evidence (extra experiments). \\n\\nI will write back once I've updated the paper with a detail list of changes. It might take me a few days though (especially with the ICML deadline in the background).\"}", "{\"title\": \"review of Revisiting Natural Gradient for Deep Networks\", \"review\": \"The paper contains some interesting analysis showing how certain previous methods\\ncan be viewed as implementations of Natural Gradient Descent.\\n\\nThe paper is novel and is on an important topic, I believe. There are parts where\\nI believe the paper could be clearer (see detailed comments below)- parts could\\nbe expanded upon. I believe the ICLR length guidelines are not strict.\\n\\nThere are also some interesting experiments that compare NG (and its variants)\\nto SGD. (it would be nice to have some more details on where the time is taken\\nin NG, and what happens when you change the size of the subset used to compute\\nthe metric, and the number of iterations used in approximate inversion of G).\\n\\nDetailed comments follow.\\n\\n\\n-----------\\ndensity probability function -> probability density function\\n\\nnot sure how p_{\\theta}(z) relates to DNNs-- what is z? The input features? The class-label output? Both?\\n\\nDefinition of Fisher information matrix?\\n\\nIn Sec. 2.1 when you talk about 'the probabilistic interpreation p_{\\theta}(t | x)', this is very\\nunclear. I figured out what you meant only after looking at the Appendix. In fact, the notation\\nis inherently quite unclear because for me, the canonical example of deep learning is with softmax\\noutput, and here you view t as a multinomial distribution over a single categorical variable,\\nwhich would be a scalar not a vector, so the use of bold-face t is misleading, because it's more\\nlike p_{\\theta}(y | x) where y is the discrete label. 
I think if\\nyou want people to understand this paper it would make sense to devote a paragraph or two here to\\nexplaining what you mean.\\n\\ncomputed by model -> computed by the model\\n\\n\\nAround Eq. (15), where you talk about the Levenburg-Marquardt heuristic, it was\\nvery unclear to me. I think you need to take a step back and explain what it is\\nand how you are applying it. Otherwise (and this is a risk with other parts of\\nthe paper too) there is a danger that the only people who can understand it are\\nthose who already understand the material so deeply that the paper's conclusions\\nwould already be obvious.\\n\\nbetweent -> between\\n\\nEucladian -> Euclidean\\n\\nbatch model -> batch mode\\n\\nRiebiere -> Ribiere (twice)\\n\\nin Sec. 10-- I don't think you address how many iterations you use in the solver for\\n approximately mutiplying by G^{-1}, and how much of the total time this takes.\\n\\n\\n\\n--\", \"appendix\": \"be obtain -> be obtained\\n\\nshould sum_i^o be instead sum_{i=1}^o ? (this appears more than once)\\n\\nIn the last line of Eq. 27, the notation doesn't really work, you are treating the vector y like a scalar,\\ne.g. dividing by it. This same notation appears in the main text.\\nIt might be necessary to put, say, diag({\\bf p}), and explain that p_i = 1/(y_i (1-y_i)).\\nOr maybe you can point out somewhere in the text that these equations should be interpreted\\nelementwise.\\nIn eqs 28 and 29 you should have non-bold in y_i, and also there is an odd inconsistency of notation with\\neq. 27-- you drop the J [something] J^T notation which I think is still applicable here.\\n\\ncertain extend -> certain extent\\n\\nThe text 'Following the functional manifold interpretation..' with eqs. 30 and 31 in the appendix,\\nis quite unclear. I think you are maybe assuming a deep familiarity with Martens' Hessian-Free\\npaper and/or the Levenburg-Marquardt algorithm.\"}" ] }
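The exchange above about computing the gradient and the metric on different minibatches, and about damping the metric, can be made concrete with a small sketch. This is not the authors' implementation (they never form the metric explicitly and solve the linear system with a truncated MinRes); it is a minimal NumPy illustration that assumes a hypothetical model exposing `grad_loss` and `per_example_grads`, an explicit empirical Fisher-like metric, and a fixed damping coefficient.

```python
import numpy as np

def natural_gradient_step(theta, grad_batch, metric_batch,
                          grad_loss, per_example_grads,
                          damping=3.0, lr=1.0):
    """One damped natural-gradient step using two *different* minibatches.

    grad_loss(theta, batch) -> gradient g of the training loss (length P).
    per_example_grads(theta, batch) -> J, one row per example, each row the
    gradient of log p_theta(t|x) w.r.t. theta, used only to build the metric.
    Using metric_batch != grad_batch follows the argument above: a direction
    that looks cheap (small KL change) only because of quirks of one minibatch
    is less likely to also look cheap on an independent one.
    """
    g = grad_loss(theta, grad_batch)
    J = per_example_grads(theta, metric_batch)
    F = J.T @ J / J.shape[0]                 # empirical Fisher-like metric
    F += damping * np.eye(F.shape[0])        # Levenberg-Marquardt style damping
    direction = np.linalg.solve(F, g)        # in practice: truncated MinRes / CG
    return theta - lr * direction
```

Only the split between `grad_batch` and `metric_batch` and the damping term are being illustrated; the iterative solver, the R-operator tricks and the adaptive damping schedule discussed above are deliberately left out.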
-_FVMKvxVCQo1
The return of AdaBoost.MH: multi-class Hamming trees
[ "Balázs Kégl" ]
Within the framework of AdaBoost.MH, we propose to train vector-valued decision trees to optimize the multi-class edge without reducing the multi-class problem to $K$ binary one-against-all classifications. The key element of the method is a vector-valued decision stump, factorized into an input-independent vector of length $K$ and a label-independent scalar classifier. At inner tree nodes, the label-dependent vector is discarded and the binary classifier can be used for partitioning the input space into two regions. The algorithm retains the conceptual elegance, power, and computational efficiency of binary AdaBoost. In experiments it is on par with support vector machines and with the best existing multi-class boosting algorithm AOSOLogitBoost, and it is significantly better than other known implementations of AdaBoost.MH.
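As a reading aid for the factorization in the abstract, the sketch below shows one plausible way to evaluate such a stump: a scalar confidence alpha, a label-dependent vote vector v in {-1,+1}^K, and a binary, label-independent cut phi on a single input feature, with only phi reused to route points at inner tree nodes. The names and the single-feature threshold form are illustrative assumptions, not the paper's exact notation.

```python
import numpy as np

def stump_phi(x, feature_j, threshold_b):
    """Label-independent binary part: a +1 / -1 cut on a single input feature."""
    return 1.0 if x[feature_j] >= threshold_b else -1.0

def stump_output(x, alpha, v, feature_j, threshold_b):
    """Vector-valued stump: one confidence-rated vote per class.

    alpha: scalar confidence; v: label-dependent vector in {-1, +1}^K;
    the product alpha * v * phi(x) is a length-K vector.
    """
    return alpha * np.asarray(v, dtype=float) * stump_phi(x, feature_j, threshold_b)

def route(x, feature_j, threshold_b):
    """At an inner tree node v and alpha are discarded: only the sign of phi
    decides whether x goes to the left or the right child."""
    return "left" if stump_phi(x, feature_j, threshold_b) > 0 else "right"
```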
[ "return", "vector", "hamming", "hamming trees", "framework", "decision trees", "edge", "problem", "binary", "classifications" ]
submitted, no decision
https://openreview.net/pdf?id=-_FVMKvxVCQo1
https://openreview.net/forum?id=-_FVMKvxVCQo1
ICLR.cc/2014/conference
2014
{ "note_id": [ "TYV3arI6LeT82", "J3zMNF85uKJRd", "ly7Ep3gRItHdk", "7AnJAci6tB71g", "llsYh8vU1Lhd5", "65bxfeIlx7fo1" ], "note_type": [ "review", "comment", "review", "comment", "review", "comment" ], "note_created": [ 1392031020000, 1392769320000, 1391695080000, 1392769020000, 1391935860000, 1392769320000 ], "note_signatures": [ [ "anonymous reviewer 0c59" ], [ "Balazs Kegl" ], [ "anonymous reviewer 98f6" ], [ "Balazs Kegl" ], [ "anonymous reviewer 3cb5" ], [ "Balazs Kegl" ] ], "structured_content_str": [ "{\"title\": \"review of The return of AdaBoost.MH: multi-class Hamming trees\", \"review\": \"The author presents a way to train vector-valued decision trees (Hamming trees) in the context of multi-class AdaBoost.MH, as one way of not doing K binary one-vs-all classifications. The idea is a clever factorization that allows for efficient modeling and predictions. The paper seems to be performing very thorough validation, comparisons and hyper-parameter selection, which is obviously a good thing.\\n\\nUnfortunately, I cannot speak with any authority as to whether the algorithmic contributions of this paper are substantial or not. While the results do seem to support the idea that the proposed method, AdaBoost.MH + Hamming trees, works as well or better than SVMs/other AdaBoost methods, I cannot evaluate in earnestness whether this is a very novel contribution or incremental, since recent developments in boosting are not exactly my area of expertise.\"}", "{\"reply\": \"'The main issue I found is how this algorithm scales: boosted trees are used for large scale problems, and addressing this issue is critical for significance. The authors only report large scale experiments in very vague terms, without any comparisons or training times (the authors do not report any numerical results and do not give reference. I do not know what to make for instance of the claim that it won the recent Interspeech challenge).'\\n\\nThere is a clickable hyperlink in the paper pointing to the site (Emotion Sub-Challenge):\", \"http\": \"//emotion-research.net/sigs/speech-sig/is13-compare\\n\\nThe Yahoo results are in the proceedings of Chapelle et al., but we also have a full MLJ paper on it (I added the reference).\\n\\n'The cost per iteration appears linear in the number of features, examples and dimensions, which is standard, but what about the number of iterations? My experience is that discrete Adaboost is quite slow on larger datasets compared to real-value prediction or gradient-based boosting.'\", \"all_number_of_iterations_are_reported_in_the_supplementary\": \"\", \"https\": \"//www.lri.fr/~kegl/research/revised1.pdf\\n\\nIt will appear on arxiv once I figure out the latex bug which blocks the compilation.\", \"accelerating_training_is_possible_using_tricks_borrowed_from_random_forests\": \"subsample the data points (stochastic boosting) or the features (called LazyBoost, usually used with large feature spaces, such as Haar filters on images or text classification). Sometimes these randomization steps even improve generalization; we are in the process of evaluating these effects on large-scale experiments.\\n\\n- 'Authors should better explain the weak learning condition.'\\n\\nI usually get the remark from boosting experts that these are known and trivial results and that I should not repeat them. I'm actually happy to comply with this request. I added a new subsection on the algorithmic convergence of AdaBoost.MH. 
It's slightly longish and tutorialish, but I give ample warning to the reader that it is not novel, and that it is not necessary to read it for understanding the novelty of the paper. \\n\\n- In section 3, there is a single alpha, so the subscript j in \\u201calpha_j\\u201d must be a typo. \\n\\nI deleted it (although note that the base classifier does return an alpha for each cut).\\n\\n- Section 3: choice of font for h_j should be consistent.\", \"it_is\": \"h_j is a simple classifier, and Gothic h_j denotes a node classifier (which is recursive). If I missed this logic somewhere, please point it out where exactly.\\n\\n- I do not see how one can get O(nKd log(N)) if the tree is balanced: I assume one still has to run TREEBASE, which calls BASE N times, and BASE is O(nKd)??? \\n\\nIt does but the number of points in each node is smaller than n. More precisely, each tree level partitions the data set, and since stump algorithm is linear, each _level_ costs O(nkd).\\n\\n- Section 2.2: the case against Adaboost.M1 used in a multiclass setting that would lead to an error rate of (K-1)/K sounds strange. I thought one would use K separate Adaboost.M1 in a 1-vs-other setting?\\n\\nThat would be a possibility, but it is not what Adaboost.M1 is doing: it uses single-label but multi-class trees with multi-class error < 50%.\", \"note_that_i_uploaded_the_new_version_temporarily_here\": \"\"}", "{\"title\": \"review of The return of AdaBoost.MH: multi-class Hamming trees\", \"review\": \"The paper describes an Adaboost algorithm, with tree based learners, adapted to the multi-class setting. The claim is that direct multi-class algorithms cannot be used to grow tree base learners in Adaboost. The author proposes a factorization of the classifier into a product of 3 terms, one performing a binary separation of the input space, a second one projecting this binary value onto the classes and the third being a confidence term. With this formulation, base tree learners can be grown using the binary separator corresponding tot the first term. Experiments are described on a series of UCI data sets and reach good results compared to a series of baselines.\\nThe idea of factorizing the multiclass classifier is interesting and seems to lead to good results \\u2013 even if the experiments have been performed on rather small size problems. The paper is relatively clear, but could be improved. In many places, there are shortcuts which will probably be hard to follow for non-specialists of multi-class Adaboost. Also there should be a description of the alternative multi-class Adaboost methods used in the experimental comparison, in order to appreciate the originality of this new proposition.\\nOverall, I think that this is a good paper. On the other hand, I am not sure that it is adapted to ICLR. It is not concerned with representation learning and would probably be better suited to a more general ML conference.\"}", "{\"reply\": \"'Also there should be a description of the alternative multi-class Adaboost methods used in the experimental comparison, in order to appreciate the originality of this new proposition.'\\n\\nFull description is out of the scope of this paper. 
The main difference is that they all use multi-class (but single-label) trees.\"}", "{\"title\": \"review of The return of AdaBoost.MH: multi-class Hamming trees\", \"review\": \"In the last 10 years, boosted trees have been considered as the best performing algorithm for large scale data with numerical features (though this may change with deep learning). However, why they perform so well has never been fully understood, especially as the boosting and the tree parts of the algorithm tend to be optimized separately. This is all the more true in the multi-class setting, where boosted tree implementations are full of ad-hoc patches.\\n\\nThis paper offers a model where the construction of the tree is part of the boosting algorithm in a full multi-class setting, and I assume this is novel. It matches the performance of SVMs on small tasks where they were traditionally superior to boosting, and the performance of the best previous multi-class booting implementation. This is a well written paper of high technical quality with extensive and rigorous experiments, and effective descriptions of the algorithm implementations.\", \"the_main_issue_i_found_is_how_this_algorithm_scales\": \"boosted trees are used for large scale problems, and addressing this issue is critical for significance. The authors only report large scale experiments in very vague terms, without any comparisons or training times (the authors do not report any numerical results and do not give reference. I do not know what to make for instance of the claim that it won the recent Interspeech challenge). The cost per iteration appears linear in the number of features, examples and dimensions, which is standard, but what about the number of iterations? My experience is that discrete Adaboost is quite slow on larger datasets compared to real-value prediction or gradient-based boosting.\\n\\nThis paper packs a lot of technical contributions in 9 pages, to the point that the authors had to take a few short cuts that impact clarity, but I would only recommend to cut comments that are not understandable by non-expert in boosting, or expand them (for instance all the references to the weak learning condition, which is never explained).\\n\\nI noticed there was not a single boosting paper at ICLR\\u201913, while this is probably still the most widely used class of machine learning algorithms. Through the greedy selection of weak classifier, boosting offers a way to learn representation that is different from deep learning, but highly practical for many problems.\\nThe same author has submitted another paper that targets specifically the choice of the features for boosting (correlation-base construction of neighborhood and edge features), however, I find this paper the most worthy to be accepted.\", \"detailed_comments\": [\"Authors should better explain the weak learning condition.\", \"In section 3, there is a single alpha, so the subscript j in \\u201calpha_j\\u201d must be a typo.\", \"Section 3: choice of font for h_j should be consistent.\", \"I do not see how one can get O(nKd log(N)) if the tree is balanced: I assume one still has to run TREEBASE, which calls BASE N times, and BASE is O(nKd)???\", \"Section 2.2: the case against Adaboost.M1 used in a multiclass setting that would lead to an error rate of (K-1)/K sounds strange. 
I thought one would use K separate Adaboost.M1 in a 1-vs-other setting?\"]}", "{\"reply\": \"'The main issue I found is how this algorithm scales: boosted trees are used for large scale problems, and addressing this issue is critical for significance. The authors only report large scale experiments in very vague terms, without any comparisons or training times (the authors do not report any numerical results and do not give reference. I do not know what to make for instance of the claim that it won the recent Interspeech challenge).'\\n\\nThere is a clickable hyperlink in the paper pointing to the site (Emotion Sub-Challenge):\", \"http\": \"//emotion-research.net/sigs/speech-sig/is13-compare\\n\\nThe Yahoo results are in the proceedings of Chapelle et al., but we also have a full MLJ paper on it (I added the reference).\\n\\n'The cost per iteration appears linear in the number of features, examples and dimensions, which is standard, but what about the number of iterations? My experience is that discrete Adaboost is quite slow on larger datasets compared to real-value prediction or gradient-based boosting.'\", \"all_number_of_iterations_are_reported_in_the_supplementary\": \"\", \"https\": \"//www.lri.fr/~kegl/research/revised1.pdf\\n\\nIt will appear on arxiv once I figure out the latex bug which blocks the compilation.\", \"accelerating_training_is_possible_using_tricks_borrowed_from_random_forests\": \"subsample the data points (stochastic boosting) or the features (called LazyBoost, usually used with large feature spaces, such as Haar filters on images or text classification). Sometimes these randomization steps even improve generalization; we are in the process of evaluating these effects on large-scale experiments.\\n\\n- 'Authors should better explain the weak learning condition.'\\n\\nI usually get the remark from boosting experts that these are known and trivial results and that I should not repeat them. I'm actually happy to comply with this request. I added a new subsection on the algorithmic convergence of AdaBoost.MH. It's slightly longish and tutorialish, but I give ample warning to the reader that it is not novel, and that it is not necessary to read it for understanding the novelty of the paper. \\n\\n- In section 3, there is a single alpha, so the subscript j in \\u201calpha_j\\u201d must be a typo. \\n\\nI deleted it (although note that the base classifier does return an alpha for each cut).\\n\\n- Section 3: choice of font for h_j should be consistent.\", \"it_is\": \"h_j is a simple classifier, and Gothic h_j denotes a node classifier (which is recursive). If I missed this logic somewhere, please point it out where exactly.\\n\\n- I do not see how one can get O(nKd log(N)) if the tree is balanced: I assume one still has to run TREEBASE, which calls BASE N times, and BASE is O(nKd)??? \\n\\nIt does but the number of points in each node is smaller than n. More precisely, each tree level partitions the data set, and since stump algorithm is linear, each _level_ costs O(nkd).\\n\\n- Section 2.2: the case against Adaboost.M1 used in a multiclass setting that would lead to an error rate of (K-1)/K sounds strange. I thought one would use K separate Adaboost.M1 in a 1-vs-other setting?\\n\\nThat would be a possibility, but it is not what Adaboost.M1 is doing: it uses single-label but multi-class trees with multi-class error < 50%.\", \"note_that_i_uploaded_the_new_version_temporarily_here\": \"\"}" ] }
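The complexity point made in the reply above (BASE is linear, each tree level partitions the data, so a balanced tree with N leaves costs O(nKd log N) rather than N times O(nKd)) can be sketched as follows; `base_learner` is a stand-in for the stump learner and is assumed, as in the discussion, to cost O(|points| K d) on the points it receives.

```python
def train_tree(points, depth_left, base_learner):
    """Recursive Hamming-tree construction (illustrative, not the paper's code).

    base_learner(points) is assumed to cost O(|points| * K * d) and to return a
    binary cut plus the induced left/right split of `points`. At any given
    depth the node point-sets are disjoint and cover the data, so each level
    costs O(n K d) in total; a balanced tree with N leaves has O(log N) levels,
    giving the O(n K d log N) figure discussed above.
    """
    cut, left_points, right_points = base_learner(points)
    node = {"cut": cut, "left": None, "right": None}
    if depth_left > 0 and left_points and right_points:
        node["left"] = train_tree(left_points, depth_left - 1, base_learner)
        node["right"] = train_tree(right_points, depth_left - 1, base_learner)
    return node
```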
kziQtP-nGqzDb
Learning Human Pose Estimation Features with Convolutional Networks
[ "Arjun Jain", "Jonathan Tompson", "Mykhaylo Andriluka", "Graham Taylor", "Christoph Bregler" ]
This paper introduces a new architecture for human pose estimation using a multi-layer convolutional network architecture and a modified learning technique that learns low-level features and higher-level weak spatial models. Unconstrained human pose estimation is one of the hardest problems in computer vision, and our new architecture and learning scheme show significant improvement over the current state-of-the-art results. The main contribution of this paper is showing, for the first time, that a specific variation of deep learning is able to outperform all existing traditional architectures on this task. The paper also discusses several lessons learned while researching alternatives, most notably, that it is possible to learn strong low-level feature detectors on features that might even just cover a few pixels in the image. Higher-level spatial models improve the overall result somewhat, but to a much lesser extent than expected. Many researchers previously argued that the kinematic structure and top-down information are crucial for this domain, but with our purely bottom-up and weak spatial model we could improve over other more complicated architectures that currently produce the best results. This mirrors what many other researchers, such as those in speech recognition, object recognition, and other domains, have experienced.
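One concrete ingredient of the pipeline, described in the authors' replies further down, is a two-stage local contrast normalization: a subtractive stage using a 9x9 Gaussian-weighted local mean, followed by a divisive stage using the local standard deviation with a small threshold. The sketch below is an approximate SciPy re-implementation under those stated assumptions, not the authors' Theano code.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_contrast_normalize(img, sigma=2.0, eps=1e-4):
    """Approximate two-stage LCN: subtract the local mean, divide by the local std.

    sigma ~ 2 roughly matches a 9x9 Gaussian window; eps prevents division by
    zero and over-amplification of noise in flat image regions.
    """
    img = img.astype(np.float64)
    local_mean = gaussian_filter(img, sigma)
    centered = img - local_mean
    local_var = gaussian_filter(centered ** 2, sigma)
    local_std = np.sqrt(local_var)
    return centered / np.maximum(local_std, eps)
```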
[ "convolutional networks", "new architecture", "features", "human pose estimation", "modified learning technique", "weak spatial models", "hardest problems" ]
submitted, no decision
https://openreview.net/pdf?id=kziQtP-nGqzDb
https://openreview.net/forum?id=kziQtP-nGqzDb
ICLR.cc/2014/conference
2014
{ "note_id": [ "GG90xWdBQuxkg", "-kMX-SbGYskJh", "qvnm_NL480_sI", "6auq8WBXjUEHY", "vHcovKyYMR3lw", "IBlASQ-hAuGJz", "ksrLsy6nSDtBq", "pzXqzYTC2nzdD", "YGNn9-ftL60CE" ], "note_type": [ "review", "review", "review", "comment", "review", "review", "review", "comment", "review" ], "note_created": [ 1392757800000, 1390243320000, 1391422440000, 1392757620000, 1391830620000, 1391483700000, 1390243320000, 1392757740000, 1390243320000 ], "note_signatures": [ [ "Arjun Jain" ], [ "沈杰" ], [ "anonymous reviewer 8a35" ], [ "Arjun Jain" ], [ "anonymous reviewer ae8c" ], [ "anonymous reviewer 41e4" ], [ "沈杰" ], [ "Arjun Jain" ], [ "沈杰" ] ], "structured_content_str": [ "{\"review\": \"Thank you for your detailed review of our paper. To address your concern regarding the impact of our spatial model on the final results, we would like to emphasise that the spatial model was designed as a post-processing step to reduce false positive detections for difficult joint locations. While the impact was not great on the shoulder and elbow joints, the spatial model does help improve wrist location accuracy, which is by far the most difficult task owing to large amounts of occlusion, deformation and variation in pose. For future work we would like to incorporate these spatial priors into the neural network and increase their complexity (by incorporating low level image features into the spatial prior), which we believe should improve the efficacy of the pose prior.\\n\\n\\u201cI find it surprising, that this makes much of a difference, because I would have thought that the peak response of the sliding window conv net be pretty much at the same location, with or without a lot of spatial pooling\\u201d\\n\\nIt is true that the peak response (or location of the heat map\\u2019s maximal lobe) should be approximately the same when using larger amounts of spatial pooling. However, we have found experimentally that large pooling factors produce an overly smooth response, which makes accurate detection of small image features difficult for the second stage of our detection pipeline (e.g. our spatial prior). For future work we would like to investigate the use of other pooling strategies, such as overlapped pooling and other recent techniques.\\n\\n\\u201cUsing sliding windows with a conv net sounds like it will be slow. Could you say something about the efficiency as compared to the other models?\\u201d\\n\\nWe have included a discussion about relative efficiency of our model in our latest paper revision. It should be noted that at test time we can speed up computation by running the first 3 convolutional layers over the entire image to reduce redundant computation of image features. Then only the fully connected stage need be run as a sliding window over the input image. 
This approach is actually very similar to the recent work by Sermanet et al., OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks (http://arxiv.org/abs/1312.6229), where they use 1x1 convolution kernels to describe the fully connected stages so that they can share the fully connected network weigths over the full detection output.\", \"we_also_added_timing_measurements_to_the_latest_version_of_the_paper_as\": \"\", \"theano_training_takes\": \"1.9ms per patch (for FPROP and BPROP) = 41min total\\nTheano testing (parallelized on cpu cluster) takes: 0.49sec per image (0.94x scale) = 2.8min total\\nNMS and spatial model takes very little time (not really worth profiling)\"}", "{\"review\": \"looks interesting!\"}", "{\"title\": \"review of Learning Human Pose Estimation Features with Convolutional Networks\", \"review\": \"My review refers to the most recent arXiv revision (#3) at the time I downloaded papers for review.\\n\\nSummary\\n\\nThis paper applies convolutional neural networks to the task of predicting upper-body keypoints (face, shoulder, elbow, wrist) in static RGB images. The approach trains one ConvNet per keypoint (all with the same architecture) for the task of deciding if the center pixel of an image window is the location of the target keypoint. A spatial model (a simple chain connecting face-shoulder-elbow-wrist) is estimated to provide a prior between locations of adjacent keypoints. At test-time, the ConvNets are run in a multi-scale, sliding-window fashion over the test image. The \\u201cunaries\\u201d from the ConvNet keypoint detectors are then filtered using the prior.\\n\\nMuch recent work on pose estimation in static RGB images has focused on combining HOG-based part detectors via a spatial model. These models are often enriched with local mixture models. Yang & Ramanan and Sapp & Taskar are popular recent examples. This is one of the first papers that uses ConvNets within this \\u201cparts and springs\\u201d paradigm. A paper similar in spirit was posted to arXiv slightly before this paper was submitted (\\u201cDeepPose\\u201d by Toshev and Szegedy http://arxiv.org/pdf/1312.4659v1.pdf). While too new to require a comparison, I list it here for completeness.\\n\\nNovelty and Quality\\n\\nWhile ConvNets have been used for pose estimation in previous work (as properly referenced in this paper), the current generation of ConvNets (following from Krizhevsky et al.\\u2019s work) have not been tried on the current generation of human pose datasets (e.g., FLIC). While the technique is not very novel, the proposal and investigation are good to see. The paper is well written. 
However, the experimental evaluation is confusing (unclear baseline methods, unclear if the subsets of images used are the same across methods) and computation employed for spatial modeling seems odd (more specific comments follow).\\n\\nPros\\n\\n+ It\\u2019s good to see an investigation of ConvNets into pose estimation on modern datasets like FLIC.\\n+ The paper is well written and easy to follow.\\n+ The proposed architecture is similar to existing ones based on HOG, but with HOG filters replaced with ConvNets, making for an interesting comparison.\\n\\nCons\\n\\n- I found details of the experimental comparison on FLIC lacking (I\\u2019ll be specific below).\\n- The abstract and intro lead one to believe that the results on FLIC are going to much, much better than prior work, yet they only look marginally better.\\n- The DPM baseline on FLIC doesn\\u2019t make sense (details below).\\n- The choices made in the spatial model needs to be explained more.\\n\\nDetails questions and comments\\n\\nSec. 2: Shakhnarovich et al. [37] do not use HOG [12] features. Note that [37] predates HOG [12] by a few years.\\n\\nSec. 3.1: Please be more specific about the form of LCN used.\", \"footnote_1\": \"\\u201cthe the\\u201d typo\", \"unnumbered_equation_bottom_of_page_5\": \"I might be confused by the notation and terse explanation, but I think this should be (p_u|i=0 * p_u). More generally, the computation needs to be explained/justified more. Given a chain like this, one would typically compute the marginal likelihood of a keypoint at each location using dynamic programming (same as sum-product on a tree/chain). Here, it seems that when computing the \\u201cmarginal\\u201d for the shoulder, the wrist is completely ignored. This seems very strange and ad hoc--given all of the literature on pictorial structure models, why implement this odd variant?\\n\\nExperimental setup / DPM comparison:\\n\\nSec. 4: \\u201cFollowing the methodology of Felzenszwalb et al. [16]...\\u201d Felzenszwalb et al. does not deal with pose estimation or propose a methodology for this dataset. There is some confusion here. \\n\\nIf only 351 images were used (instead of 1016) how did you compare with MODEC? Eyeballing the plots, they appear to be the same as in the MODEC CVPR 2013 paper, which from what I can tell used all 1016 test images. Is this an apples-to-apples comparison?\\n\\nIn Figure 6, how is the DPM baseline implemented? DPM [16] was not designed to do pose estimation, so how did you modify it to estimate pose in this work? What data was it trained on?\"}", "{\"reply\": \"Firstly, thank you for your thoughtful insights and detailed comments. At a high level, we have made the explanation about the experimental setup and the spatial prior model clearer following your advice. In particular, we have added a discussion of the choices we made regarding experiments using the FLIC dataset and elaborated on the fairness of our evaluation criteria.\", \"in_response_to_your_specific_concerns\": \"\\u201cShakhnarovich et al. [37] do not use HOG [12] features. Note that [37] predates HOG [12] by a few years. \\u201c\\n\\nSorry for this oversight. We have corrected this in the latest draft.\\n\\n\\u201cPlease be more specific about the form of LCN used.\\u201d\\n\\nThe LCN normalization was based on standard techniques described by Jarrett et al. (What is the best multi-stage architecture for object recognition?). 
Our LCN is a 2 layer module comprised of a local subtractive normalization followed by a local divisive normalization. The local subtractive normalization stage subtracts the local mean value (calculated by convolving the input with a 9x9 Gaussian kernel) from each input pixel. Likewise, the local divisive normalization divides each pixel by the standard deviation of the local 9x9 pixel window. A divisive threshold of 1e-4 was used to prevent over-emphasis of input noise and division by zero. We have added these details to the latest version on Arxiv.\\n\\n\\u201cUnnumbered equation bottom of page 5: I might be confused by the notation and terse explanation, but I think this should be (p_u|i=0 * p_u).\\u201d\\n\\nSorry for our overly terse explanation. We have added a more thorough discussion in the latest version.\\n\\nThe equation describes standard sum-product belief propagation, however perhaps our notation made this confusing. The biggest difference from standard literature is that we don't assume a Gaussian distribution for the pairwise terms, but have more flexible non-parametric representation based on the histograms of relative position occurrences in the training data. Furthermore, we formulate these terms as convolutional priors, which avoid having to learn a distribution for every pixel location. As indicated in figure 3, we do incorporate message terms from adjacent nodes in the graph (where the likelihood term from the shoulder to face nodes being a notable exception). For example, when calculating the marginal for the shoulder term, the wrist location is accounted for in the elbow message.\\n\\n\\u201cSec. 4: \\u201cFollowing the methodology of Felzenszwalb et al. [16]...\\u201d Felzenszwalb et al. does not deal with pose estimation or propose a methodology for this dataset. There is some confusion here. \\u201d\\n\\nSorry for the confusion. This was actually an typographic error and has been rectified in the latest draft. We actually follow the methodology of Sapp et al. [36], not Felzenszwalb et al as stated. In particular we used their error metric, evaluation code and their test-set. We deviate from their methodology in one important way: we use a 351 image subset of the test set. This subset contains the images that only contain a single person. The motivation for doing so follows the fact that our detector will give a positive detection for all persons in the image, while the ground-truth labels exist for only a single person in the image (chosen arbitrarily).\\n\\n\\u201cIf only 351 images were used (instead of 1016) how did you compare with MODEC? Eyeballing the plots, they appear to be the same as in the MODEC CVPR 2013 paper, which from what I can tell used all 1016 test images. Is this an apples-to-apples comparison?\\u201d\\n\\nWe use the same 351 image subset when evaluating all models, including MODEC, and we have made this clear in the latest paper revision. While the MODEC results appear similar to the CVPR 2013 paper when inspecting the plots, they are actually slightly different.\\n\\n\\u201cIn Figure 6, how is the DPM baseline implemented? DPM [16] was not designed to do pose estimation, so how did you modify it to estimate pose in this work? What data was it trained on?\\u201d\\n\\nAs you correctly pointed out, DPM is designed for detection and so we apply it to detect key-points in the skeleton (the same keypoints used to train our convnet based detector). 
Furthermore, DPM is trained on exactly same training data as our convnet (3987x2 images) and tested on the same test set; as such we believe that we have directly and fairly compared both keypoint-based detectors. The use of DPM in this manner is similar to the ICCV\\u201913 paper of Pishchulin et al. (Strong Appearance and Expressive Spatial Models for Human Pose Estimation), which uses DPM as a unary likelihood model for keypoint detection.\"}", "{\"title\": \"review of Learning Human Pose Estimation Features with Convolutional Networks\", \"review\": \"The paper proposes an architecture that takes as input an image and outputs the locations of human body parts (face, shoulder, elbow, wrist). The architecture consists of two parts, of which the second part, however, does not contribute much to classification accuracy (on the data on which the model was tested).\\n\\nThe first part is a sliding window detector using a binary-output convolutional network. The networks uses smaller pooling regions than what is conventional, in order to retain a high degree of spatial precision, and is otherwise not different from commonly used networks. I find it surprising, that this makes much of a difference, because I would have thought that the peak response of the sliding window conv net be pretty much at the same location, with or without a lot of spatial pooling (especially since you use non-maximum suppression). \\n\\nThe second part of the architecture is a graphical model (Markov chain) that represents a prior over relative spatial location of the different body parts. It is used to clean up the conv net detections. Unfortunately (but this is also an interesting finding) it does not help much. \\n\\nI find the first sentence in section 3.2 strange. Why do you care about false-positives? Or put another way, why don't you increase the detection threshold? It seems like you should really only care about the complete ROC curve. But then, as you show later, the prior you propose here doesn't help much to fix it. \\n\\nUsing sliding windows with a conv net sounds like it will be slow. Could you say something about the efficiency as compared to the other models? Sapp et al., for example, seem to show that MODEC is not only fairly accurate but also fast (well, as compared to DPM). Apologies, in case this is discussed somewhere and I overlooked it. \\n\\nIn Section 4.1 the references to the Figures are wrong. \\n\\nThis is yet another paper showing that conv nets work well in tasks previously dominated by more complicated vision architectures. The paper has some minor issues as pointed out, but overall I enjoyed reading it.\"}", "{\"title\": \"review of Learning Human Pose Estimation Features with Convolutional Networks\", \"review\": \"This paper examines a way to use convolutional neural networks to estimate human pose features. As many people in the community know, convolutional neural networks have had a major impact in the ImageNet object recognition evaluations. This paper looks at using them for a restricted setting of the challenging problem of human pose estimation. This is clearly an interesting direction to explore, and the details of how one does so are important - as noted by the authors.\\n\\nAt a high level it seems the main idea and contribution here involves the use of a simple chain structured model to capture spatial relationships between some key parts, namely: faces, shoulders, elbows and wrists. 
I think the presentation of the spatial model in Figure 3 could be a little cleaner and clearer. Basically it seems like the paper is coming up against the classic problem of how to combine local activity maps with some form of spatial prior or spatial model for how parts fit together. This type of issue has come up a lot in vision in many contexts and a number of approaches have been proposed on how to address the issue within a common theoretical framework, ex CRFs. Here it seems the approach has been to treat the output of the binary predictions for part locations from the CNN as a form of likelihood term that interacts with a prior that has been encoded through the discrete distributions of what seems to be essentially a linear chain Bayesian Network structure. The part that seems like it doesn\\u2019t quite match up is the fact that the prior is encoded within the conditionals of the discrete distributions of the Bayesian network while the likelihood is actually the result of the CNNs prediction for a set of binary decisions arising from the sliding window setup.\\n\\nIn general the paper presents some promising results, I think the theoretical framework could be cleaned up a little, but the ideas and results are going in a good direction.\"}", "{\"review\": \"looks interesting!\"}", "{\"reply\": \"Thank you for your comments and suggestions and we appreciate your recognition of the difficulty of the human body pose detection problem. Given the challenge of this problem we hope that the implementation details in our paper will help others apply convolutional networks to this problem domain.\\n\\n\\u201cI think the presentation of the spatial model in Figure 3 could be a little cleaner and clearer.\\u201d\\n\\nSince receiving the reviewers\\u2019 feedback, we have worked to address any confusion surrounding our implementation of the spatial prior and provide a more in-depth discussion of the theoretical framework. In particular, we believe that the current version of the paper now better links our implementation with that of standard sum-product belief propagation and explains the choices we made when formulating this prior. Please see our above response to reviewer \\u201cAnonymous 8a35\\u201d for further clarification.\\n\\n\\u201cThe part that seems like it doesn\\u2019t quite match up is the fact that the prior is encoded within the conditionals of the discrete distributions of the Bayesian network while the likelihood is actually the result of the CNNs prediction for a set of binary decisions arising from the sliding window setup.\\u201d\\n\\nRather than giving the prior or likelihood designation, we think the easiest way to interpret the chain-structured spatial model is that of the convnet providing the unary term and the training set statistics providing the pairwise term.\"}", "{\"review\": \"looks interesting!\"}" ] }
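To make the chain-structured spatial model discussed in this thread more concrete, the sketch below shows the kind of computation the authors describe: each joint's detector heat map acts as a unary term, and an empirical histogram of relative displacements between adjacent joints acts as a convolutional pairwise prior, so a message is just a 2-D convolution over the image grid. Only a single leaf-to-root pass is shown; a full sum-product would add the reverse pass. This is an illustrative reconstruction under those assumptions, not the authors' code.

```python
import numpy as np
from scipy.signal import fftconvolve

def pass_message(belief_child, displacement_prior):
    """Message from a child joint to its parent along the chain.

    belief_child: HxW response map for the child joint (e.g. the wrist).
    displacement_prior: 2-D histogram of child-relative-to-parent offsets,
    estimated from training annotations (up to a flip of the histogram,
    depending on the convolution/correlation convention). The result gives,
    for every candidate parent location, the support lent by the child.
    """
    return fftconvolve(belief_child, displacement_prior, mode="same")

def leaf_to_root_pass(unaries, priors):
    """Single leaf-to-root pass on the chain wrist-elbow-shoulder-face.

    unaries: list of HxW detector maps ordered from leaf to root;
    priors: priors[i] links part i to part i+1.
    Returns, for each part, its unary combined with all upstream messages
    (normalized); the root-to-leaf pass is omitted in this sketch.
    """
    belief = np.asarray(unaries[0], dtype=float)
    beliefs = [belief]
    for i, prior in enumerate(priors):
        msg = pass_message(belief, prior)
        belief = np.asarray(unaries[i + 1], dtype=float) * msg
        beliefs.append(belief)
    return [b / (b.sum() + 1e-12) for b in beliefs]
```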
kkgljR8O6hjHA
EXMOVES: Classifier-based Features for Scalable Action Recognition
[ "Du Tran", "Lorenzo Torresani" ]
This paper introduces EXMOVES, learned exemplar-based features for efficient recognition of actions in videos. The entries in our descriptor are produced by evaluating a set of movement classifiers over spatial-temporal volumes of the input sequence. Each movement classifier is a simple exemplar-SVM trained on low-level features, i.e., an SVM learned using a single annotated positive space-time volume and a large number of unannotated videos. Our representation offers two main advantages. First, since our mid-level features are learned from individual video exemplars, they require minimal amount of supervision. Second, we show that simple linear classification models trained on our global video descriptor yield action recognition accuracy approaching the state-of-the-art but at orders of magnitude lower cost, since at test-time no sliding window is necessary and linear models are efficient to train and test. This enables scalable action recognition, i.e., efficient classification of a large number of different actions even in large video databases. We show the generality of our approach by building our mid-level descriptors from two different low-level feature representations. The accuracy and efficiency of the approach are demonstrated on several large-scale action recognition benchmarks.
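A rough sketch of the mechanism the abstract relies on: if local features are quantized into C codewords, an integral video of per-location codeword counts lets the bag-of-features histogram of any space-time box, and therefore a linear exemplar-SVM score on that box, be computed with a handful of lookups instead of re-scanning the volume. Function names and the box convention are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np

def integral_video(counts):
    """counts: T x H x W x C array of local codeword counts (C = codebook size).

    Returns the zero-padded 3-D cumulative sum, so iv[t, y, x] holds the
    histogram of everything in counts[:t, :y, :x].
    """
    iv = counts.cumsum(axis=0).cumsum(axis=1).cumsum(axis=2)
    return np.pad(iv, ((1, 0), (1, 0), (1, 0), (0, 0)))

def box_histogram(iv, t0, t1, y0, y1, x0, x1):
    """Histogram of the half-open box [t0,t1) x [y0,y1) x [x0,x1) by inclusion-exclusion."""
    return (iv[t1, y1, x1] - iv[t0, y1, x1] - iv[t1, y0, x1] - iv[t1, y1, x0]
            + iv[t0, y0, x1] + iv[t0, y1, x0] + iv[t1, y0, x0] - iv[t0, y0, x0])

def exemplar_score(iv, w, b, box):
    """Linear exemplar-SVM response on one space-time sub-volume:
    a few lookups plus a dot product, no sliding-window re-scan."""
    t0, t1, y0, y1, x0, x1 = box
    return float(w @ box_histogram(iv, t0, t1, y0, y1, x0, x1) + b)
```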
[ "features", "exmoves", "scalable action recognition", "large number", "efficient recognition", "actions", "videos", "entries", "descriptor" ]
submitted, no decision
https://openreview.net/pdf?id=kkgljR8O6hjHA
https://openreview.net/forum?id=kkgljR8O6hjHA
ICLR.cc/2014/conference
2014
{ "note_id": [ "7ZJu769dwD72O", "bSWoSyUOoLbMr", "78tW7PNx028b_", "1uSbuOvDbOKVF", "BsgHBqmdHqB4I", "SSBu-4TLuQxWJ" ], "note_type": [ "review", "review", "review", "comment", "comment", "review" ], "note_created": [ 1391728680000, 1391902560000, 1391481300000, 1392682260000, 1392680940000, 1392682140000 ], "note_signatures": [ [ "anonymous reviewer 4c79" ], [ "anonymous reviewer 9716" ], [ "anonymous reviewer c3e9" ], [ "Lorenzo Torresani" ], [ "Lorenzo Torresani" ], [ "Lorenzo Torresani" ] ], "structured_content_str": [ "{\"title\": \"review of EXMOVES: Classifier-based Features for Scalable Action Recognition\", \"review\": \"The paper describes a mid-level representation for videos that can be faster than existing representations and yields similar performance. The idea is to train many SVMs to detect predefined action instances on sub-blocks from the video, and then to aggregate the SVM responses into a representation for the whole video.\\n\\nThis work seems like a fairly straightforward extension of previous similar work that was done on images (Malisiewicz et al.), but there are some technical differences like the use of an integral video trick to compute SVM responses fast, which seems nice.\\n\\nI don't really understand the mining of negative examples for training the exemplar SVMs. Why is it not possible to train the SVM, say with stochastic gradient descent, on all or many negative examples?\\n\\nThe method relies on extra, intermediate labels for training which are not used by bag-of-words. This makes it hard to judge the performance differences between the two and seems to be unfair towards bag-of-words. \\n\\n3.3: Rather than learning to scale svm confidences within a sigmoid, why not train a regularized logistic regression classifier in the first place, instead of the svm?\\n\\nThe feature extraction time is reported as the total on the whole UT-I dataset. It would be much better to report it in frames per second to make it comparable with other datasets without having to dig up the description of this dataset. If I am not mistaken it amounts to approximately 5 frames (with more or less standard resolution) per second? If correct, this means that despite the improvement over action bank, the bottleneck is really feature exctraction not classification. So the speed up due to the linear classifier will be swamped by the feature extraction and is not really that relevant, unless I'm missing something.\", \"pro\": [\"Well-written, uses some nice engineering tricks like the integral video for computing SVM responses.\"], \"neg\": [\"The paper seems like a slightly strange fit for this conference as it describes a vision system rather than investigating the learning of representations. That intermediate labels are useful on this data is well-known (and unsurprising). The paper does propose a faster way to use them, which is probably worthwhile.\"]}", "{\"title\": \"review of EXMOVES: Classifier-based Features for Scalable Action Recognition\", \"review\": \"This paper proposes a novel method for human activity recognition in video. The main properties it seeks are: 1) developing invariance to the various types of intra-class variation; 2) efficiency in training - both in terms of complexity and minimizing human involvement, 3) efficiency at test time (e.g. able to handle YouTube-scale collections). 
The main idea is to train, from typical low-level bag-of-visual-words features, a series of exemplar-SVMs which learn on a single positive example and many negative examples, and use the collective output of the SVMs at multiple scales and positions as a mid-level descriptor. The final stage of the method is a linear SVM classifier. The method performs comparably, or better than other recent methods on three modern datasets: HMDB51, Hollywood2, UCF50/101, moreover it is up to two orders of magnitude more efficient than the other methods.\\n\\nThe main contribution is a simple yet effective method that can perform activity recognition on large video databases. It would be easy to re-implement, though I hope the authors release a baseline implementation. As the idea of developing invariance to intra-class variability is a key consideration in the motivation of the work, I'd like to see more discussion on this. To me, the fact that the SVMs are trained on single exemplars is an interesting feature and certainly important when human labeling is expensive. However, this seems at odds with learning invariance. For example, if we only ever see one example of someone performing a tennis swing, will this not make it much more difficult to recognize the tennis swing under different conditions: clothing, body type, scale, etc.?\\n\\nPositives\\n* Paper is well written and the description of the method is clear. In particular, the presentation of Algorithm 1 and its associated description in the text is good.\\n* As stated earlier, it's a simple and effective method, and I see others using this as a baseline\\n* Datasets considered are modern and challenging\\n* Speedup over other methods is impressive: key to this is the use of the integral video, a nice idea\\n\\nNegatives\\n* Paper is very specific to an application and type of dataset, may not be as of wide of interest as some other papers\\n* Related to above, the representation extracted likely would not be useful to other AI problems (e.g. those with structured output)\\n* Method still requires first stage of engineered feature extraction pipeline\\n\\nOverall, while it may not have the broad appeal across ICLR, I think this is a good paper which clearly presents an effective method for activity recognition.\\n\\nQuestions & Comments\\n====================\\nEmphasize at the beginning that 'exemplars' do not necessary have to be 1 per activity type. This was the impression I got at the beginning of the paper, but the description of the experiments (e.g. 188 exemplars and 50 actions with multiple exemplars per action) made this clear. \\n\\nAs mentioned, there is still some human effort in finding the 'negative' examples. For example, one needs to watch a video to determine for sure that it does not contain any tennis. Perhaps you could use tags or some other means to propose negative videos that you are very confident don't contain a particular activity. In that case, how sensitive would the exemplar-SVM be to label noise (if the negative videos contained volumes semantically very similar to that of the exemplar)?\\n\\nThe final dimensionality of the EXMOVE descriptor is 41,172 which is going to benefit the linear SVM. Is the dimensionality of the other methods comparable? You may want to indicate this somewhere, say in Table 1.\\n\\nIt would be nice to see Table 1 contain more low-level feature/mid-level descriptor pairs (e.g. the outer product of low-level features and mid-level descriptors). 
\\n\\nWith respect to the comment 'we were unable to test [Action Bank] on the large-scale datasets of Hollywood-2 and UCF101. For Hollywood2, I can understand that it's the case. But would UCF101 not just be roughly 2x the time to train on UCF50? Can you clarify this? I realize it may have just not been a situation of not enough time before the deadline to run the experiment, so could this be reported in the final version of the paper?\"}", "{\"title\": \"review of EXMOVES: Classifier-based Features for Scalable Action Recognition\", \"review\": \"This paper explores a computationally efficient way to learn an intermediate representation for action recognition. The technique is somewhat inspired by the approach of \\u2018action bank\\u2019 but focuses on a much more computationally efficient way to achieve a similar effect. The technique takes advantages of \\u2018integral videos\\u2019 and has the flavor of a modern day Viola & Jones type of technique for recognizing activities in a way similar to the classic technique for face detection. I really like the fact that the authors have taken the issue of computational complexity seriously here. While some might argue that once the community has found techniques that work very well, one could focus on optimizing them for faster run-time performance. However, some choices imply complexity differences that would be very difficult to address in practical working systems and this paper starts by using a technique as a building block that is pretty close to practical right now.\\n\\nThe work here is also capable of learning intermediate representations with a particularly small amount of data, i.e. single exeamples of classes of interest and lots of negative examples, due to the use of Malisiewicz et al,\\u2019s exemplar-SVM technique. This could have advantages in a number of practical situations.\\n\\nI think this paper is both fairly well presented and executed in terms of the experiment work. Importantly, it addresses some key practical issues that deserve more attention. The results are not absolutely at the state of the art, but the value added by this work is quite good, especially for those working in the area of action recognition where data processing considerations are particularly challenging.\"}", "{\"reply\": \"We thank the reviewer for highlighting the two main contributions of our approach over prior work: computational efficiency and manual annotation parsimony.\"}", "{\"reply\": [\"We thank the reviewer for the thoughtful suggestions. We would like to comments on a few points of the review.\", \"We believe that it is quite apparent that our approach is not a straightforward extension of [Malisiewicz et al.,2011]. This prior work has not been applied to videos, just to object detection in still images. A naive application of [Malisiewicz et al.,2011] to videos is simply not feasible because of the prohibitive cost. In our paper we describe how to adapt it to work efficiently on videos so that it can scale to large datasets. This is not a trivial contribution, as partly acknowledged by the reviewer.\", \"The mining of hard negatives is a standard strategy in the learning of exemplar SVMs. Having said this, the reviewer makes an excellent suggestion in proposing the use of stochastic gradient descent on the entire negative set. This is definitely an interesting experiment for future work. 
However, our expectation is that results would be quite similar as those obtained with iterative hard negative mining since only examples violating the margin (i.e., the hard negatives) would contribute to refining the parameters in stochastic gradient descent.\", \"We agree with the reviewer that regularized logistic regression may be a sensible alternative to the two-step learning of the SVMs and the sigmoids. Again, we opted for the simple two-step solution as it has been proven to work effectively in several prior systems (e.g., [Malisiewicz et al., 2011; Deng et al., CVPR 2011, Bergamo and Torresani, CVPR 2012]).\", \"As suggested by the reviewer, we will make sure to report the feature extraction time also in frames per second in order to make this number more easily interpretable. We recognize that, despite the significant speedup enabled by our approach, feature extraction remains more costly than recognition. However, we note that there are many practical scenarios where a feature extraction time of 5 frames per second (as opposed to the 4 frames per *minute* of Action Bank) would enable application of action recognition in large-scale datasets. For example, consider the motivating application of interactive content-based video search where the user may query a system by providing an example sequence in order to find videos containing the same action in the database. In such scenario the search index (containing the features) can be built offline while the training of the action classifier and the recognition itself must be done at query time. Our system can be directly used in such scenarios, in principle even for YouTube-size datasets, while prior mid-level descriptors are simply too costly to be computed for large databases.\", \"We disagree with the final conclusion of the reviewer that this paper 'describes a vision system rather than investigating the learning of representations.' Our entire work centers around the learning of a novel intermediate representation for action recognition. While it is true that it shares similarities with prior high-level descriptors (which we discuss in the paper), it should also be acknowledged that our new representation model introduces significant advantages in terms of computational cost and recognition accuracy over the most closely related prior system.\"]}", "{\"review\": \"We thank the reviewer for the comments and for the useful suggestions to improve the paper. In the final version of our article we will address these issues as follows:\\n- We will clarify further that having more than one example per basis-activity yields improved accuracy, as shown in the experiments.\\n- We will stress that tags and similar metadata information can be exploited to select the negative videos. We already hint to it in the first paragraph of section 3.2.\\n- We will specify the dimensionality of all descriptors in our experiments. In any case these are indeed comparable to the dimensionality of our feature vector: EXMOVES contain 41,172 features, Action Bank has 44,895 entries, while BOWs based on Dense Trajectories have dimensionality equal to 25,000.\\n\\nAs for the large-scale experiments involving Action Bank, for UCF50 we used the accuracy number reported by the authors in their paper. We have estimated that extracting Action Bank features for UCF101 (part 2) would take 132 days by using 10 nodes of our cluster exclusively for this computation. For this reason we are unable to carry out such evaluation. 
Extraction of Action Bank features from Hollywood-2 would take even longer.\\n\\nAs mentioned in the conclusions, we will release the software implementing our features and all the data used in our experiments.\"}" ] }
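The EXMOVES reviews and responses above hinge on the integral-video trick that makes exemplar-SVM responses cheap to evaluate over many space-time sub-volumes. The sketch below is only a minimal illustration of that idea, not the authors' implementation: it assumes per-voxel linear scores are already available (the array name voxel_scores and the helper names are hypothetical), builds a 3D summed-area table, and reads off any sub-volume's summed response with eight lookups.

import numpy as np

def integral_video(scores):
    # 3D summed-area table: cumulative sums along time, height, width,
    # padded with a leading zero plane on each axis so box queries need no branching.
    iv = scores.cumsum(axis=0).cumsum(axis=1).cumsum(axis=2)
    return np.pad(iv, ((1, 0), (1, 0), (1, 0)))

def box_sum(iv, t0, t1, y0, y1, x0, x1):
    # Sum of scores over the half-open sub-volume [t0,t1) x [y0,y1) x [x0,x1),
    # via 3D inclusion-exclusion on the padded integral video.
    return (iv[t1, y1, x1] - iv[t0, y1, x1] - iv[t1, y0, x1] - iv[t1, y1, x0]
            + iv[t0, y0, x1] + iv[t0, y1, x0] + iv[t1, y0, x0] - iv[t0, y0, x0])

# Toy usage: per-voxel contributions of one exemplar SVM; the response for any
# space-time block then costs a constant number of lookups.
rng = np.random.default_rng(0)
voxel_scores = rng.standard_normal((30, 24, 32))   # (frames, height, width)
iv = integral_video(voxel_scores)
resp = box_sum(iv, 5, 15, 4, 20, 8, 28)
assert np.isclose(resp, voxel_scores[5:15, 4:20, 8:28].sum())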
M7uvMK0-IZCh9
Image Representation Learning Using Graph Regularized Auto-Encoders
[ "Yiyi Liao", "Yue Wang", "Yong Liu" ]
We consider the problem of image representation for the tasks of unsupervised learning and semi-supervised learning. In those learning tasks, the raw image vectors may not adequately represent their intrinsic structures due to their highly dense feature space. To overcome this problem, the raw image vectors should be mapped to a proper representation space which can capture the latent structure of the original data and represent the data explicitly for further learning tasks such as clustering. Inspired by recent research on deep neural networks and representation learning, in this paper, we introduce the multiple-layer auto-encoder into image representation. We also apply the locally invariant idea to our image representation with auto-encoders and propose a novel method, called Graph regularized Auto-Encoder (GAE). GAE can provide a compact representation which uncovers the hidden semantics and simultaneously respects the intrinsic geometric structure. Extensive experiments on image clustering show encouraging results of the proposed algorithm in comparison to the state-of-the-art algorithms on real-word cases.
[ "image representation", "graph", "problem", "tasks", "raw image vectors", "gae", "unsupervised learning", "learning", "learning tasks", "enough representation" ]
submitted, no decision
https://openreview.net/pdf?id=M7uvMK0-IZCh9
https://openreview.net/forum?id=M7uvMK0-IZCh9
ICLR.cc/2014/conference
2014
{ "note_id": [ "llrd5e-aKxZyi", "kCK_izo04OivH", "732i7lUnhD9_E", "sPZis6owNIs3D", "01eFZdqG9Y0Vo", "1UCG1i6IY41IQ" ], "note_type": [ "review", "review", "review", "review", "comment", "comment" ], "note_created": [ 1390610400000, 1391729520000, 1392819900000, 1391832900000, 1392819780000, 1392819240000 ], "note_signatures": [ [ "anonymous reviewer 41b4" ], [ "anonymous reviewer 0ee0" ], [ "Yolanda Liao" ], [ "anonymous reviewer 6561" ], [ "Yolanda Liao" ], [ "Yolanda Liao" ] ], "structured_content_str": [ "{\"title\": \"review of Image Representation Learning Using Graph Regularized Auto-Encoders\", \"review\": \"Summary of contributions:\\n\\tProposes to regularize auto encoders so that the encoded dataset has a similar nearest neighbor graph structure to the raw pixels. This method is advocated specifically for images.\", \"novelty\": \"moderate (note: the authors seem to believe they are introducing the use of multi-layer auto encoders on images, but they are not)\", \"quality_of_results\": \"low - moderate\", \"quality_of_presentation\": \"low\", \"pros\": \"Demonstrates improvements in clustering and semi-supervised learning performance\", \"cons\": \"Presentation is confusing and in many cases factually incorrect\\n\\tPresentation lacks motivation and reasoning about why the method should work\\n\\tQuantitative results are on small and obscure datasets, and improvements are relative to baselines of unclear value\", \"detailed_comments\": \"\", \"abstract\": \"The abstract is ramble and not very focused. This is a conference on representation learning, we don't need you to explain representation learning and deep nets in the abstract. Focus on how you've changed the auto encoder.\\n\\t\\n\\t'we introduce the multiple- layer auto-encoder into image representation': definitely not true, you even cite papers from 5 years ago that use multi-layer auto encoders on images.\\n\\n\\tThe abstract should say something about what your new method actually is / does and why you think it is a good idea. I can't tell from the abstract what your method is except that you've changed auto-encoders in some way.\\n\\n\\t'Extensive experiments on image clustering show encouraging results of the proposed algorithm in comparison to the state-of-the-art algorithms on real-word cases.'\\n\\tBe up-front about what your results are. What does encouraging mean? \\n\\nIntroduction\\n\\tPar 1\\n\\tf(H)? H hasn't been introduced yet. Do you mean f(X)?\\n\\tWhat do you mean by well approximate X? If H is meant to be similar to X, what's the point of switching to it? Do you mean it should preserve info about X?\\n\\n\\tPar 2\\n\\tYou cite a purely supervised method (Krivhevsky et al's ImageNet classifier) and then say 'Those methods normally need to use the auto-encoders to pre-train\\u2026' Not true. \\n\\t'It has been generally accepted as the consensus that the pre-trained network does provide a better representation for the original data.' Definitely not true! See for example Charle's Cadieu's work presented at ICLR last year.\\n\\n\\tPar 3\", \"typo\": \"Locally Linear Embedding is LLE, not LEE\\n\\t\\n\\tPar 4\\n\\tWhat does 'weighted connected' mean?\\n\\tThe wording of this paragraph is not especially clear, but I take it to mean you want f(x_1) to be near f(x_2) if x_2 is a nearest neighbor of x_1. Why do you think this is a desirable property? It's well known that Euclidean distances in images are not very meaningful. 
That's the whole reason we want to use representation learning on them.\\n\\nSection 3 Graph Regularized Auto-Encoder\\n\\tPar 1\\n\\tThis paragraph seems incredibly dismissive of the body of work that develops our understanding of auto encoders as learning manifolds, and how these manifolds relate to classification problems. I would say previous work such as the manifold tangent classifier definitely explores ideas related to the 'geometrical and discriminating structure of the data'\\n\\n\\tPar 2\\n\\tThis paragraph consists of nothing but the letter 'f'\\n\\nSection 3.1\", \"equations_4_and_5\": \"'sigmoid' should not be in all caps, that makes it looks like the product between variables s, i, g, etc. (This comment applies throughout the paper, not just these equations)\", \"equation_7\": \"when you say V is 'the weight matrix' do you mean it is a weighted adjacency matrix describing which examples should be near each other? Usually in auto encoder literature people use 'the weight matrix' to refer to W_H or W_Q. If V is indeed this adjacency matrix you should describe how it is computed and what the weights mean. Even just putting in a forward reference to section 3.3 can help the reader be less confused.\\n\\nSection 3.2\\n\\n\\tYou really do not need to spend so much space describing the extremely well-known concept of greedy layer wise pre training\\n\\nSection 3.3\\n\\tOK, so V is the graph encoding matrix.\\n\\n\\t3.3.1: Could you please explain the motivation for each of these? i.e., what effect you are hoping their use will have on the learning algorithm?\\n\\nSection 4\\n\\tWhat is ORL? You should cite the publication that introduced it. Is this ORL? http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html It looks like the name has changed.\", \"you_might_want_to_get_the_datasets_you_work_on_added_here\": \"http://rodrigob.github.io/are_we_there_yet/build/\\n\\tThis will make it easier for reviewers to understand your work in context.\\n\\n\\tIn general I do not find these experiments very compelling because they are mostly done on small and obscure datasets. It's also not clear to me which of the baselines you ran yourself and which if any are taken from the literature. Baselines that you ran yourself are less compelling because you may not have the same familiarity with pre-existing methods as the inventors of those methods, and you certainly have less incentive to make them perform well.\\n\\n\\tAt a minimum, I'd like to see some more explanation of why the baselines you're improving upon are impressive. What would be better is to demonstrate good results on datasets that are used more frequently by people in the deep learning community. You are introducing a new kind of auto encoder so you should compare it to pre-existing auto encoders on datasets where auto encoders are frequently used, such as MNIST or Cover Type.\\n\\nSection 4.1\\n\\tI notice you specify the hyper parameters for GAE and GNMF are optimized by grid search, but you have no mention of this for the SAE. Did you also optimize the SAE hyper parameters by grid search?\\n\\n\\tYou say that the GAE has 2 coefficients to be set by grid search, k and lambda, but it seems like there must be a whole lot of other values to set, such as the dimensionality of H. What did you do about these?\\n\\nSection 4.2\", \"footnote_3\": \"why is a single sample per class 'meaningless'? I agree it's really hard to do well in this case, but why is 1 sample totally worthless and 2 acceptable? 
If you've added more labels, isn't your work no longer comparable to previous work on the same data set?\"}", "{\"title\": \"review of Image Representation Learning Using Graph Regularized Auto-Encoders\", \"review\": \"A new auto-encoder method is proposed. Features are learned by minimizing the sum of the square reconstruction error and a penalty on feature dissimilarity weighted according to a variety of criteria. The basic idea is to weight more strongly the features that are in the neighborhood of the training sample or based on the class labels.\\nA multi-layer version of the autoencoder is proposed by following the layer-wise training protocol.\\nThe method is tested using several metric on a few small datasets.\\n\\nStrengths\\nThe problem is relevant and interesting.\\nThe method is technically sound.\\n\\nWeaknesses\\nThe paper lacks clarity. It does not read well and the language is often vague (what does it mean \\u201crepresent well\\u201d or \\u201cpositive impact\\u201d or \\u201churt numerical optimization\\u201d?).\\nThe paper is also rather incremental. The idea is similar to [6] but also related to methods for dimensionality reduction like deep parametric t-sne (see R. Min, L.J.P. van der Maaten, Z. Yuan, A. Bonner, and Z. Zhang. Deep Supervised t-Distributed Embedding. In Proceedings of the International Conference on Machine Learning (ICML), pages 791-798, 2010 ) and methods like H. Mobahi, R. Collobert, J. Weston. Deep Learning from Temporal Coherence in Video. ICML 2009.\\nIn this light, it would be valuable to add a discussion of the advantages of this method versus [6], for instance. The major difference is that the reconstruction error replaces the \\u201cpull apart\\u201d term in the loss function. What\\u2019s the advantage of having an explicit decoder? Doesn\\u2019t it introduce even more parameters in the model?\\nFinally, the empirical validation could be more convincing if the authors used larger datasets where many other authors already benchmarked (e.g., cifar, mnist, timit, svhn, to mention a few).\"}", "{\"review\": \"Thanks for the comments. We have revised the paper and here are some responses below.\\n-------------------------------------------\\n\\u201cI found the whole paper hard work to follow.\\u201d \\nWe have revised the paper in many places and submit a new version.\\n-------------------------------------------\\n\\u201cResults tables don't give column headings. It wasn't clear what the variable was - the dimension? \\u201d\\nIt would be clearer to break out the 'error measure' into a separate column. \\nEach column is the class we used in the clustering or semi-supervised tasks. It is also equivalent to the dimension of hidden layer or representation. The \\u2018error measure\\u2019 is given in the first column, each row indicates a method in a error measure with respect to different number of classes we use.\\n-------------------------------------------\\n\\u201cThe two metrics used weren't very clear to me \\u2013 \\nthe mapping from unsupervised k-means clusters to class labels \\u201d\\nIt follows the evaluation method in the GNMF paper which is on the dimension reduction too.\", \"http\": \"//www.cad.zju.edu.cn/home/dengcai/Publication/Journal/TPAMI-GNMF.pdf\\n-------------------------------------------\\n\\u201cHow does it compare to a simple 1-NN hinge-loss technique similar to those used by Weston e.g. for training joint embeddings? http://www.thespermwhale.com/jaseweston/papers/wsabie-ijcai.pdf\\u201d\\nWe have read the paper. 
The technique in this paper gives an idea to preserve the local property. However, it takes much time to implement this algorithm. We will try to use it in our work in the future. In this paper, we mainly want to show whether the local invariant can be applied in deep architecture.\\n-------------------------------------------\\n\\u201cWhat parameters eta, lambda $k$ (for the graph construction) were chosen? How did this $k$ affect the results? \\u201d\\nThe parameters are chosen using a grid based search on the validation set. We give the performance with the best parameter configuration.\\n-------------------------------------------\\n\\u201cHow expensive is the proposed technique compared to the alternative?\\u201d\\nThe time complexity is similar to sparse auto-encoders, as it is a change of constraint. The main problem is space complexity of graph matrix. However, we use the sparse matrix to save the information. Since the graph connection is sparse, there are many zero entries.\"}", "{\"title\": \"review of Image Representation Learning Using Graph Regularized Auto-Encoders\", \"review\": \"An algorithm is presented to generate a representation for unsupervised and partially labelled data with graph-based regularization to keep nearest-neighbors close.\\nEnglish is not clear in some places. \\n\\nI found the whole paper hard work to follow. \\n\\nResults tables don't give column headings. It wasn't clear what the variable was - the dimension? \\nIt would be clearer to break out the 'error measure' into a separate column.\\n\\nThe two metrics used weren't very clear to me -\\nthe mapping from unsupervised k-means clusters to class labels\\n\\nHow does it compare to a simple 1-NN hinge-loss technique similar to those used by Weston e.g. for training joint embeddings?\", \"http\": \"//www.thespermwhale.com/jaseweston/papers/wsabie-ijcai.pdf\\n\\nWhat parameters eta, lambda $k$ (for the graph construction) were chosen? How did this $k$ affect the results?\\n\\nHow expensive is the proposed technique compared to the alternative?\"}", "{\"reply\": \"Thanks for the comments. We have some responses below, and uploaded a new version of paper to clarify some problems. In our new version, we mainly revised Section 1 and Section 2, and we add some experiments at Section 4.1. The experiments we add may give some insights about the comparison to [6] and the meaning of the reconstruction error.\\n-------------------------------------------\\n\\u201c\\u2026related to methods for dimensionality reduction like deep parametric t-sne\\u2026. Deep Supervised t-Distributed Embedding\\u201d\\nFor the similar works \\u2018Deep Learning from Temporal Coherence in Video\\u2019 and \\u2018Deep Supervised t-Distributed Embedding\\u2019 mentioned. The inspiration of pairwise constraint is different. The pairwise constraints in these papers are derived from supervised or human knowledge information while ours, derived from a manifold property, which is more slight and available in an unsupervised work. \\n-------------------------------------------\\n\\u201cIn this light, it would be valuable to add a discussion of the advantages of this method versus [6], for instance. The major difference is that the reconstruction error replaces the \\u201cpull apart\\u201d term in the loss function. What\\u2019s the advantage of having an explicit decoder? 
Doesn\\u2019t it introduce even more parameters in the model?\\u201d\\nCompared with [6], the convolutional neural network may be a reason, making it can be trained with a single graph loss function. In our case of fully connected neural network, we can\\u2019t minimize the graph regularizer directly, we need layer-wise pre-training. Furthermore, in our experiment, pre-training with only graph regularization cannot work. It may give some insights on the reconstruction error term.\\nBesides, in supervised learning, one can fine-tune the deep architecture with supervised weight matrix since they have training labels. However, in unsupervised learning, samples with different labels may be connected in unsupervised weight matrix. If we fine-tune the deep architecture with unsupervised graph regularization, clustering result might be worse since the wrong information would be fitted better. So we only use pre-training since the reconstruction error and the graph regularization can interact on each other, and we can find the balance through grid based search so that the local invariants are kept and the effect of wrong information is small.\\n-------------------------------------------\\n\\u201cFinally, the empirical validation could be more convincing if the authors used larger datasets\\u201d\\nThe datasets we choose are more frequently used for clustering, which is an evaluation method of dimension reduction techniques. We would like to take experiments on large dataset for supervised learning in future work.\"}", "{\"reply\": \"Thanks for the comments. We have some responses below, and uploaded a new version of paper to clarify some problems. In our new version, we mainly revised Section 1 and Section 2, and we add some experiments at Section 4.1.\\n-------------------------------------------\", \"abstract\": \"\\u201cThe abstract is ramble and not very focused. \\u2026\\u201d\\nWe have revised our abstract.\\n-------------------------------------------\\n\\u201c\\u2018we introduce the multiple- layer auto-encoder into image representation\\u2019: definitely not true\\u201d\\nWe have deleted this claim in our new version, we want to express that we introduce the combination of graph regularizer and deep architecture for image representation.\\n-------------------------------------------\\n\\u201cWhat does encouraging mean?\\u201d\\nCompare to sparse regularization auto-encoder and GNMF, GAE implements the similar or better clustering results. We try to combine the expressive power of deep architecture and the idea of local Euclidean preservation to implement a non-linear dimension reduction algorithm, or at least an option to the existing deep techniques on unsupervised or semi-supervised tasks.\\n-------------------------------------------\\nIntroduction\\nPar 1\\n\\u201cf(H)? H hasn't been introduced yet. Do you mean f(X)? What do you mean by well approximate X?\\u201d\\nIt means f(X), and we mean H should preserve information about X. We have revised in our new version.\\n-------------------------------------------\\nPar 2\\n\\u201cYou cite a purely supervised method\\u2026\\u201d\\nWe have removed this claim in our new version.\\n-------------------------------------------\\nPar 3\\n\\u201cTypo: Locally Linear Embedding is LLE, not LEE\\u201d\\nWe have revised in our new version.\\n-------------------------------------------\\nPar 4\\n\\u201cWhat does \\u2018weighted connected\\u2019 mean? 
The wording of this paragraph is not especially clear\\u2026\\u201d\\nThis paragraph has been removed in our new version. \\n-------------------------------------------\\nWe rewrite the Par2 and Par4 at the perspective of dimension reduction, and clarify the motivation of our method. And Par3 has been moved to section 2 as related works.\\n-------------------------------------------\\nSection 3 Graph Regularized Auto-Encoder\\nPar 1\\n\\u201cThis paragraph seems incredibly dismissive of the body of work that develops our understanding of auto encoders as learning manifolds, and how these manifolds relate to classification problems. I would say previous work such as the manifold tangent classifier definitely explores ideas related to the \\u2018geometrical and discriminating structure of the data\\u2019\\u201d\\nWe have removed this claim in our new version.\\n-------------------------------------------\\nPar 2\\n\\u201cThis paragraph consists of nothing but the letter 'f'\\u201d\\nThe letter has been removed in our new version.\\n-------------------------------------------\\nSection 3.1\\n\\u201cEquations 4 and 5: 'sigmoid' should not be in all caps\\u2026\\u201d\\nWe have revised all \\u2018sigmoid\\u2019 we used throughout the paper as \\u2018S(x)\\u2019.\\n-------------------------------------------\\n\\u201cEquation 7: when you say V is 'the weight matrix' do you mean\\u2026\\u201d\\nWe have put in a forward reference to section 3.3 in our new version.\\n-------------------------------------------\\nSection 3.2\\n\\u201cYou really do not need to spend so much space describing\\u2026\\u201d\\nWe want to make the definition clear and show that graph regularizer is applied to the training process at each layer.\\n-------------------------------------------\\nSection 4\\n\\u201cWhat is ORL? You should cite\\u2026\\u201d\\nWe have cited the publication in the new version.\\n-------------------------------------------\\n\\u201cIn general I do not find these experiments very compelling because they are mostly done on small and obscure datasets. It's also not clear to me which of the baselines you ran yourself and which if any are taken from the literature. Baselines that you ran yourself are less compelling because you may not have the same familiarity with pre-existing methods as the inventors of those methods, and you certainly have less incentive to make them perform well.\\u201d\\nThe datasets we choose are more frequently used for clustering, which is an evaluation method of dimension reduction techniques. We would like to take experiments on large dataset for supervised learning in future work.\\nFor the baselines, we download the code of GNMF from the author\\u2019s website, and the CNMF is implemented by ourselves which achieves similar performance as the results on their papers. \\n-------------------------------------------\\n\\u201cAt a minimum, I'd like to see some more explanation of why the baselines you're improving upon are impressive. What would be better is to demonstrate good results on datasets that are used more frequently by people in the deep learning community. You are introducing a new kind of auto encoder so you should compare it to pre-existing auto encoders on datasets where auto encoders are frequently used, such as MNIST or Cover Type.\\u201d\\nWe compare our method to sparse auto-encoder, the result tells that the graph regularization can capture the manifold of the input data. 
We think the similar or better results compared to SAE show the effectiveness of graph regularization, as intended. Besides, the comparison to GNMF and CNMF proves that expressive power is important when we want to capture the manifold structure of the data set. We choose a deep network because its nonlinearity can achieve better performance than the many existing linear methods. Finally, from the perspective of dimension reduction, clustering is often used for evaluation, so we run clustering experiments to evaluate our method (as in Deng Cai et al.\u2019s Graph regularized nonnegative matrix factorization for data representation).\n-------------------------------------------\nSection 4.1\n\u201cDid you also optimize the SAE hyper parameters by grid search?\u201d\nYes, we have specified this point in our new version.\n-------------------------------------------\n\u201cYou say that the GAE has 2 coefficients to be set by grid search, k and lambda, but it seems like there must be a whole lot of other values to set, such as the dimensionality of H. What did you do about these?\u201d\nAs for all the methods we compare, the dimensionality of H is set to the number of classes in the input data set.\n-------------------------------------------\nSection 4.2\n\u201cFootnote 3: why is a single sample per class \u2018meaningless\u2019?... If you've added more labels, isn't your work no longer comparable to previous work on the same data set?\u201d\nThe weight between two samples that share a label is set to 1; with a single sample per class, no sample shares a label with any other, so the constraint is meaningless. We implement the experiment following the work on CNMF. They use 10% or 20% labeled samples in their experiments. More labels won\u2019t influence our performance.\"}" ] }
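The exchange above repeatedly refers to the GAE objective: a layer-wise auto-encoder reconstruction error plus a graph regularizer that pulls together the hidden codes of samples connected in the weight matrix V. The snippet below is only a sketch of that loss under assumed notation (sigmoid activations, squared reconstruction error, and a toy symmetric 0/1 adjacency matrix standing in for the kNN graph); the variable names are illustrative and not taken from the paper.

import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gae_layer_loss(X, W_enc, b_enc, W_dec, b_dec, V, lam):
    # Reconstruction error plus lam * sum_ij V_ij ||h_i - h_j||^2 for one layer.
    H = sigmoid(X @ W_enc + b_enc)        # hidden codes, one row per sample
    X_hat = sigmoid(H @ W_dec + b_dec)    # reconstruction of the input
    recon = np.sum((X - X_hat) ** 2)
    # Laplacian identity: sum_ij V_ij ||h_i - h_j||^2 = 2 * Tr(H^T (D - V) H), D = diag(V 1)
    D = np.diag(V.sum(axis=1))
    graph = 2.0 * np.trace(H.T @ (D - V) @ H)
    return recon + lam * graph

# Toy usage with a random symmetric 0/1 adjacency matrix in place of the kNN graph.
rng = np.random.default_rng(0)
n, d, h_dim = 8, 5, 3
X = rng.random((n, d))
V = np.zeros((n, n))
V[rng.integers(0, n, 6), rng.integers(0, n, 6)] = 1.0
V = np.maximum(V, V.T)                    # symmetrize
loss = gae_layer_loss(X, rng.standard_normal((d, h_dim)), np.zeros(h_dim),
                      rng.standard_normal((h_dim, d)), np.zeros(d), V, lam=0.1)
print(loss)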
nny0nGJmvYs2b
Zero-Shot Learning and Clustering for Semantic Utterance Classification
[ "Yann N. Dauphin", "Gokhan Tur", "Dilek Hakkani-Tur", "Larry Heck" ]
We propose two novel zero-shot learning methods for semantic utterance classification (SUC) using deep learning. Both approaches rely on learning deep semantic embeddings from a large amount of Query Click Log data obtained from a search engine. Traditional semantic utterance classification systems require large amounts of labelled data, whereas our proposed methods make use of the structure of the task to allow classification without labeled data. We also develop a zero-shot semantic clustering algorithm for extracting discriminative features for supervised semantic utterance classification systems. We demonstrate the effectiveness of the zero-shot semantic learning algorithm on the SUC dataset collected by \cite{DCN}. Furthermore, we show that extracting features using zero-shot semantic clustering for a linear SVM reaches a state-of-the-art result on that dataset.
[ "semantic utterance classification", "learning", "data", "novel", "learning methods", "suc", "deep learning", "approaches", "deep semantic embeddings", "large amount" ]
submitted, no decision
https://openreview.net/pdf?id=nny0nGJmvYs2b
https://openreview.net/forum?id=nny0nGJmvYs2b
ICLR.cc/2014/conference
2014
{ "note_id": [ "2PRbPFT6fcz9-", "Jvo1fmyuaV-Y3", "bjdXjLV2FZjye", "tV-7Q180huV5c", "bpRoWiBVJ8WUG", "LG-cGAgNIiZnJ", "Y5rm5PFJ-tls1", "6w8d6uJUDW63r" ], "note_type": [ "review", "review", "comment", "comment", "review", "comment", "comment", "review" ], "note_created": [ 1389226500000, 1391630100000, 1392802980000, 1392799020000, 1392531240000, 1392800640000, 1392802800000, 1392016320000 ], "note_signatures": [ [ "David Krueger" ], [ "anonymous reviewer 9b82" ], [ "Yann Dauphin" ], [ "Yann Dauphin" ], [ "anonymous reviewer e900" ], [ "Yann Dauphin" ], [ "Yann Dauphin" ], [ "anonymous reviewer 8b78" ] ], "structured_content_str": [ "{\"review\": \"I thought this was a very intersting and well written paper.\", \"some_comments\": \"I'm ashamed to admit that I didn't what the two methods mentioned were until I read the outro... I thought maybe there would be a fundamentally different approach to both learning AND clustering, or something. Just to be very clear, I would add this to the 1st sentence of the abstract: '...using deep learning: zero-shot learning and zero-shot clustering'. I'd also change the 2nd to last sentence of the intro to 'zero-shot learning and zero-shot clustering'. \\n\\nWhile I agree with your choice to distinguish your approach from 'Traditional SUC systems' that 'rely on a large set of labelled examples', I think you should emphasize your use of Query Click Logs more. I would describe them as analagous with labels in their role in your system (although maybe there is a significant disanalogy I'm overlooking). This would make things clearer, I think. \\n\\nWith this in mind (actually, in any case, if I understood this part properly), I think Section 5, paragraph 3, 2nd sentence should say '...using only [...] X, [...] C [...], and Q = query click logs' or something to that effect. \\n\\nAlso, in Table 1, I believe the first three other ZSL methods you are comparing with do NOT make use of QCLs, and I would emphasize that as well. I would suggest putting a horizontal dividing line between those three and the next two methods.\", \"some_little_edits\": \"6. first paragraph 'However, it is not clear how of a proxy' -> 'However, it is not clear how GOOD of a proxy' (I think that's the word you want?)\", \"next_paragraph\": \"I would rephrase 'We would like these embeddings to cluster...' as 'We would like to cluster these embeddings...' I think this makes it more clear, especially in the context of the next sentence.\", \"after_the_equation_display\": \"'The entropy tell us about the uncertainty we have over the class' -> 'The entropy tell us about the uncertainty we have over the CLASSES' (I think this is what you mean.)\\n\\nlast paragraph, you say that P(C | X) is predicted by a DNN, but isn't it predicted by equation (2)? This uses P(H | X), which IS predicted by a DNN. A minimal edit I think would be acceptable is changing 'predicted by' to 'predicted using'. But you could also be more explicit.\", \"experiments_section_7\": \"I think you should specify how the ZSL (bag of words) model works; what is it's prediction function?\\n\\nIn that same paragraph, I think the description of the Representative URL Heuristic could be clearer. I would make these edits:\\n\\n'We associate each domain with...' -> 'FOR THIS HEURISTIC, we associate each CLASS with...' 
(I think you mean class, not domain, isn't a domain just a website name?)\", \"next_sentence\": \"'...associated with the website for that semantic class' -> '...associated with that class's website'\\n\\nNext paragraph, 2nd sentence: 'The task is identify...' -> 'The task is TO identify...'\"}", "{\"title\": \"review of Zero-Shot Learning and Clustering for Semantic Utterance Classification\", \"review\": \"The paper considers the task of categorizing queries into classes revealing user intent (e.g., weather vs. flight booking). In doing this, the authors exploit query log data (specifically, queries paired with URLs clicked after sending the query to a search engine), however they use no or little supervised data. Instead, they learn query representations predictive of URLs, and use proximity between a query representation and a category name representation to make the classification decision (word(s) in the category name are also treated as a query). Additionally they consider a regularizer which favors low entropy predictions on unlabelled queries (with a distance-based probability model). Finally, they also use their representation in supervised learning.\\n\\nI find the ideas and their implementation interesting and the experiments seem fairly convincing to me as well (however, see (2)).\", \"comments\": \"1) The type of regularisation used is similar to entropy minimization proposed in Grandvalet and Bengio (2004), and can also be regarded as a form of posterior regularization (Ganchev et al., 2010) or generalized expectation criteria (Druck et al., 2008). I think the authors should briefly discuss the relation.\\n2) The dataset used for testing query classification does not seem to be very standard and actually consists of utterances by users of a spoken dialog system rather than search queries. I am wondering why a more standard dataset is not selected (e.g., KDDCUP).\\n3) Not being an expert in query classification, I would appreciate some discussion of related approaches, as I assume this is not the first method which considers a semi-supervised approach to this task. \\n4) Section 5: 'the best class name would have a meaning P(H | C_r) that is the mean of the meaning of all its utterances E_{X_r | C_r}{ P(H | X_r)} '. I am not sure in what sense it would be the best. If in terms of classification accuracy then this is not entirely correct -- e.g., a single datapoint can move the mean to an arbitrary position and dramatically affect the error rate. I understand what the authors are trying to say here but I think it is a bit vague.\\n5) The authors limit themselves to using only 1,000 most frequent URLs when learning the representations, perhaps because they use the sofmax error function. They might consider using 'standard' techniques which avoid summation over the categories (i.e. URLs) in training: some form of binarization (e.g., Huffman trees) or a ranking error function with sub-sampled negative examples. \\n\\nThe paper would also benefit from some polishing, a couple of points:\\n-- Section 6 (page 5, 1st par): 'However, it is not clear how of a proxy that is for a given task.' ?? 
\\n-- caption Fig 3: sentence 'ZSL achieves \\u2026' does not seem grammatical\\n\\nOverall, I have a feeling that the authors tend to over-generalize -- it is a nice paper about query classification, and this over-generalization makes it less readable (e.g, see section 6 where 'proxy functions' are discussed).\", \"pro\": \"-- a clever application of distributed representation learning for an important task\\n-- the approach may have applications in other domains (e.g., in opinion summarization -- learning to categorize online review sentences according to product features)\", \"con\": \"-- the dataset may not be entirely appropriate \\n-- discussion of related work does not seem quite adequate \\n-- writing (minor)\"}", "{\"reply\": \"Thanks for your comments. You can see an updated version of the paper at http://arxiv.org/pdf/1401.0509.pdf.\\n\\n1. 'The main proposed method is to learn from query click logs a representation of words into an embedding space. The paper has one huge problem which is its lack of comparison to the most obvious baselines which are other word embedding models.'\\n\\nThe main proposed method is a zero-shot learning framework for utterance classification, not the embeddings. What's more, the learned embeddings are not word-level embeddings, but sentence-level embeddings. We've clarified this in the paper.\\n\\n2. 'Given that the model is very similar to NNLMs like those of Bengio, Collobert and Weston, Huang et al., Mikolov et al., etc it would seem crucial to compare to these existing methods to know whether this model improves over existing literature.'\\n\\nThe embeddings cannot be compared directly to these approaches because they are word-level embeddings. Our zero-shot learning framework requires sentence-level embeddings.\\n\\n3. 'It also seems like, if the click log data includes words like restaurant that are similar to the SUC class names, the task becomes essentially supervised. Even if not, the tasks are related and I would call this approach more of a distant supervision type approach and not zero shot learning since users clicking on semantically important URLs is some type of supervision (albeit one impossible to obtain for anybody except the big search engines). '\\n\\nWe clarified in the paper the link between the embeddings and the zero-shot framework. The embeddings are learned using a supervised task that involves the click logs. That task and its labels are different from the SUC task. The framework we propose is zero-shot because we can obtain a classifier for any SUC task without labelled data for that task using the trained embeddings.\\n\\n4. 'Frome et al. and Socher et al. both had a NIPS paper last year'\\n\\nWe will include these works in Section 7.\\n\\n5. 'The terms/ideas of zero-shot clustering seem confusing. Clustering is, by its usual definition, always unsupervised and hence 'zero-shot'. In fact, the point of zero shot learning is that one can do a usually supervised task but without supervision of the classes that are to be predicted. When I assign an element to a cluster in k-means, am I doing zero-shot clustering in the author's view?'\\n\\nWe've renamed this method 'zero-shot discriminative embedding' for clarity. The paper also better explains the motivation behind the method (see the new Figure 3). By clustering, we had meant that the classes are clustered, not just clustering in the density estimation sense. 
The goal of the method is to make sure that the embeddings are discriminative for the SUC task, without labelled data of the SUC task. That is a novel algorithm.\\n\\n6. 'The improvement to a simple existing kernel based method in table 3 is tiny (0.2%) improvement.'\\n\\nWe use a linear SVM, for which the state-of-the art feature set gives 6.36%. Our method reduce the error of the linear SVM to 5.75%. The main goal of this experiments is to compare the feature sets which is why we use the linear SVM instead of the kernel SVM.\\n\\n7. '- it would be better if citations mentioned the names of the authors so the reader familiar with the field doesnt have to go back and forth between the text and references to know what paper is being cited. '\\n\\nThanks, our next update will implement this.\"}", "{\"reply\": \"Thanks for your helpful comments. We have updated to the paper to address them (see https://arxiv.org/submit/914938/view). The update clarifies many sections and adds new experiments.\", \"more_detailed_answers_follow\": \"1) 'The type of regularisation used is similar to entropy minimization'\\n\\nWe have added a discussion of this work in Section 7. The key difference with our approach is that entropy minimization is a semi-supervised method. Our approach addresses the problem where no task-specific labels are available.\\n\\n2) 'The dataset used for testing query classification does not seem to be very standard and actually consists of utterances by users of a spoken dialog system rather than search queries.'\\n\\nWe chose this dataset because we are interested in solving the problem of classification for natural language queries. This type of problem arises when a user interact with a system through free-form speech for example.\\n\\n4) 'Section 5: 'the best class name would have a meaning P(H | C_r)'\\n\\nWe have corrected that statement. A good category name is one that is close to utterances of the class.\\n\\n5) 'The authors limit themselves to using only 1,000 most frequent URLs'\\n\\nThis is an improvement that we have considered. It is not clear if that will be very helpful because the URLs follow a Pareto distribution.\\n\\n6) 'The paper would also benefit from some polishing'.\\n\\nWe have carefully edited the paper in the new version.\\n\\n7) 'I have a feeling that the authors tend to over-generalize'.\\n\\nWe have re-written this section to be more specific and clear. In particular, we illustrate the idea with a concrete example (see Figure 3).\"}", "{\"title\": \"review of Zero-Shot Learning and Clustering for Semantic Utterance Classification\", \"review\": \"This paper introduces a method to classify search queries into a set of classes in an utterance frame classification task.\\n\\nThe main proposed method is to learn from query click logs a representation of words into an embedding space.\\n\\nThe paper has one huge problem which is its lack of comparison to the most obvious baselines which are other word embedding models.\\n\\nFor example, the table 2 of the embedding nearest neighbors looks very similar to what neural network language models would learn. 
In fact, the paper mentions that the proposed model is also very similar to such neural network language models (NNLM).\\n\\nGiven that the model is very similar to NNLMs like those of Bengio, Collobert and Weston, Huang et al., Mikolov et al., etc it would seem crucial to compare to these existing methods to know whether this model improves over existing literature.\\n\\nUnlike NNLMs which are truly unsupervised and can be trained on easily accessible abundant large text corpora like wikipedia, the paper instead proposes to use a proprietary dataset that is not and will not be available to anybody to learn essentially the same type of embeddings.\\n\\nIt also seems like, if the click log data includes words like restaurant that are similar to the SUC class names, the task becomes essentially supervised. Even if not, the tasks are related and I would call this approach more of a distant supervision type approach and not zero shot learning since users clicking on semantically important URLs is some type of supervision (albeit one impossible to obtain for anybody except the big search engines).\\n\\nFrome et al. and Socher et al. both had a NIPS paper last year that also used deep methods for zero shot learning, using embeddings that were learned in an unsupervised way, mention or comparison to these projects would be reasonable.\\n\\nThe terms/ideas of zero-shot clustering seem confusing. Clustering is, by its usual definition, always unsupervised and hence 'zero-shot'. In fact, the point of zero shot learning is that one can do a usually supervised task but without supervision of the classes that are to be predicted.\\nWhen I assign an element to a cluster in k-means, am I doing zero-shot clustering in the author's view?\\n\\nThe improvement to a simple existing kernel based method in table 3 is tiny (0.2%) improvement.\", \"minor_comments\": [\"'models [15] who learn' -> models which learn\", \"Table 1 is in an odd place that is way too early since it's referenced only a few pages later.\", \"it would be better if citations mentioned the names of the authors so the reader familiar with the field doesnt have to go back and forth between the text and references to know what paper is being cited.\", \"citation 25 just mentions ICML and has no title or authors\", \"fig.3 is not readable in a black and white printout\", \"'to the the 1000 most popular'\"], \"conclusion\": \"The merit of the paper is highly questionable without a comparison to similar models that learned word embeddings with just raw text. Several word embeddings are available for download and I encourage the authors to pick one and update their paper on arxiv with an added comparison.\"}", "{\"reply\": \"Thanks for your comments. We've significantly updated the paper to give it more polish and to make the explanations less idiosyncratic (see https://arxiv.org/submit/914938/view). We've also added more experiments. One of which shows a performance boost by using the ZSC embeddings in the zero-shot setting.\", \"detailed_answers_follow\": \"1. 'I had to get back to the ' Zero-Shot Learning with Semantic Output Codes' paper to understand the idea in its full generality'\\n\\nThe paper now contains an overview of zero-shot learning (Section 3). \\n\\n2. 'In particular, it would be useful to formalize the 'intuition' behind equation (2), which corresponds to the 'knowledge base'. '\\n\\nWe've also added Figure 1 to give an intuition behind Equation 2.\\n\\n3. 
'It is the most interesting: it proposes an excellent, and to my knowledge, novel idea, that I understood as soon as I saw equation (3). However, the 3 paragraphs of explanation that precede are so confusing that I nearly suspected deliberate obfuscation of a good idea.'\\n\\nWe've rewritten this section to make it more streamlined and clear. In particular, we've added a visualization that illustrates what the method does.\\n\\n4. 'Experiments '\\n\\nWe have clarified our experimental setup in the paper. The raw features are bag-of-words. We used SVMs because they better show the difference between the feature extractors. DNNs could be used also, but we initially wanted to focus on better features extraction. However, we will experiment with more powerful classifiers.\"}", "{\"reply\": \"Thanks, your comments were helpful and we've taken them into account in the updated paper.\"}", "{\"title\": \"review of Zero-Shot Learning and Clustering for Semantic Utterance Classification\", \"review\": \"This paper offers 2 contributions: one confirms that zero-shot learning has some practical use for semantic classification. The second one, about zero-shot clustering is much more original, but unfortunately less mature.\\nWhen using deep learning for sentence or document-level classification, it has been observed that discriminantly tuning the word embedding significantly improved performance.\\nThis paper does such discriminant tuning *without* labeled data, by assuming that the classifier has been obtained through zero-shot learning. They call the method 'zero-shot clustering', and I find it very neat and original.\\n\\nUnfortunately, this paper seems to have been hastily written . Explanations are thought-provoking but very idiosyncratic, thus very hard to follow. Experiments are limited and poorly explained, especially about zero-shot clustering.\", \"section_5\": \"Zero-shot learning is introduced in section 5, but I had to get back to the ' Zero-Shot Learning with Semantic Output Codes' paper to understand the idea in its full generality. In particular, it would be useful to formalize the 'intuition' behind equation (2), which corresponds to the 'knowledge base'.\", \"section_6\": \"\", \"it__is_the_most_interesting\": \"it proposes an excellent, and to my knowledge, novel idea, that I understood as soon as I saw equation (3). However, the 3 paragraphs of explanation that precede are so confusing that I nearly suspected deliberate obfuscation of a good idea. The discussion starts by assuming we want to build density model of the data P(X) like in auto-encoder, and then show this is a bad idea: why bother? What is proposed here has nothing to do with density estimation. This proxy framework is completely cryptic to me: proxy of what? P(C|X)?\\n\\nWhat has this to do with the choice of the entropy (excellent by the way)?\", \"the_link_seems_to_be_in_the_sentence\": \"\\u201cThe better the proxy function hat{f} the better this measure (H(f(X)) - H( hat{f}(X))^2<= K*(f(X) - hat{f}(X))^2 by Lipschitz continuity).\\u201d\\nWhat this sentence tells us is that we should get P(C|X) as close as possible to the true posterior to get its entropy close to the true entropy? 
But we are doing the opposite here: minimizing the estimated entropy, which does not even have to be close to the true entropy.\", \"section_7\": \"experiments\", \"the_part_about_how_zero_shot_clustering_improves_svm_classification_is_very_frustrating_to_read\": [\"results are so promising, but very few details are shared (table 3).\", \"What are the raw features? N-grams?\", \"Why only SVMs are tried on the DNN and ZSC embeddings? It would make sense to try DNNs or DCNs.\", \"An interesting further experiment would be if discriminant fine tuning of the embedding further improves performance over ZSC. In this case, ZSC training would be comparable to semi-supervised training, with a mixture of labelled and unlabeled examples.\"]}" ] }
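Much of the discussion above concerns how the zero-shot classifier (Equation 2) and the entropy criterion (Equation 3) are actually computed. The sketch below is one possible reading, not the paper's exact formulation: it assumes utterances and class names share an embedding space, turns negative Euclidean distances into a softmax posterior, and measures the average prediction entropy that an entropy-style criterion would drive down. The embeddings and helper names here are made up for illustration.

import numpy as np

def class_posterior(query_emb, class_embs, temperature=1.0):
    # P(C|X) from proximity between one query embedding and the class-name embeddings.
    dists = np.linalg.norm(class_embs - query_emb, axis=1)
    logits = -dists / temperature
    logits -= logits.max()                 # numerical stability
    p = np.exp(logits)
    return p / p.sum()

def mean_entropy(query_embs, class_embs):
    # Average prediction entropy over unlabeled queries; lower means more confident
    # class assignments, the quantity an entropy-minimizing criterion targets.
    ents = []
    for q in query_embs:
        p = class_posterior(q, class_embs)
        ents.append(-np.sum(p * np.log(p + 1e-12)))
    return float(np.mean(ents))

# Toy usage: 3 class-name embeddings and 5 noisy unlabeled query embeddings.
rng = np.random.default_rng(0)
class_embs = rng.standard_normal((3, 16))
query_embs = class_embs[rng.integers(0, 3, 5)] + 0.1 * rng.standard_normal((5, 16))
print(class_posterior(query_embs[0], class_embs))
print(mean_entropy(query_embs, class_embs))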
tPCrkaLa9Y5ld
One-Shot Adaptation of Supervised Deep Convolutional Models
[ "Trevor Darrell", "Eric Tzeng", "Yangqing Jia", "Judy Hoffman", "Kate Saenko", "Jeff Donahue" ]
Dataset bias remains a significant barrier towards solving real world computer vision tasks. Though deep convolutional networks have proven to be a competitive approach for image classification, a question remains: have these models solved the dataset bias problem? In general, training or fine-tuning a state-of-the-art deep model on a new domain requires a significant amount of data, which for many applications is simply not available. Transfer of models directly to new domains without adaptation has historically led to poor recognition performance. In this paper, we pose the following question: is a single image dataset, much larger than previously explored for adaptation, comprehensive enough to learn general deep models that may be effectively applied to new image domains? In other words, are deep CNNs trained on large amounts of labeled data as susceptible to dataset bias as previous methods have been shown to be? We show that a generic supervised deep CNN model trained on a large dataset reduces, but does not remove, dataset bias. Furthermore, we propose several methods for adaptation with deep models that are able to operate with little (one example per category) or no labeled domain specific data. Our experiments show that adaptation of deep models on benchmark visual domain adaptation datasets can provide a significant performance boost.
[ "adaptation", "bias", "models", "data", "deep models", "significant barrier towards", "deep convolutional networks", "competitive" ]
submitted, no decision
https://openreview.net/pdf?id=tPCrkaLa9Y5ld
https://openreview.net/forum?id=tPCrkaLa9Y5ld
ICLR.cc/2014/conference
2014
{ "note_id": [ "tsoQWNXLQBtRy", "ttR_C7vVGBtuo", "m5C13ZcMDI3uB", "7E9uK23zu67Xx", "fQYvfJZIv7Qy1", "6dHgn74-FPnyG", "1g5c1HoHMpg8s", "xk0agabF96kdB", "9HKVWnz-B_9kL", "ShZ5-f7a5Gplu", "2-nekWqWeIsc0" ], "note_type": [ "review", "review", "review", "review", "comment", "comment", "review", "review", "comment", "review", "review" ], "note_created": [ 1391755740000, 1391856960000, 1391755740000, 1391755740000, 1392782940000, 1392782880000, 1391624760000, 1392782820000, 1392782940000, 1391755740000, 1391755740000 ], "note_signatures": [ [ "anonymous reviewer ab7e" ], [ "anonymous reviewer 6be7" ], [ "anonymous reviewer ab7e" ], [ "anonymous reviewer ab7e" ], [ "Judy Hoffman" ], [ "Judy Hoffman" ], [ "anonymous reviewer 93c2" ], [ "Judy Hoffman" ], [ "Judy Hoffman" ], [ "anonymous reviewer ab7e" ], [ "anonymous reviewer ab7e" ] ], "structured_content_str": [ "{\"title\": \"review of One-Shot Adaptation of Supervised Deep Convolutional Models\", \"review\": \"The submission tackles the problem of one-shot classifier adaptation between biased datasets that contain overlapping object categories. The authors design a number of experiments to evaluate known transfer/adaptation approaches on deep convnet features taken from the last 3 layers of the Krizhevsky network. The basic approach is that the convnet is trained on LSVRC 1000-category data, then 16 categories are chosen that overlap with categories in 2 other datasets (amazon and webcam). Feature representations are taken from one of the layers of the network using amazon or imagenet data and webcam data, and adaptive classifiers are tested using the amazon or imagenet source and a single webcam image as target.\\n\\nThere is little that is innovative in the submission, since it uses only published or trivial approaches for the convnet and the domain adaptation and the empirical results are not broad enough. Moreover, the work does not contribute to our understanding of learned representations, so I see little relevance for ICLR. \\n\\nThe paper is well-written and offers a number of intuitive explanations for the results, although some of the conclusions don't seem well-justified given the limited evidence from only one target domain. The authors identify an interesting next step of doing learning in the convnet layers using feedback through the adaptation classifier, which could be worthwhile.\"}", "{\"title\": \"review of One-Shot Adaptation of Supervised Deep Convolutional Models\", \"review\": \"This paper studies dataset bias problem. It investigates whether using large source datasets can eliminate dataset bias. The paper shows that such datasets reduce but still do not remove completely dataset bias. It also shows that deep learning features are useful in helping domain adaptation.\\n\\nThis is a largely empirical study of the important issues. The observation made the paper is important and thought-inspiring and worth reporting.\\n\\nIt might be interesting to report all layers (instead of just DeCAF6 and DeCAF7)'s performance on adaptation --- is it always the case that high-level layers are better at adaptation? \\n\\nGFK was used in the paper as an unsupervised domain adaptation method. However, it can be used easily as a semi-supervised or supervised method. For example, once GFK is learnt on unlabeled data, one can learn a classifier by revealing the labels of the target data. 
The benefit is a better metric is used to measure distances.\"}", "{\"title\": \"review of One-Shot Adaptation of Supervised Deep Convolutional Models\", \"review\": \"The submission tackles the problem of one-shot classifier adaptation between biased datasets that contain overlapping object categories. The authors design a number of experiments to evaluate known transfer/adaptation approaches on deep convnet features taken from the last 3 layers of the Krizhevsky network. The basic approach is that the convnet is trained on LSVRC 1000-category data, then 16 categories are chosen that overlap with categories in 2 other datasets (amazon and webcam). Feature representations are taken from one of the layers of the network using amazon or imagenet data and webcam data, and adaptive classifiers are tested using the amazon or imagenet source and a single webcam image as target.\\n\\nThere is little that is innovative in the submission, since it uses only published or trivial approaches for the convnet and the domain adaptation and the empirical results are not broad enough. Moreover, the work does not contribute to our understanding of learned representations, so I see little relevance for ICLR. \\n\\nThe paper is well-written and offers a number of intuitive explanations for the results, although some of the conclusions don't seem well-justified given the limited evidence from only one target domain. The authors identify an interesting next step of doing learning in the convnet layers using feedback through the adaptation classifier, which could be worthwhile.\"}", "{\"title\": \"review of One-Shot Adaptation of Supervised Deep Convolutional Models\", \"review\": \"The submission tackles the problem of one-shot classifier adaptation between biased datasets that contain overlapping object categories. The authors design a number of experiments to evaluate known transfer/adaptation approaches on deep convnet features taken from the last 3 layers of the Krizhevsky network. The basic approach is that the convnet is trained on LSVRC 1000-category data, then 16 categories are chosen that overlap with categories in 2 other datasets (amazon and webcam). Feature representations are taken from one of the layers of the network using amazon or imagenet data and webcam data, and adaptive classifiers are tested using the amazon or imagenet source and a single webcam image as target.\\n\\nThere is little that is innovative in the submission, since it uses only published or trivial approaches for the convnet and the domain adaptation and the empirical results are not broad enough. Moreover, the work does not contribute to our understanding of learned representations, so I see little relevance for ICLR. \\n\\nThe paper is well-written and offers a number of intuitive explanations for the results, although some of the conclusions don't seem well-justified given the limited evidence from only one target domain. The authors identify an interesting next step of doing learning in the convnet layers using feedback through the adaptation classifier, which could be worthwhile.\"}", "{\"reply\": \"Thank you for your comments and concerns. Please see our comment below which addresses the contributions of our paper.\"}", "{\"reply\": \"Thank you for your detailed and helpful comments. 
We offer clarifications below and have also made the relevant edits to our paper which should be available on arXiv within a day.\", \"hyper_parameters\": \"Tuning hyper-parameters such as the C in the svm or Daume methods is very tricky when you have so little labeled target data. In particular, for the C-SVM, we did not have any unbiased way to tune the parameter and so left it set as C=1. Since all the methods we report use an SVM classifier that requires setting this C hyperparameter we feel that the relative comparisons between methods is sound even if the absolute numbers could be improved with a new setting for C. For linear interpolation we intended the numbers presented to indicate an optimal performance potential for the approach. We recognize and appreciate that in practice a user would have no way of knowing the optimal parameter choice. Therefore, we provided Figure 1a to provide a deeper understanding of this hyperparameter. Namely, 1) combining the two classifiers does no worse than the minimum of the two approaches and 2) setting the alpha parameter to favor the stronger classifier for the test setting produces the best performance. To avoid confusion we will change the numbers presented in the Tables to represent the average accuracy summed over the alpha parameter. [clarified in section 4.2]\", \"computation_time_comparison\": \"We agree that SVM training time is negligible compared to CNN training time. However, we are interested in comparing the computation time needed for each adaptation method. CNN training is not part of the computation time of the algorithm we are comparing as we treat it as a pretrained feature representation.\", \"test_set_size\": \"We have clarified this in a new version of the paper. The webcam domain has between 15-30 examples per category. For each random train/test split we choose one example for training and 10 other examples for testing (so there is a balanced test set across categories). Therefore, each test split has 160 examples and this procedure is repeated 20 times. [clarified in Section 4.2]\", \"feature_selection\": \"We focused our study of feature selection on the top 3 layers of the network. The lower levels offer worse overall performance and have much higher feature dimensions. In our experiments we observe that adaptation algorithms are beneficial at all of the top 3 layers, with layers 7 and 8 offering the best overall performance. Our intuition is that to achieve the best performance, one should use the highest (most abstract) layer, which is still able to capture the domain specific variations. When using ImageNet the highest layer (layer 8) has this capacity within it\\u2019s 1000 dimensional vector because the entire network was explicitly trained on ImageNet data. We find that for the Amazon domain, layers 7 and 8 perform comparably (with 7 a bit higher). Since layer 8 is 1000way classifier activations it is likely starting to overfit to the ImageNet data and so layer 7 may need to be chosen when transferring from a non-ImageNet source domain. However, we will need to perform experiments with more source domains before this hypothesis can be proven.\", \"minor_remarks\": \"We have fixed the typos mentioned as well as clarifying that the unsupervised domain adaptation methods operate in a transductive setting and so use the test data for subspace learning. We also modified the first sentence of Section 3 to more precisely state the goals of the paper. 
The framework we introduce is the idea of adding a general adaptation layer that takes as input the activations from of the existing layers from both the source and target domains and outputs a classifier scores as activations. We show that this is a general framework by implementing the approach with a wide variety of adaptation algorithms.\"}", "{\"title\": \"review of One-Shot Adaptation of Supervised Deep Convolutional Models\", \"review\": \"This paper presents an extensive empirical evaluation of various methods and frameworks for domain adaptation, that is, when training (i.e. source) and test (i.e. target) data are expected to be sampled from different - but related - distributions. The study is carried out for the image labeling problem, with the CNN of Krizhevsky et al. as base classifier for the source domain. The main goal is to explore various strategies for applying this network trained for 1,000 categories on ImageNet to other images of the same categories but taken in different conditions (basically taken from a webcam here). The main ideas is to use features from the CNN architecture (different layers are considered) and feed them to various domain adaptation techniques, supervised or not. For the supervised case, the setting is drastic and allows only a single labeled example per category from the target domain.\\n\\n----\\n\\nThe paper tackles an important issue and is sound; the numerous experiments appear to be reliable. It is clearly written despite some typos. But it also raises some questions.\\n\\nIn the beginning of Section 3, it is written 'We propose a generic framework to selectively adapt the parameters of a deep network', which is rather ambitious. The experimental study is extensive and covers many methods but I'm unsure that this actually provides a generic framework for deep learning because (1) this only covers CNNs and (2) it is more the application of existing methods than the definition of a new scheme. The main conclusion of the paper, which is that transferring from ImageNet is more efficient than from Amazon, is fine but does not lead to a framework definition. \\n\\nA very interesting point of the paper is to provide some elements about which features from a CNN should be fed to adaptation methods (which layer basically). I think this is a key problem and I regret that the paper does not elaborate too much on this point. There is some discussion but it seems that the quality of adaption given features from a certain layer can also indicate the abstraction level of the features learned by the CNN. Perhaps, one could elaborate on this.\\n\\nWhen labeled data is unavailable, which is the case here for the target domain, training is complicated (and that's what's addressed in the paper with at most 1 example per category) but model selection is also tricky. For instance, setting the C for the SVMs of the Daume III method or the alpha of the linear interpolation can be complex. This crucial problem is not really tackled in the paper since hyperparameters values are either set through unjustified heuristics or left somewhat undecided. The alpha of the linear interpolation seems to be chosen while looking at the curves of Figure 1 (a), that is, by looking at the evaluation set.. Nothing is said on how the C used for the SVMs of Daume III is chosen and its value is not given. 
This is problematic since these are the 2 best performing methods.\\n\\nIt is said a couple times that adaption methods based on SVMs (such as Daume III) might suffer from long training duration because of the large number of training examples. I disagree. I suppose that the SVM is using a linear kernel (it should be stated). If this is the case, then it has been shown many times that linear SVMs can be very efficient in training time and memory usage. I suspect the SVM training time to be somewhat negligible compared to that of the CNN.\\n\\nThe test set is rather small, only 160 images, which are split into 20 random splits of 8 images. I wonder if averaging test results obtained on such small sets, which hence never contains all 16 categories, makes sense. I would rather like some statistical significance paired tests for instance.\", \"minor_remarks\": [\"Section 2: typo: reported in 4 -> reported in Section 4\", \"Section 4.2: how many unlabeled examples from the target are used for the unsupervised adaptation approaches, is it 10 per categories as in the test set or 20 as for the source domain train set?\", \"Section 4.3: typo: make use a subspace -> make use of a subspace .\", \"Section 4.6: the discussion on the size of the subspace is useless. It is pretty obvious that a dimension lower than the number of categories is detrimental.\"]}", "{\"review\": \"Thank you to all the reviewers for your comments and suggestions. We would like to reiterate the main contributions of our paper and will respond to specific reviewer comments individually.\", \"our_work_contributes_to_our_understanding_of_learned_representations_in_the_following_ways\": \"1) Our work shows that deep representations learned on large source datasets lessen, but by no means remove the problem of dataset bias; this is important as some people might and have claimed otherwise. Demonstrating performance of deep representations under domain shift is a relevant and novel contribution that paves the way for future work that will be of broad interest to the community. Additionally, we show how the novel combination of existing DA and Deep techniques allows an interesting operating point for adaptation to a new domain (or task) when too few data are available to fine tune in the conventional deep way. We show various adaptation methods are able to improve with as few as one (or none) target labels.\\n\\n2) Our work shows novel experimental results on a standard domain adaptation dataset which has been extensively used in the literature. In particular, we choose to focus on the hardest shift -- amazon->webcam and augment the standard dataset by additionally considering ImageNet as a source domain. We also have practical motivations for considering webcam as a target domain since it is the most similar to a robotic vision domain.\\n\\n 3) Our work shows that some layers of the representation are better for domain adaptation than others, although we do not yet propose an automatic way of selecting the 'optimal' layer.\"}", "{\"reply\": \"Thank you for your comments and suggestions.\", \"feature_selection\": \"We agree that there is great potential in using lower layers in adaptation. However, the trend seems to be that directly extracting representations from lower layers, decreases performance as you go lower. We have recently experimented with using the pooled output of the fifth layer and have found uniformly worse performance than the corresponding DeCAF6 results. 
Additionally, the feature dimensionality increases dramatically once we dip down into the convolutional layers, making it difficult to apply adaptation techniques with those representations. In the future we plan to mitigate this problem by investigating adaptation techniques for high dimensional and spatially structured features, but we consider that beyond the scope of this work.\\n\\nWe agree with the reviewer that the unsupervised methods we present could be used simply to learn a metric and then supervised data could be used to train a stronger classifier. We feel it is important to demonstrate achievable unsupervised transductive results with this setup. Our experiments using GFK as a supervised method performed better than the unsupervised GFK method, but worse than most of the other adaptation methods that were specifically developed for the supervised setting. To avoid confusion about the GFK method we omitted the supervised results in our paper.\"}", "{\"title\": \"review of One-Shot Adaptation of Supervised Deep Convolutional Models\", \"review\": \"The submission tackles the problem of one-shot classifier adaptation between biased datasets that contain overlapping object categories. The authors design a number of experiments to evaluate known transfer/adaptation approaches on deep convnet features taken from the last 3 layers of the Krizhevsky network. The basic approach is that the convnet is trained on LSVRC 1000-category data, then 16 categories are chosen that overlap with categories in 2 other datasets (amazon and webcam). Feature representations are taken from one of the layers of the network using amazon or imagenet data and webcam data, and adaptive classifiers are tested using the amazon or imagenet source and a single webcam image as target.\\n\\nThere is little that is innovative in the submission, since it uses only published or trivial approaches for the convnet and the domain adaptation and the empirical results are not broad enough. Moreover, the work does not contribute to our understanding of learned representations, so I see little relevance for ICLR. \\n\\nThe paper is well-written and offers a number of intuitive explanations for the results, although some of the conclusions don't seem well-justified given the limited evidence from only one target domain. The authors identify an interesting next step of doing learning in the convnet layers using feedback through the adaptation classifier, which could be worthwhile.\"}", "{\"title\": \"review of One-Shot Adaptation of Supervised Deep Convolutional Models\", \"review\": \"The submission tackles the problem of one-shot classifier adaptation between biased datasets that contain overlapping object categories. The authors design a number of experiments to evaluate known transfer/adaptation approaches on deep convnet features taken from the last 3 layers of the Krizhevsky network. The basic approach is that the convnet is trained on LSVRC 1000-category data, then 16 categories are chosen that overlap with categories in 2 other datasets (amazon and webcam). Feature representations are taken from one of the layers of the network using amazon or imagenet data and webcam data, and adaptive classifiers are tested using the amazon or imagenet source and a single webcam image as target.\\n\\nThere is little that is innovative in the submission, since it uses only published or trivial approaches for the convnet and the domain adaptation and the empirical results are not broad enough. 
Moreover, the work does not contribute to our understanding of learned representations, so I see little relevance for ICLR. \\n\\nThe paper is well-written and offers a number of intuitive explanations for the results, although some of the conclusions don't seem well-justified given the limited evidence from only one target domain. The authors identify an interesting next step of doing learning in the convnet layers using feedback through the adaptation classifier, which could be worthwhile.\"}" ] }
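One concrete element of the discussion above is the linear-interpolation baseline, in which source-only and one-shot target classifier scores are blended with a weight alpha (the hyperparameter examined in Figure 1a of the paper). The sketch below is an illustrative NumPy version of that alpha sweep with made-up score arrays; it is not the authors' code and assumes per-class decision scores from the two classifiers are already computed.

import numpy as np

def blend_accuracy(source_scores, target_scores, labels, alphas):
    # source_scores, target_scores: (n_examples, n_classes) decision values.
    # labels: (n_examples,) integer class labels; alphas: interpolation weights in [0, 1].
    results = {}
    for a in alphas:
        blended = (1.0 - a) * source_scores + a * target_scores
        results[float(a)] = float((blended.argmax(axis=1) == labels).mean())
    return results

# Toy example: 160 test points over 16 categories, matching the webcam test splits.
rng = np.random.default_rng(0)
labels = rng.integers(0, 16, size=160)
src_scores = rng.normal(size=(160, 16))
tgt_scores = rng.normal(size=(160, 16))
print(blend_accuracy(src_scores, tgt_scores, labels, np.linspace(0.0, 1.0, 11)))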
R5x4IjeY4351N
Why does the unsupervised pretraining encourage moderate-sparseness?
[ "Jun Li", "Wei Luo", "Jian Yang", "Xiaotong Yuan" ]
It is well known that direct training of deep multi-layer neural networks (DNNs) will generally lead to poor results. A major advance in recent years is the invention of various unsupervised pretraining methods to initialize network parameters, and it was shown that such methods lead to good prediction performance. However, the reason for the success of the pretraining has not been fully understood, although it was argued that regularization and better optimization play certain roles. This paper provides another explanation for the effectiveness of the pretraining, where we empirically show that the pretraining leads to a higher level of sparseness of hidden unit activation in the resulting neural networks, and the higher sparseness is positively correlated with faster training speed and better prediction accuracy. Moreover, we also show that rectified linear units (ReLU) can capture the sparseness benefits of the pretraining. Our implementation of DNNs with ReLU does not require the pretraining, but achieves comparable or better prediction performance than traditional DNNs with pretraining on standard benchmark datasets.
[ "pretraining", "unsupervised pretraining", "dnns", "methods", "relu", "direct training", "deep", "neural networks", "poor results", "major progress" ]
submitted, no decision
https://openreview.net/pdf?id=R5x4IjeY4351N
https://openreview.net/forum?id=R5x4IjeY4351N
ICLR.cc/2014/conference
2014
{ "note_id": [ "O5Q3nYYA3cn-B", "ZTe2rdw5jcTGU", "KnM8tXXIeyngG", "ddASypuiWRb9E", "4CLOCIvoAizst", "99dtL-WuDn5xP" ], "note_type": [ "review", "review", "comment", "comment", "review", "comment" ], "note_created": [ 1391945040000, 1391857620000, 1392743700000, 1392743580000, 1391471580000, 1392743460000 ], "note_signatures": [ [ "anonymous reviewer 090b" ], [ "anonymous reviewer ae2f" ], [ "Jun Li" ], [ "Jun Li" ], [ "anonymous reviewer 230f" ], [ "Jun Li" ] ], "structured_content_str": [ "{\"title\": \"review of Why does the unsupervised pretraining encourage moderate-sparseness?\", \"review\": \"The authors study the effect of unsupervised pre-training on the levels of sparsity that hidden nodes have. The claimed contributions are that unsupervised pre-training encourages what they call \\u201cmoderate-sparseness\\u201d and that their implementation of relus + fully-connected nets achieves state of the art performance on a number benchmarks such as MNIST, CIFAR-10 and JC-NORB.\\n\\nI have found the beginning of Section 3.1 very confusing to read -- it requires some rephrasing (what is 1/R?) especially since it seems relatively crucial in terms of understanding the authors\\u2019 main point. The forward-references at the end of that Section are also confusing the reader. In Section 3.2 -- where does the 0.15 threshold come from? Why is there a jump in Figure 1? What happened at epoch 25?\\n\\nIt would be best if the authors put the best comparable numbers in the tables along with their *citations*, rather than just presenting their own results or other results without references attached.\\n\\nIt is unclear how the authors\\u2019 implementation of relus + fully-connected nets differs from the one initially proposed by Hinton and collaborators (as well as the others cited)? Why are the authors\\u2019 results better?\\n\\nI\\u2019d say the paper is strangely structured/motivated: the claim that ReLUs produce sparser activations is believable and they measure it. The authors also measure that rectified units achieve better test errors (well-known before that). But I don\\u2019t see any evidence that sparsity of rectified units causes better generalization -- rectified units have other properties (such as the absence of saturation) that could be hypothesized to make them generalize better. I would say that the authors should work on finding a mechanism for why sparsity specifically is beneficial in supervised-backprop nets (the fact that unsupervised pre-training *also* leads to sparse representations, according to their metrics, is not a mechanism in and of itself). \\n\\nGenerally the paper has a number of typos, awkward phrasings (e.g., \\u201cthis paper does not intend to obliterate the contribution\\u201d)\"}", "{\"title\": \"review of Why does the unsupervised pretraining encourage moderate-sparseness?\", \"review\": \"Brief summary of paper:\\nThe paper is an experimental evaluation of the sparsity levels of representations in deep networks learned through several approaches. These include 3 types of non-linearity (sigmoid, tanh and ReLU), with and without pretraining, and dropout. The main findings/conclusions are that pretraining yields a moderate level of representation sparsity, and that using their brand of purely supervised ReLU with dropout yields even sparser solutions and better classification on MNIST/CIFAR10/NORB.\", \"assessment\": \"Overall I see very little originality in this paper, the conclusions are not really novel, and the paper is very unclearly written. 
Contrary to its title it does not bring any insight as to '*WHY* unsupervised pretraining encourages moderate sparseness'. A thorough and clearly presented empirical study of the sparsity properties of representations learned by different algorithms could have been interesting. But unfortunately, in its current state this work is extremely unclearly written, with confusing terminology, vague reasoning, poor structure, and insufficiently described experimental setup. The classification performance obtained with 'their ReLU' appears rather good, but the paper does not investigate in depth what would explain the improvement over the other ReLU results from the literature they report. If their approach/implementation indeed has an advantage, its specificity should be explained and evaluated in detail. It is unclear from the writeup whether hyper-parameters were chosen/tuned using proper methodology.\\n\\nPros & Cons:\\n- lack of originality\\n- very unclear, confusing, writeup\"}", "{\"reply\": \"extbf{Response:} to Anonymous 090b. Please put the responses into the NIPS 2013 latex and generate PDF because there are some tables and equations.\\n\\nThanks for your detailed comments. The typos will be corrected in the coming version and the following are our responses to Anonymous 090b' comments.\\n\\n\\textbf{Question 1:} emph{I have found the beginning of Section 3.1 very confusing to read -- it requires some rephrasing (what is $\\nrac{1}{R}$?) especially since it seems relatively crucial in terms of understanding the authors' main point. The forward-references at the end of that Section are also confusing the reader.}\\n\\n\\textbf{Response 1:} We assume that the number of classes (or object categories) is $R$ in a database. $\\nrac{1}{R}$ is an ideal sparseness measure. The more details and our main point are in the \\textbf{Response 1} of Anonymous ae2f.\\n\\n\\textbf{Question 2:} emph{In Section 3.2 -- where does the 0.15 threshold come from? Why is there a jump in Figure 1? What happened at epoch 25?}\\n\\n\\textbf{Response 2:} To facilitate the AOM and AIM, we firstly calculate the mean center features of every class. Then we use the mean center features to calculate the AOM and AIM. The hyperparameter $\\tau$ is used to distinguish the activation unit of the mean center feature. The $\\tau$ are set to $0.05$ as we consider a unit of the mean center feature non-activated if the absolute value of its activation is below $0.05 $. The different hyperparameters $\\tau$ ($\\tau=0.01$ or $\\tau=0.05$) could lead to similar result that compared to DNNs with sigmoid activation function, the features have low AOM and high AIM. The more details are in the \\textbf{Response 2} of Anonymous 230f. \\n\\nThe reason of the jump in Figure 1 is the different momentums. Generally, the momentum is 0.5 in the first 25 epochs and is 0.9 in the rest 25 epochs.\\n\\n\\textbf{Question 3:} emph{It is unclear how the authors' implementation of relus + fully-connected nets differs from the one initially proposed by Hinton and collaborators (as well as the others cited)? Why are the authors' results better?}\\n\\n\\textbf{Response 3:} In really, we also want to know the reason. There are some different details: 1) On MNIST we choose masking noise as the corruption process: each pixel has a probability of 0.2 of being artificially set to 0 and we do not choose masking noise on CIFAR10 and JC-NORB. But on all database Glorot et al. 
(2009) choose a a probability of 0.25; 2) We do use an $L2$ penalty on the activations with a coefficient of 0.00001, but Glorot et al. (2009) uses an $L1$ penalty on the activations with a coefficient of 0.001. 3) We do mix up the momentum with the learning rate.\\nWe will report our codes of DNNs with ReLU and you can see the details that are in the \\textbf{Response 2} of Anonymous ae2f. \\n\\n\\n\\textbf{Question 4:} emph{I'd say the paper is strangely structured/motivated: the claim that ReLUs produce sparser activations is believable and they measure it. The authors also measure that rectified units achieve better test errors (well-known before that).\\nBut I don't see any evidence that sparsity of rectified units causes better generalization -- rectified units have other properties (such as the absence of saturation) that could be hypothesized to make them generalize better.}\\n\\n\\textbf{Response 4:} The rectified linear units (ReLU) and pretraining give us the sparseness benefits. Based on experiments we conjecture that to some extent the ReLU can capture the sparseness benefits of the pretraining. Later, we will visualize the filters of DNNs with ReLU. If ReLU and pretraining have some similar filters, then the ReLU can capture the sparseness benefits of the pretraining.\\n\\n\\textbf{Question 5:} emph{I would say that the authors should work on finding a mechanism for why sparsity specifically is beneficial in supervised-backprop nets (the fact that unsupervised pre-training *also* leads to sparse representations, according to their metrics, is not a mechanism in and of itself). Generally the paper has a number of typos, awkward phrasings (e.g., 'this paper does not intend to obliterate the contribution')}\\n\\n\\textbf{Response 5:} The mechanism is based on the feature learning. Beyond the feature learning, the activation units of features are sparse (You can see the details that are in the \\textbf{Response 1} of Anonymous ae2f). The typos and awkward phrasings will be corrected in the coming version.\"}", "{\"reply\": \"extbf{Response} to Anonymous ae2f. Please put the responses into the NIPS 2013 latex and generate PDF because there are some tables and equations.\\n\\nThanks for your detailed comments. The typos will be corrected in the coming version and the following are our responses to Anonymous ae2f' comments.\\n\\n\\textbf{Question 1:} emph{Overall I see very little originality in this paper, the conclusions are not really novel, and the paper is very unclearly written. Contrary to its title it does not bring any insight as to '*WHY* unsupervised pretraining encourages moderate sparseness'. A thorough and clearly presented empirical study of the sparsity properties of representations learned by different algorithms could have been interesting. But unfortunately, in its current state this work is extremely unclearly written, with confusing terminology, vague reasoning, poor structure, and insufficiently described experimental setup.}\\n\\n\\n\\textbf{Response 1:} The reason is as following:\\n\\nIn DNNs with the unsupervised pretraining, the promoted procedure of the data can be described at many levels (Lee2009,Bengio2012): raw-level features (data samples), lower-level features (edges and object-parts) and higher-level features (objects). Correspondingly, the first, second, and third hidden layers of DNNs learn edge detectors, object parts, and objects respectively. Those detectors (filters) are visualized in Fig. 1 of citep{Erhan2010} (or Fig. 3 of citep{Lee2009}). 
This shows that the higher layer learned to combine the lower layer's feature into more complex, higher-level feature.\\n\\nWe assume that the number of classes (or object categories) is $R$ in a database, such as MNIST ($R=10$). Suppose that in DNNs the number of hidden units of first (lower) layer is $M$ and the number of hidden units of second (higher) layer is $N$. Correspondingly, the number of detectors of first and second layers is $M$ and $N$, respectively. The unsupervised pretraining averagely trains the detectors because every class has the similar number of simples. Ideally, the number of higher-level detectors of every class is a little more than $\\nrac{N}{R}$ since higher-level detectors of different objects share a few common lower-level feature detectors. The number of lower-level detectors of every class is above $\\nrac{M}{R}$ since lower-level detectors of different objects share a few common raw-level feature detectors.\\n\\nBeyond the feature learning, we study the activations of units of features. A simple belonged to a class transfers from the lower layer to the higher layer. So, detectors belonged to the class are detected in higher layer. The number of higher-level detectors belonged to the class is a little more than $\\nrac{N}{R}$. In other word, the units that correspond to the detectors are activated in higher layer. The number of activation units of higher-level feature is a little more than $\\nrac{N}{R}$. Thus, we hope that the ideal sparseness measure is a little more than the ratio between \\nthe number of activation units of higher layer and the number of units of higher layer. The ratio is $\\nrac{\\nrac{N}{R}}{N} = \\nrac{1}{R}$.\\n\\nWe do a lot of experiments (you can see \\textbf{Responses 1 and 2} of Anonymous 230f) and use Hoyer's sparseness measure (Hoyer 2004) and Rath's sparseness measure (Rath 2008) to calculate the number of low activation (or zero) units in a feature. This shows that unsupervised pretraining encourages sparseness.\\n\\n\\textbf{Question 2:} emph{The classification performance obtained with 'their ReLU' appears rather good, but the paper does not investigate in depth what would explain the improvement over the other ReLU results from the literature they report. If their approach/implementation indeed has an advantage, its specificity should be explained and evaluated in detail. It is unclear from the writeup whether hyper-parameters were chosen/tuned using proper methodology.}\\n\\n\\textbf{Response 2:} The rectified linear units (ReLU) and pretraining give us the sparseness benefits. Based on experiments we conjecture that to some extent, the rectified linear units (ReLU) can capture the sparseness benefits of the pretraining. Later, we will visualize the filters of DNNs with ReLU. 
If ReLU) and pretraining ReLU and pretraining have some similar filters, then the ReLU can capture the sparseness benefits of the pretraining.\", \"we_will_report_our_codes_of_dnns_with_relu_and_the_details_are_as_following\": \"Consider an $L$-layer network.\\nLet us denote by $a^ell$ the output vector of the $ell$th layer, starting with $a^1$ (the input), and finishing with a linear combination of the variables $a^{L-1}$.\\nIn a fully connected neural network, we can recursively define for $ell=2$ to $L-1$:\\n\\begin{align}\\na_{j}^{ell}=f_{j}^{ell}(z_{j}^{ell})qquad qquad z_{j}^{ell}=sum_{i=1} w_{ji}^{ell-1}a_{i}^{ell-1}+b_{j}^{ell-1} ,\\nlabel{eq:nn}\\nend{align}\\nwhere $w_{ji}^{ell}$ denotes the weights from the $i$-th unit of the $ell$-th layer to the $j$-th unit of the $ell+1$th layer, $b_j^{ell}$ denotes the bias term for the $j$-th unit of the $ell+1$-th layer, and $f_{j}^{ell}(cdot)$ is an activation function (for example, sigmoid or hyperbolic tangent).\\nFor traditional neural networks, the model parameters are ${w_{ji}^{ell},b_j^ell}$, and the activation functions are fixed.\\nIn this paper, we the rectified linear units (ReLU).\\n\\nThe objective of learning is to find the optimal network parameters so that the network output $a^L$ matches the target closely. The output $a^{ell}$ can be compared to a target vector $t$ through a loss function $psi(a^{ell},t)$. There are two common loss functions in the neural network literature.\\nThe first is squared loss $psi(a^{ell},t)=|a^{ell}-t|^2=sum_j(a_j^{ell}-t_j)^2$, where $a_j^L=z_j^L$. The second loss function is the negative log-likelihood loss $psi(a^{ell},t)=-logpsi_t(a^L)=-sum_jlog(a_j^L)$, where $a_j^L=\\nrac{e^{z_j^L}}{sum_ie^{z_i^L}}$. Given the loss function, the goal of neural network learning is to minimize the object function\\n\\begin{align}\\nE(phi)=sum_{n=1}^NE_n(phi) , qquad E_n(phi)= psi(a_n^L(x_n,phi),t_n) \\u951b?\\nlabel{eq:opt}\\nend{align}\\nwhere $phi$ denote the set of all model parameters ${w_{ji}^ell, b_j^{ell}}$.\\n\\nIn order to train the model parameters, we employ the standard {em back-propagation} algorithm that can be divided into three phases (cite{Bishop2006}): forward propagation, error back-propagation, and parameter update.\\nGiven a training set of $N$ input sample ${x_n}$, where $n =1,cdots,N$, together with a set of target vectors ${t_n}$, the three phases can be described as follows, and the resulting algorithm is in Figure~\\nef{fig:alg}.\\n\\n\\textbf{Forward propagation}: We construct the forward propagation of information through the network: that is, we compute the hidden unit outputs ${z_j^ell;a_j^ell}$ using\\neqref{eq:nn}.\\n\\n\\textbf{Error back-propagation}:\\nThe back-propagation phase computes the gradient $partial E_n(phi)/partial phi$.\\nThe gradients with respect to the weights ${w_{ji}^ell}$ and biases ${b_j^ell}$ can be described as follows.\\nStarting at the output node $delta_j^L=\\nho_j^L=2(a_{j}^{ell}(x_n)-t_{nj})$ (least squares loss) or $delta_j^L=a_{j}^{ell}(x_n)-1_{t_n=j}, \\nho_j^L=\\nrac{1}{a_{j}^{ell}(x_n)}$ (negative log-likelihood loss); we compute the partial derivatives from $L-1$ to $2$ using the following chain-rule formulae:\\n\\begin{align}\\n\\nrac{partial E_n}{partial w_{ji}^{ell-1}}=delta_j^{ell}z_i^{ell-1} qquad \\nrac{partial E_n}{partial b_{j}^{ell-1}}=delta_j^{ell} \\nonumber\\\\\\ndelta_j^{ell}=\\nho_j^{ell}\\nrac{partial f_{j}^{ell}}{partial z_{j}^{ell}} qquad \\nho_j^{ell}=sum_{h} delta_h^{l+1}w_{hj}^{ell}\\nlabel{eq:grad}\\nend{align}\\nwhere 
$\\nrac{partial f_{j}^{ell}}{partial z_{j}^{ell}}$ easily calculates.\\n\\n\\textbf{Parameters update}: We employ (mini-batch) stochastic gradient descent (SGD) with both $L_2$ regularization for the weights $w_{ji}^ell$ and the momentum update rule (cite{Jacobs1988}). The update for all parameters $phi$ based on one data point at a time $t$ is\\n\\begin{align}\\nphi^{(t+1)}&=phi^{(t)} +\\trianglephi^{(t)} \\nonumber\\\\\\n \\trianglephi^{(t)}& =omega \\trianglephi^{(t-1)}-eta(1-omega)(\\nabla E_n(phi^{(t)})+gamma phi^{(t)}) ,\\nlabel{eq:param}\\nend{align}\\nwhere the parameter $eta$ is known as the learning rate, $omega$ is the momentum parameter and $gamma$ is the $L_2$ regularization parameter.\\nFor mini-batch, the gradient is replaced by the average gradient over the mini-batch.\\n\\n\\begin{figure}\\ncaption{Back-Propagation}\\nlabel{fig:alg}\\n\\begin{tabular}{l}\\n hline\\n\\textbf{initialize} all parameters (weights and biases) of the DNNs with $L$ layers\\\\\\n\\textbf{do}\\\\\\nhspace{0.5cm}\\textbf{for} each example $x_n$ in the training set\\\\\\nhspace{0.9cm} $a_n^L$ = DNNs-output(DNNs, $x_n$) using eqref{eq:nn}. $\\backslash *$ forward pass\\\\\\nhspace{0.9cm} $t_n$ = target output for $x_n$ \\\\\\nhspace{0.9cm} compute error $psi(a_n^L,t_n)$ using eqref{eq:opt} \\\\\\nhspace{0.9cm} compute gradients of all parameters in DNNs using eqref{eq:grad}. $\\backslash*$ backward pass\\\\\\nhspace{0.9cm} update all parameters in the DNNs using eqref{eq:param}\\\\\\nhspace{0cm} {\\bf until} a stopping criterion is satisfied\\\\\\n\\textbf{return} the network\\n\\\\ hline \\\\\\nend{tabular}\\nend{figure}\"}", "{\"title\": \"review of Why does the unsupervised pretraining encourage moderate-sparseness?\", \"review\": \"Review Summary\\nThis work empirically show the unsupervised pretraining encourage moderate-sparseness in a deep neural network, which could lead better classification performance. The author also proposed that ReLU DNNs without pretraining could also lead to moderate-sparseness. \\nThough empirically studying sparseness in DNNs is interesting and important, this submission could potentially be improved. \\n\\nPros\\n-- well-written and organized\\n-- an interesting idea for exploring the role of unsupervised pretraining in deep learning\\nCons\\n-- I am not convinced that the unsupervised pretraining itself indeed encourage sparseness in higher layers of networks. As mentioned in your paper, those pretraining algorithms try to decrease the reconstruction errors under some constrains. If those models were trained longer enough, the resulting networks could try their best to exploit their capacity to reconstruct data (in other word, the networks could become less and less sparse). \\n-- the author did not explain how the hyperparameters (of SPM, AIM and AOM) were chosen. The different hyperparameters could lead to very different conclusions. \\n-- the sparseness measurement used in this paper is improper. The activations of sigmoid and tanh are unlike ReLU, at a certain scale. You could consider Hoyer\\u2019s sparseness measure (Hoyer 2004) which is on a normalized scale and avoids the hyperparameter epsilon in the SPM.\\n-- AOM should also consider the activation overlapping between samples from any number of classes'. 
\\n\\nMinor comments\\nParagraph 4, page 2: The results of your DNNs are not state of the art for those data set.\", \"section_3\": \"I cannot understand assumption 2.\\nIn equation 2, what $x_{ij}$ represent?\\nIn equation 3, m?\"}", "{\"reply\": \"extbf{Response} to Anonymous 230f. Please put the responses into the NIPS 2013 latex and generate PDF because there are some tables and equations.\\n\\nThanks for your detailed comments. The typos will be corrected in the coming version and the following are our responses to Anonymous 230f' comments.\\n\\n\\textbf{Question 1:} emph{-- I am not convinced that the unsupervised pretraining itself indeed encourage sparseness in higher layers of networks. As mentioned in your paper, those pretraining algorithms try to decrease the reconstruction errors under some constrains. If those models were trained longer enough, the resulting networks could try their best to exploit their capacity to reconstruct data (in other word, the networks could become less and less sparse). -- the sparseness measurement used in this paper is improper. The activations of sigmoid and tanh are unlike ReLU, at a certain scale. You could consider Hoyer's sparseness measure (Hoyer 2004) which is on a normalized scale and avoids the hyperparameter $epsilon$ in the SPM.}\\n\\n\\textbf{Response 1:}\\n\\n1). If we could consider Hoyer's sparseness measure (HSPM), compared to DNNs with sigmoid activation function (Dsigmoid) the unsupervised pretraining also encourages sparseness. Table~\\nef{tab:hspm552} shows that HSPM of DBNs, RBMs and DNNs with ReLU (DReLU) is better than one of Dsigmoid.\\n\\n2). When the pretraining models were trained longer enough, HSPM of the RBMs could have an upper bound. Table~\\nef{tab:hspmrbm1} shows that when training epochs change from 100 to 1000, the upper bounds of HSPM of RBMs with 500 and 1000 hidden units are 0.451 and 0.528 respectively.\\n\\n3). When the number of hidden unites increases, the networks will become more sparse and also have an upper bound. Table~\\nef{tab:hspmrbm2} shows that when the number of hidden units changes from 500 to 10000, an upper bound of HSPM is 0.719 after 1000 training epochs.\\n\\n4). The unsupervised pretraining itself indeed encourage sparseness that HSPM of the networks has an upper bound in higher layers of networks. Table~\\nef{tab:deephspm} shows that HSPM of DNNs with five hidden (500) layers changes from 0.45 to 0.582 after 1000 training epochs and HSPM of DNNs with five hidden (1000) layers changes from 0.528 to 0.665.\\n\\n\\textbf{Question 2:} emph{-- the author did not explain how the hyperparameters (of SPM, AIM and AOM) were chosen. The different hyperparameters could lead to very different conclusions. -- AOM should also consider the activation overlapping between samples from any number of classes'.}\\n\\n\\textbf{Response 2:} The sparse parameters $epsilon$ are set to $0.05$ as we consider a unit activated if the absolute value of its activation is below $0.05 $. Compared to DNNs with sigmoid activation function, the different hyperparameters $epsilon$ also lead to sparseness.\\n\\nTo facilitate the AOM and AIM, we firstly calculate the mean center features of every class. Then we use the mean center features to calculate the AOM and AIM. The hyperparameter $\\tau$ is used to distinguish the activation unit of the mean center feature. 
The $\\tau$ are set to $0.05$ as we consider a unit of the mean center feature activated if the absolute value of its activation is below $0.05$.\\n\\nThe different hyperparameters $\\tau$ could lead to similar conclusions. For classification tasks, it is always desirable to extract features that have low AOM and high AIM because the features are most effective for preserving class separability. Compared to DNNs with sigmoid activation function, we have following result that the features have low AOM and high AIM under $\\tau=0.01$ (in Tables~\\nef{tab:aom1} and ~\\nef{tab:aim1}) and $\\tau=0.05$ (in Tables~\\nef{tab:aom5} and ~\\nef{tab:aim5}). Moreover, the AOM of higher-level features is lower than one of lower-level features in Tables~\\nef{tab:aom1} and ~\\nef{tab:aom5}.\\n\\n\\textbf{Question 3:} emph{Paragraph 4, page 2: The results of your DNNs are not state of the art for those data set.}\\n\\n\\textbf{Response 3:} Yes, the results of your DNNs are not state of the art for those data set as other papers use the convolution and dropout. We only do train DNNs with the standard back-propagation algorithm from randomly initialized parameters and the results are state of the art for those data set.\\n\\n\\textbf{Question 4:} emph{Section 3: I cannot understand assumption 2.}\\n\\n\\textbf{Response 4:} Since simples of different objects have a certain amount of common raw-level features, lower-level features of different objects share the common raw-level features and also have a certain amount of common lower-level features. Similarly, higher-level features share the common lower-level features.\\n\\n\\textbf{Question 5:}emph{In equation 2, what $x_{ij}$ represent?\\nIn equation 3, $m$?}\\n\\n\\textbf{Response 5:} Sorry, $x_{ij}$ should be replaced $h_{ij}$ that is the $j$th unit of the mean center features of $i$th class.\\n$m$ should be replaced $n$ that is the number of units of the mean center features.\\n\\n\\n\\begin{table}[!t]\\n\\nenewcommand{arraystretch}{1.3}\\ncaption{Hoyer's sparseness measure (HSPM) of DNNs (500-500-2000) on MNIST. The momentum is 0.5 in the first 25 epochs and is 0.9 in the rest 25 epochs.}\\nlabel{tab:hspm552}\\ncentering\\n\\begin{tabular}{c||c||c||c||c||c}\\nhlinehline\\nModels & data & 1st & 2nd & 3rd & error\\\\\\nhlinehline\\nDReLU & 0.634 & 0.584 & 0.492 & 0.482 & 1.17$%$\\\\\\nDBNs & 0.634 & 0.388 & 0.525 & 0.634 & 1.17$%$\\\\\\nRBMs & 0.634 & 0.398 & 0.576 & 0.665 & 1.80$%$\\\\\\nDsigmoid & 0.634 & 0.109 & 0.023 & 0.077 & 2.01$%$\\\\\\nhlinehline\\nend{tabular}\\nend{table}\\n\\n\\begin{table}[!t]\\n\\nenewcommand{arraystretch}{1.3}\\ncaption{Hoyer's sparseness measure (HSPM) of RBMs with different training epochs. The momentum is 0.5 in the first 100 epochs and 0.9 in the rest 900 epochs.}\\nlabel{tab:hspmrbm1}\\ncentering\\n\\begin{tabular}{c||c||c||c||c||c}\\nhlinehline\\nTraining epochs & 100 & 200 & 500 & 800 & 1000\\\\\\nhlinehline\\n784-500 & 0.261 & 0.425 & 0.441 & 0.448 & 0.451\\\\\\n784-1000 & 0.327 & 0.508 & 0.525 & 0.527 & 0.528\\\\\\nhlinehline\\nend{tabular}\\nend{table}\\n\\n\\begin{table}[!t]\\n\\nenewcommand{arraystretch}{1.3}\\ncaption{Hoyer's sparseness measure (HSPM) of RBMs with different number of hidden unites after training for 1000 epochs. 
The momentum is 0.5 in the first 100 epochs and 0.9 in the rest 900 epochs.}\\nlabel{tab:hspmrbm2}\\ncentering\\n\\begin{tabular}{c||c||c||c||c||c}\\nhlinehline\\nThe number & 500 & 1000 & 2000 & 5000 & 10000\\\\\\nhlinehline\\n & 0.451 & 0.528 & 0.598 & 0.676 & 0.719\\\\\\nhlinehline\\nend{tabular}\\nend{table}\\n\\n\\begin{table}[!t]\\n\\nenewcommand{arraystretch}{1.3}\\ncaption{Hoyer's sparseness measure (HSPM) of DNNs (500-500-500-500-500 and 1000-1000- 1000-1000-1000) on MNIST. The momentum is 0.5 in the first 100 epochs and 0.9 in the rest 900 epochs.}\\nlabel{tab:deephspm}\\ncentering\\n\\begin{tabular}{c||c||c||c||c||c}\\nhlinehline\\nLayers & 1st & 2nd & 3rd & 4th & 5th\\\\\\nhlinehline\\n500-500-500-500-500 & 0.45 & 0.582 & 0.509 & 0.560 & 0.520\\\\\\n1000-1000-1000-1000-1000 & 0.528 & 0.665 & 0.589 & 0.654 & 0.606\\\\\\nhlinehline\\nend{tabular}\\nend{table}\\n\\n\\begin{table}[!t]\\n\\nenewcommand{arraystretch}{1.3}\\ncaption{AOM of DNNs (500-500-2000) under different number of activation overlapping classes on MNIST. $\\tau$ is set to $0.01$.}\\nlabel{tab:aom1}\\ncentering\\n\\begin{tabular}{c||c||c||c||c||c||c||c||c||c||c}\\nhlinehline\\n The number & & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\\\\\nhlinehline \\n & data & 0.409 & 0.368 & 0.338 & 0.314 & 0.292 & 0.274 & 0.256 & 0.240 & 0.226 \\\\\\nhlinehline \\nDsigmoid & 1st & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000\\\\\\n & 2nd & 0.994 & 0.993 & 0.992 & 0.991 & 0.991 & 0.990 & 0.989 & 0.989 & 0.988\\\\\\n & 3rd & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000\\\\\\nhlinehline \\nRBMs & 1st & 0.978 & 0.968 & 0.959 & 0.950 & 0.941 & 0.933 & 0.925 & 0.917 & 0.910\\\\\\n & 2nd & 0.686 & 0.602 & 0.537 & 0.484 & 0.439 & 0.400 & 0.366 & 0.337 & 0.312\\\\\\n & 3rd & 0.815 & 0.734 & 0.658 & 0.586 & 0.518 & 0.452 & 0.388 & 0.326 & 0.267\\\\\\nhlinehline \\nend{tabular}\\nend{table}\\n\\n\\begin{table}[!t]\\n\\nenewcommand{arraystretch}{1.3}\\ncaption{AIM of DNNs (500-500-2000) under different number of activation overlapping classes on MNIST. $\\tau$ is set to $0.01$.}\\nlabel{tab:aim1}\\ncentering\\n\\begin{tabular}{c||c||c||c||c||c||c||c||c||c||c}\\nhlinehline\\n the number & & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\\\\\nhlinehline \\n & data & 0.069 & 0.032 & 0.018 & 0.010 & 0.005 & 0.002 & 0.001 & 0.000 & 0.000 \\\\\\nhlinehline \\nDsigmoid & 1st & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 \\\\\\n & 2nd & 0.001 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000\\\\\\n & 3rd & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000\\\\\\nhlinehline RBMs & 1st & 0.006 & 0.001 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000\\\\\\n & 2nd & 0.101 & 0.033 & 0.014 & 0.007 & 0.003 & 0.001 & 0.001 & 0.000 & 0.000\\\\\\n & 3rd & 0.033 & 0.005 & 0.002 & 0.001 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000\\\\\\nhlinehline\\nend{tabular}\\nend{table}\\n\\n\\n\\begin{table}[!t]\\n\\nenewcommand{arraystretch}{1.3}\\ncaption{AOM of DNNs (500-500-2000) under different number of activation overlapping classes on MNIST. 
$\\tau$ is set to $0.05$.}\\nlabel{tab:aom5}\\ncentering\\n\\begin{tabular}{c||c||c||c||c||c||c||c||c||c||c}\\nhlinehline\\nthe number & & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\\\\\nhlinehline & data & 0.304 & 0.260 & 0.227 & 0.201 & 0.178 & 0.158 & 0.141 & 0.125 & 0.110\\\\\\nhlinehline Dsigmoid & 1st & 0.987 & 0.981 & 0.975 & 0.970 & 0.965 & 0.960 & 0.955 & 0.950 & 0.946\\\\\\n & 2nd & 0.961 & 0.951 & 0.942 & 0.934 & 0.927 & 0.920 & 0.914 & 0.909 & 0.904\\\\\\n & 3rd & 0.994 & 0.993 & 0.992 & 0.991 & 0.991 & 0.990 & 0.990 & 0.989 & 0.989\\\\\\nhlinehline \\nRBMs & 1st & 0.863 & 0.812 & 0.770 & 0.734 & 0.704 & 0.678 & 0.655 & 0.636 & 0.618\\\\\\n & 2nd & 0.421 & 0.329 & 0.271 & 0.231 & 0.202 & 0.179 & 0.161 & 0.145 & 0.132\\\\\\n & 3rd & 0.164 & 0.096 & 0.059 & 0.038 & 0.026 & 0.019 & 0.014 & 0.011 & 0.009\\\\\\nhlinehline \\nend{tabular}\\nend{table}\\n\\n\\begin{table}[!t]\\n\\nenewcommand{arraystretch}{1.3}\\ncaption{AIM of DNNs (500-500-2000) under different number of activation overlapping classes on MNIST. $\\tau$ is set to $0.05$.}\\nlabel{tab:aim5}\\ncentering\\n\\begin{tabular}{c||c||c||c||c||c||c||c||c||c||c}\\nhlinehline\\nthe number & & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\\\\\nhlinehline \\n & data & 0.077 & 0.038 & 0.024 & 0.017 & 0.013 & 0.010 & 0.007 & 0.006 & 0.005 \\\\\\nhlinehline \\nDsigmoid & 1st & 0.004 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000\\\\\\n & 2nd & 0.008 & 0.001 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000\\\\\\n & 3rd & 0.002 & 0.001 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000\\\\\\nhlinehline \\nRBMs & 1st & 0.055 & 0.010 & 0.002 & 0.001 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000\\\\\\n & 2nd & 0.168 & 0.082 & 0.046 & 0.029 & 0.019 & 0.013 & 0.009 & 0.007 & 0.006\\\\\\n & 3rd & 0.127 & 0.062 & 0.033 & 0.020 & 0.013 & 0.009 & 0.006 & 0.003 & 0.000\\\\\\nhlinehline\\nend{tabular}\\nend{table}\"}" ] }
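The sparseness statistics debated in the reviews and rebuttal above can be stated precisely. The sketch below (an illustrative NumPy version, not the authors' code) computes Hoyer's (2004) measure, reported in the rebuttal as HSPM, alongside a simple fraction-of-inactive-units count that depends on a threshold such as the epsilon/tau values the reviewers question.

import numpy as np

def hoyer_sparseness(h, eps=1e-12):
    # Hoyer (2004): 0 for a uniform (fully dense) vector, 1 for a one-hot vector.
    h = np.abs(np.asarray(h, dtype=float))
    n = h.size
    ratio = h.sum() / (np.sqrt((h ** 2).sum()) + eps)
    return float((np.sqrt(n) - ratio) / (np.sqrt(n) - 1.0))

def inactive_fraction(h, tau=0.05):
    # Fraction of units whose absolute activation falls below the threshold tau.
    return float((np.abs(h) < tau).mean())

# ReLU-like activations: many exact zeros, a few positive values.
acts = np.maximum(np.random.default_rng(0).normal(size=1000) - 1.0, 0.0)
print(hoyer_sparseness(acts), inactive_fraction(acts))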
ZZ7T6hXbaEcAQ
An empirical analysis of dropout in piecewise linear networks
[ "David Warde-Farley", "Ian Goodfellow", "Aaron Courville", "Yoshua Bengio" ]
The recently introduced dropout training criterion for neural networks has been the subject of much attention due to its simplicity and remarkable effectiveness as a regularizer, as well as its interpretation as a training procedure for an exponentially large ensemble of networks that share parameters. In this work we empirically investigate several questions related to the efficacy of dropout, specifically as it concerns networks employing the popular rectified linear activation function. We investigate the quality of the test-time weight-scaling inference procedure by evaluating the geometric average exactly in small models, as well as comparing the performance of the geometric mean to the arithmetic mean more commonly employed by ensemble techniques. We explore the effect of tied weights on the ensemble interpretation by training ensembles of masked networks without tied weights. Finally, we investigate an alternative criterion based on a biased estimator of the maximum likelihood ensemble gradient.
[ "dropout", "empirical analysis", "networks", "piecewise linear networks", "piecewise linear", "criterion", "neural networks", "subject", "much attention due", "simplicity" ]
submitted, no decision
https://openreview.net/pdf?id=ZZ7T6hXbaEcAQ
https://openreview.net/forum?id=ZZ7T6hXbaEcAQ
ICLR.cc/2014/conference
2014
{ "note_id": [ "Kz2rIbvO-Uzih", "K2d4DqsAXe2CJ", "SIn-NMWgu4I90", "jLaDjXnpvfjwh", "GfqAwRlBAJfi7", "nn6Na4T5NHmCy", "Lkv0jtOBIqkz-" ], "note_type": [ "review", "review", "review", "review", "review", "review", "review" ], "note_created": [ 1395643500000, 1392817500000, 1394206680000, 1391804460000, 1391401860000, 1391804340000, 1391474340000 ], "note_signatures": [ [ "Phil Bachman" ], [ "David Warde-Farley" ], [ "David Warde-Farley" ], [ "anonymous reviewer eb20" ], [ "anonymous reviewer 3e6b" ], [ "anonymous reviewer eb20" ], [ "anonymous reviewer 925a" ] ], "structured_content_str": [ "{\"review\": \"For the experiments presented in Fig. 3, I think you guys should just train the tied-weight ensembles with a reduced number of masks too. Then you could directly expose the effect of weight tying on the performance of the induced ensemble, while conveniently avoiding the need to 'max-out' (heh) the size of the ensembles you train with untied weights. I.e., plot the expected performance of both tied-weight ensembles and untied-weight ensembles as a function of the number of distinct masks (i.e. ensemble members) used in training.\\n\\nWhile the untied-weight ensembles would still be implicitly 'averaging an exponential number of models' at runtime, you would only be explicitly training as many of those models as are explicitly trained in the corresponding untied-weight ensemble.\"}", "{\"review\": \"We thank all of our reviewers for their very useful feedback. We are actively working on a revised version of the manuscript, which will be available by the end of the week.\\n\\nTwo of the reviewers mentioned difficulty interpreting Figures 1 and 2. We apologize that these were not as clear as they could be and are endeavouring to improve them. It seems likely we will simplify with a scatterplot of test error (approximate) versus test error (exact), and test error (arithmetic mean-exact) versus test error (geometric mean-exact). The main issue with these, especially in the case of Figure 1, is that the differences are barely noticeable and all points appear to fall on the y=x line.\", \"anonymous_3e6b\": [\"We have made efforts to increase the number of masks used in Section 6. We will endeavour to further increase this for the final copy. As far as larger datasets, while this is an interesting direction, we believe that since it is a regularization method, the problems solved by dropout (that of overfitting) are still very acutely present and studiable on small, relatively simple datasets, which have the added benefit of being amenable to the kinds of exhaustive enumeration of masks that we have carried out.\", \"The investigations in sections 4 and 5 specifically concern the properties of dropout-trained networks at test-time, therefore we felt comparisons to networks trained without dropout to be unwarranted as it is not clear what these comparisons would be aiming to demonstrate. Sections 6 concerns the performance of untied weight ensembles versus dropout; we can easily add a non-dropout baseline. Section 7\\u2019s results are all taken with respect to a network with identical hyperparameters trained without dropout; the test error without dropout is plotted on the X axis and test error with dropout (or dropout boosting) is plotted on the Y. 
We apologize that this was not clear, we can make this clearer in the axis labels.\", \"Thank you for the suggestion for a point-by-point summary of all possible (or at least all of the ones considered in the paper) interpretations of dropout, we will add this.\", \"While some of the conclusions presented in this paper undoubtedly apply to maxout as well, we felt that thoroughly addressing the very popular case of rectified linear networks was important. Had we repeated all of our analyses with maxout (bearing in mind the additional confounding factor of the pool size hyperparameter associated with maxout units) the paper would\\u2019ve extended well beyond the recommended length.\", \"We apologize that these were not clear. Differences are relative to the magnitude reported on the x axis. See our comments above.\"], \"anonymous_925a\": [\"We appreciate the feedback on Figure 1 and 2. Reviewer 3e6b also found them confusing, so we will be addressing this (see comments above).\", \"Regarding Figure 3, we chose 120 in order to match Nitish Srivastava\\u2019s similar figure, but ran 360 because we would then have 3 independent ensembles with which we could report error bars. We have already increased the number of networks trained and will plot the curve all the way until the end, even without error bars.\"], \"anonymous_eb20\": [\"Thank you for your constructive comments. We will address all of them in turn. A few specifics that warrant a particular reply:\", \"We do cite Jarrett et al in our Introduction, we will add a further citation where you suggest.\", \"We speculate that since the scaling is exact in the case of linear networks, locally linear networks (being \\u201cclose\\u201d to linear, at least in a small region around training examples) will more closely approximate the ensemble geometric mean than with saturating nonlinearities. We will explicitly state this.\", \"The arithmetic mean is roughly as expensive as the geometric mean, but lacks a high quality tractable approximation like the weight scaling trick. We will explicitly state this.\", \"In terms of the stochastic estimator of the ensemble gradient, dropout boosting employs masks drawn from the same distribution as dropout bagging (a.k.a. ordinary dropout). Indeed, the first term of the dropout boosting update is simply the update utilized by dropout. The second term is the gradient of the log likelihood of the same randomly chosen submodel (i.e. same dropout mask) but substituting the true targets with the approximately-averaged ensemble prediction. Training proceeds in the same fashion as dropout, where one randomly selected subnetwork (from the same distribution over masks) is updated, but according to a more globally aware criterion. If the mask application is viewed merely as the addition of noise, both criteria employ identical noise, as the selection procedure and random distribution over masks is identical. We will endeavour to make the text clearer on this point.\"]}", "{\"review\": \"We have addressed many of the comments kindly provided by our reviewers and apologize for not posting an updated manuscript sooner. Pending acceptance to arXiv.org, a revised version is available at http://www-etud.iro.umontreal.ca/~wardefar/iclr2014.pdf\\n\\nIn addition to incorporating reviewer feedback, we made one change to the experiments in Section 6 in order to be more charitable to the untied weights ensemble. 
Specifically, we altered the manner in which we set the hyperparameters used to train the ensemble members -- we now use a network with the same number of hidden units as the best performing dropout network but reoptimize all other hyperparameters to perform well when training via SGD without any masks. This considerably improves the untied ensemble's performance, but dropout still performs considerably better. We also provide the SGD baseline that resulted from optimizing these hyperparameters. We now employ 600 ensemble members; we hope to increase the number of ensemble members for the final copy.\"}", "{\"review\": \"This paper provided a very interesting analysis of dropout, which has recently shown great success in a variety of DNN applications. The paper is well written and the experimental analysis is strong. My comments are mainly to help improve the paper and clarify ambiguity in some places\\n\\n\\u2022\\tSection 1, page1: when you give the equation f(x) = max(0,x) you should state that this is known as a recitified linear unit (ReLU) and provide the appropriate reference, which to my knowledge was first done here: Jarrett, K., Kavukcuoglu, K., Ranzato, M., and LeCun, Y. What is the best multi-stage architecture for ob- ject recognition? In Proc. International Conference on Computer Vision (ICCV\\u201909). IEEE, 2009.\\n\\u2022\\tSection 2.2, page 3: why does ReLU work better than sigmoid when using dropout? You say this as well but providing some intuition would be good.\\n\\u2022\\tSection 3, page 3: The reason you have chosen small networks for your initial analysis is so that you can do an exhaustive enumeration. You state this in Section 4 but not 3. You should state this upfront in Section 3, otherwise the reader might think your analysis might not be generalizable to larger data sets.\\n\\u2022\\tSection 3, Page 4: Your training criterion (early stopping, etc) has been done in previous papers. Pls cite one of these so readers know that this is a commonly used approach\\n\\u2022\\tSection 4, Page 5: Pls switch the order of sentences \\u201cThe overall result\\u2026 and In order to make differences..\\u201d to make the flow easier to read. \\n\\u2022\\tSection 4, Page 5: Pls give references for Wilcoxon signed rank test and Bonferroni correction, as not all readers will be familiar with this.\\n\\u2022\\tSection 4, Page 5: There seem to be some outliers in Figure 1, but your significance test shows that this doesn\\u2019t matter. You might want to add a sentence stating that the outliers in Figure 1 are not really significant as shown by the test.\\n\\u2022\\tSection 5, Page 5: Can you comment on how expensive the arithmetic mean is?\\n\\u2022\\tSection 6, page 6: Pls provide reference for \\u201cutilized norm constraint regularization\\u201d \\u2192 this is known as max-norm and can be found in Nitish Shrivastava\\u2019s thesis\\n\\u2022\\tSection 6. Page 6: You need a period after the \\u201c2\\u201d footnote.\\n\\u2022\\tSection 6, page 7: You should also state that the ensemble method required training 360 different networks which is computationally expensive compared to training just 1 network with dropout. 
\\n\\u2022\\tSection 7, page 8: It is not clear to me how the dropout boosting injects the same amount of noise as dropout, and should be clarified.\"}", "{\"title\": \"review of An empirical analysis of dropout in piecewise linear networks\", \"review\": [\"A brief summary of the paper's contributions, in the context of prior work.\", \"Paper experimentally verifies how relevant are anecdotal descriptions of dropout.\", \"An assessment of novelty and quality.\", \"It is novel, and has a good quality. However, it doesn\\u2019t bring any new ideas. It rather presents experiments which were missed during initial development of dropout.\", \"A list of pros and cons (reasons to accept/reject).\"], \"pros\": [\"Verifies some of previously postulated intuition.\", \"Gives some indications what are the major building blocks of a good regularizer for neural networks.\"], \"cons\": [\"Experiment should be run on a larger datasets where not all possible masks are utilized but a large amount of them (e.g. not all 2^N, but e.g. 10^6 random one).\", \"All the comparisons should be with respect to networks without any dropout or model averaging. Maybe on the datasets which you considered regardless of dropout you would get the same results. Such setting should be compared on all the plots. It was unclear for me if such experiments were executed.\", \"Authors should write point-by-point what are all possible interpretations of dropout, and which one they are going to validate. It would be also good to have some suggestions how other anecdotal interpretations could be validated (e.g. co-adaptation).\", \"The same authors were working on max-out networks, which seems to play well with dropout. It should be explained here (or just experimentally compared) why maxout is a good architecture for dropout.\", \"Figures 1, and 2 are hard to interpret. How should I know if relative difference of 0.1 is big or small.\"]}", "{\"title\": \"review of An empirical analysis of dropout in piecewise linear networks\", \"review\": \"This paper provided a very interesting analysis of dropout, which has recently shown great success in a variety of DNN applications. The paper is well written and the experimental analysis is strong. My comments are mainly to help improve the paper and clarify ambiguity in some places\\n\\n\\u2022\\tSection 1, page1: when you give the equation f(x) = max(0,x) you should state that this is known as a recitified linear unit (ReLU) and provide the appropriate reference, which to my knowledge was first done here: Jarrett, K., Kavukcuoglu, K., Ranzato, M., and LeCun, Y. What is the best multi-stage architecture for ob- ject recognition? In Proc. International Conference on Computer Vision (ICCV\\u201909). IEEE, 2009.\\n\\u2022\\tSection 2.2, page 3: why does ReLU work better than sigmoid when using dropout? You say this as well but providing some intuition would be good.\\n\\u2022\\tSection 3, page 3: The reason you have chosen small networks for your initial analysis is so that you can do an exhaustive enumeration. You state this in Section 4 but not 3. You should state this upfront in Section 3, otherwise the reader might think your analysis might not be generalizable to larger data sets.\\n\\u2022\\tSection 3, Page 4: Your training criterion (early stopping, etc) has been done in previous papers. 
Pls cite one of these so readers know that this is a commonly used approach\\n\\u2022\\tSection 4, Page 5: Pls switch the order of sentences \\u201cThe overall result\\u2026 and In order to make differences..\\u201d to make the flow easier to read. \\n\\u2022\\tSection 4, Page 5: Pls give references for Wilcoxon signed rank test and Bonferroni correction, as not all readers will be familiar with this.\\n\\u2022\\tSection 4, Page 5: There seem to be some outliers in Figure 1, but your significance test shows that this doesn\\u2019t matter. You might want to add a sentence stating that the outliers in Figure 1 are not really significant as shown by the test.\\n\\u2022\\tSection 5, Page 5: Can you comment on how expensive the arithmetic mean is?\\n\\u2022\\tSection 6, page 6: Pls provide reference for \\u201cutilized norm constraint regularization\\u201d \\u2192 this is known as max-norm and can be found in Nitish Shrivastava\\u2019s thesis\\n\\u2022\\tSection 6. Page 6: You need a period after the \\u201c2\\u201d footnote.\\n\\u2022\\tSection 6, page 7: You should also state that the ensemble method required training 360 different networks which is computationally expensive compared to training just 1 network with dropout. \\n\\u2022\\tSection 7, page 8: It is not clear to me how the dropout boosting injects the same amount of noise as dropout, and should be clarified.\"}", "{\"title\": \"review of An empirical analysis of dropout in piecewise linear networks\", \"review\": \"The authors attempt a further understanding of dropout through a set of empirical analyses that test a number of questions: 1, how close is the weight-scaling approximation to the geometric mean; 2., how good is the geometric mean compared to the arithmetic mean for classification; 3., Is the role of weight-tying in dropout important; and 4., is the ensemble aspect of dropout important compared to the benefit of using masking noise.\\n\\nThe conclusions of the authors are convincing, and the paper is a illuminating companion to some of the more theoretical analyses of dropout that have been presented recently. The dropout bagging vs dropout boosting is especially interesting. Standard dropout, which most resembles ensemble bagging, is compared to a version of weight-tied boosting. This comparison is constructed to try to determine whether there is a benefit to the bag ensemble, where each sub-model is independently tested and trained, compared the weight-tied boosting, where each sub-model is trained based on the performance of the entire ensemble. In weight-tied boosting, the benefit of weight-tying and masking noise are preserved, but not the independent sub-model training. The results seem to show that the independent ensemble/bagging aspect of dropout is important, rather than just the noise or the weight-tying.\\n\\nThe submission is relevant to the ICLR community. The experiments are carefully chosen and the conclusions are not overstated. The only significant barrier to publication is the lack of analysis of the empirical data and the figures, which are poorly chosen and not well explained. Figures 1 and 2 are difficult to read/interpret. A single summary/analysis for each of the 2 sets of data points would be very helpful. Figure 3 only goes to 120 ensemble members, but the text describes results at 360. It would be valuable to plot the full results, even if it is flat after 120.\"}" ] }
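To make the inference comparison discussed in the record above concrete, here is a minimal Python sketch (not the authors' code; the layer sizes, random weights and input are made up for illustration) contrasting weight-scaling inference with the exact geometric and arithmetic means over all dropout masks of a tiny rectifier network:

import itertools
import numpy as np

rng = np.random.RandomState(0)
n_in, n_hid, n_out = 5, 8, 3          # small enough to enumerate all 2^n_hid masks
W1, b1 = 0.1 * rng.randn(n_in, n_hid), np.zeros(n_hid)
W2, b2 = 0.1 * rng.randn(n_hid, n_out), np.zeros(n_out)
x = rng.randn(n_in)

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

def forward(mask, scale=1.0):
    h = np.maximum(0.0, x.dot(W1) + b1) * mask * scale   # ReLU hidden layer with dropout mask
    return softmax(h.dot(W2) + b2)

# Exact ensemble predictions: one sub-network per hidden mask (keep probability 0.5).
probs = np.array([forward(np.array(m, dtype=float))
                  for m in itertools.product([0, 1], repeat=n_hid)])
arithmetic = probs.mean(axis=0)
geometric = np.exp(np.log(probs).mean(axis=0))
geometric /= geometric.sum()                              # renormalised geometric mean

# Weight-scaling approximation: keep every unit, scale activations by the keep probability.
weight_scaled = forward(np.ones(n_hid), scale=0.5)
print(arithmetic, geometric, weight_scaled)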
bb7SwHahSUpiq
Approximated Infomax Early Stopping: Revisiting Gaussian RBMs on Natural Images
[ "Taichi Kiwaki", "Takaki Makino", "Kazuyuki Aihara" ]
We pursue early stopping that helps Gaussian Restricted Boltzmann Machines (GRBMs) to gain good natural image representations in terms of overcompleteness and data fitting. GRBMs are widely considered as an unsuitable model for natural images because they gain non-overcomplete representations which include uniform filters that do not represent sharp edges. We have recently found that GRBMs once gain and subsequently lose sharp edge filters during their training, contrary to this common perspective. We attribute this phenomenon to a tradeoff between overcompleteness of GRBM representations and data fitting. To gain GRBM representations that are overcomplete and fit data well, we propose approximated infomax early stopping for GRBMs. The proposed method enables huge performance boosts of classifiers trained on GRBM representations.
[ "grbms", "natural images", "early stopping", "grbm representations", "revisiting gaussian rbms", "overcompleteness", "data fitting", "infomax early stopping", "gaussian", "boltzmann machines" ]
submitted, no decision
https://openreview.net/pdf?id=bb7SwHahSUpiq
https://openreview.net/forum?id=bb7SwHahSUpiq
ICLR.cc/2014/conference
2014
{ "note_id": [ "44ip4258KGxNA", "wgvNzOb-kRw-a", "nB6IBHqgjJ9Px", "H9BQsORED6H4X", "eh41hfmOBvUHN", "KKryduAqwvKqj", "Q-BGJoM3VsJ32", "llAo1wLFVa1Fy" ], "note_type": [ "comment", "comment", "review", "comment", "review", "review", "review", "comment" ], "note_created": [ 1392741180000, 1392751920000, 1390886160000, 1392741000000, 1389041100000, 1391729160000, 1390949220000, 1392741960000 ], "note_signatures": [ [ "Taichi Kiwaki" ], [ "Taichi Kiwaki" ], [ "anonymous reviewer dd7e" ], [ "Taichi Kiwaki" ], [ "KyungHyun Cho" ], [ "anonymous reviewer f20c" ], [ "anonymous reviewer e42c" ], [ "Taichi Kiwaki" ] ], "structured_content_str": [ "{\"reply\": \"Dear Dr. KyungHyun (Anonymous dd7e):\\n\\n=====summary of the review=====\\nThe reviewer first pointed out that he and his colleague are working on a resembling measure for the usefulness of neural representations. He further questioned whether the concept sketched in Fig 2 (d) has quantitative evidences. He also questioned the relationship between MI/AMI and sparsity. He next suggested that 1-step reconstruction error might be used for early stopping in place of AMI. He finally pointed out the word 'overcomplete' is used without a clear definition in our draft. \\n===============================\\n\\nThank you for your kind comments/reviews. Upon the connection between your study and ours, we are correcting our draft so that the relationship between these studies is clarified. \\n\\nOn Fig 2 (d) (and section 3), we have rewrite the section and introduced an experiment on a small, tractable GRBM with which a quantitative analysis is possible. We are also going to introduce an overcomplete measure, which we describe in the end of this response. \\n\\nOn the relationship between MI/AMI and sparsity, our understanding is currently limited to give a clear answer. However, experiments on Bernoulli-RBMs (not included in the draft) suggest that the filter attenuation phenomenon only occurs when RBMs gain sparse representations (including cases where such representations are naturally gained without regularization). \\n\\nThe idea of early stopping using 1-shot reconstruction error is interesting. However, through our experiments, we have observed that 1-shot reconstruction error only decreases during GRBM training and does not show a feature that once increase and then decrease, unlike AMI does. We consider this reflects less-sensitivity of MI to the overcompleteness than AMI, as we discussed in section 4. \\n\\nFinally, we are going to define a quantitative measure of overcompleteness in the current revision. We hesitate to use AMI as an overcomplete measure because we regard that AMI reflects both data-likelihood and overcompleteness. Instead of AMI, we are introducing a measure based on counting the number of possible hidden configurations that reproduce data samples. This count represents overcompleteness of a representation because there are multiple configurations that reconstruct a data point when representation is overcomplete. We have analyzed the measure and shown that computation of the measure is tractable under approximation where hidden configurations are relaxed into a continuous space.\"}", "{\"reply\": \"Dear Anonymous f20c:\\n\\n=====summary of the review=====\\nThe reviewer first pointed out that the word usage like 'overcomplete', 'useful', 'uniform' are vague. Second, he suggested that the discussion on the phenomenon is unnatural and alternative explanations would be possible. 
Specifically, he pointed out three points: first, shrinkage of low-pass filters can be a more reasonable explanation. Second, the behavior of FED in the introduction indicates overfitting. Finally in section 3, our hypothesis of GRBM filter number adjustment sounds unnatural. Third, he suggested that the computation of mutual information is not clear. Fourth, impact of our work is limited in terms of dataset and performance. Finally, he made a suggestion on alternative measures for early stopping. \\n===============================\\n\\nFirst of all, we are going to correct our word usage by replacing vague expressions with more specific ones (e.g., 'useful filters'->'edge or gradient filters', 'uniform filters'->'near-zero'). Particularly for the definition of overcompleteness, we are introducing an overcompleteness measure in the next revision. This measure is based on counting the number of possible hidden configurations that reproduce data samples. We have shown that the measure can be computed under an approximation where the hidden configurations are relaxed into a continuous space. \\n\\nOn the alternative explanations, let us clarify several points. First, shrinkage of low-pass is somewhat different from the phenomenon that we observed. It is true that the filters that we call 'attenuated' or 'zero' after prolonged training are actually low-pass filters with small amplitudes. However, such filters didn't evolve from large amplitude low-pass filters through rescaling. They rather evolved from localized Gabor-like filters. It is more precise to describe that such filters evolved from mixtures of a large amplitude Gabor-like component and small amplitude low-pass component through attenuation of the Gabor-like component. Therefore, the number of filters that attain Gabor-like features decreases as GRBM training proceeds. Observation on rescaled GRBM filter images also revealed this. \\n\\nSecond, in the introduction, we intended to argue that overfitting is less likely responsible for the phenomenon, not to deny overfitting. Actually, the increase in FED indicates an overfitting effect as you pointed out (Test data log-likelihood will be useful for monitoring overfitting but AIS estimations were too unreliable to report). We now realizes that the small values of FED can not be a direct evidence for our claim and the mention on it is misleading. However, we consider that our claim still holds because overfitting do not explain the drop of classification performance on training data, which is the key feature of the phenomenon. \\n\\nThird, we have found that the discussion on our hypothesis is not sufficient. Let us supplement several points. To begin with, the highest data-likelihood is not necessarily achieved by GRBM representations where every filter represents single training case, despite the intuition. The key is co-activation of filters in similar directions. Suppose that a representation has two filters $vec{w}_1$ and $vec{w}_2$ exactly point at two different data points. When these filters are in similar directions ($vec{w}_1 approx vec{w}_2$), these two filter will generate Gaussian components not only at $vec{w}_1$ and $vec{w}_2$, but also at $vec{w}_1+vec{w}_2$ where no data points lie (biases are omitted for clarity). 
We can understand this by considering a MCMC sampling procedure from such a GRBM, where sample points generated by $vec{w}_1$ likely elicit $vec{w}_2$ as well as $vec{w}_1$ itself (and vice versa), resulting co-activation of two filters that generates a component at their composition. In cases where hundreds of visible units and even larger number of hidden units are involved, this effect can be quite severe because the number of GRBM Gaussian components is exponential in the number of (non-dead) hidden units, and the number of orthogonal filters is limited by the number of visible units and thus, most of the filters become linear independent. The highest data-likelihood, therefore, is achieved by representations which assign such exponential number of components to data points. Such representations, in general, are not a copy of data points. \\n\\nA supplement explanation is also needed for our conclusion that the number of effective GRBM filters matches the data dimensionality. We drew our conclusion from an assertion that the number of GRBM components is at the same order of the volume of space where data points lies. Because the volume is $O(exp(data_dimensionality))$ and the number of GRBM components is $O(exp(number_of_effective_filters))$, we concluded that $number_of_effective_filters approx data_dimensionality$ by taking the exponents. Let us note that these arguments were included in our initial draft but later removed to keep the draft within 9 pages. We are planing to add the argument again in the next revision. \\n\\nOn computation of mutual information, we have realized that our explanation in the current draft is misleading. First of all, we computed mutual information with a joint $p_GRBM(h_i|v) p_data(v)$ where $p_GRBM(v)$ in $p_GRBM(h_i,v)$ is replaced with $p_data(v)$ as you mentioned. However, we hesitated to call it an approximation because we are only interested in measuring statistics from this joint, not from $p_GRBM(h_i,v)$. $p_GRBM(h_i|v) p_data(v)$ can be considered as a joint distribution over $h_i, v$ when a GRBM is used to encode data generated from $p_data(v)$. From this joint distribution, we can know how efficient a GRBM represents data with its hidden units. We didn't mention this clearly in the draft. Second, $S_D(H_i)$ is computed as $S_D(H_i) = -p_D(H_i=0)log(p_D(H_i=0)) -p_D(H_i=1)log(p_D(H_i=1)) = S(p_D(H_i))$ where $p_D(cdot)$ is in Eq.(4) and $S(cdot)$ is the entropy functional. The difference between $S_D(H)$ and $S_D(H|D)$ is the order of the summation over data points and entropy functional. We have found these points are not obvious as well, and are going to correct the draft. \\n\\nWe admit that our empirical evidences are limited. However, we consider that some extent of generality of our reports can be provided through expensive exploration of the GRBM parametric effects on the phenomenon, and the discussion (with the supplement above) on the mechanism of the phenomenon along with the results of an abstract toy data experiment. \\n\\nThe limitation over performance improvements is mainly due to the shallowness of the architecture. If we apply our method to deeper models, we believe that much larger performance leverage can be obtained. \\n\\nFinally, we agree that the other measures might be used instead of our proposed measure. 
We find this possibility should be examined in the future.\"}", "{\"title\": \"review of Approximated Infomax Early Stopping: Revisiting Gaussian RBMs on Natural Images\", \"review\": \"Dear authors,\\n\\nLet me reveal my identity before I continue. I'm Kyunghyun Cho who wrote the earlier review (by an unexpected coincidence). The previous comments reflect what I wanted/want to say as an official reviewer, and I will not write another separate review. \\n\\n- Cho\"}", "{\"reply\": \"Dear Dr. KyungHyun (Anonymous dd7e):\\n\\n=====summary of the review=====\\nThe reviewer first pointed out that he and his colleague are working on a resembling measure for the usefulness of neural representations. He further questioned whether the concept sketched in Fig 2 (d) has quantitative evidences. He also questioned the relationship between MI/AMI and sparsity. He next suggested that 1-step reconstruction error might be used for early stopping in place of AMI. He finally pointed out the word 'overcomplete' is used without a clear definition in our draft. \\n===============================\\n\\nThank you for your kind comments/reviews. Upon the connection between your study and ours, we are correcting our draft so that the relationship between these studies is clarified. \\n\\nOn Fig 2 (d) (and section 3), we have rewrite the section and introduced an experiment on a small, tractable GRBM with which a quantitative analysis is possible. We are also going to introduce an overcomplete measure, which we describe in the end of this response. \\n\\nOn the relationship between MI/AMI and sparsity, our understanding is currently limited to give a clear answer. However, experiments on Bernoulli-RBMs (not included in the draft) suggest that the filter attenuation phenomenon only occurs when RBMs gain sparse representations (including cases where such representations are naturally gained without regularization). \\n\\nThe idea of early stopping using 1-shot reconstruction error is interesting. However, through our experiments, we have observed that 1-shot reconstruction error only decreases during GRBM training and does not show a feature that once increase and then decrease, unlike AMI does. We consider this reflects less-sensitivity of MI to the overcompleteness than AMI, as we discussed in section 4. \\n\\nFinally, we are going to define a quantitative measure of overcompleteness in the current revision. We hesitate to use AMI as an overcomplete measure because we regard that AMI reflects both data-likelihood and overcompleteness. Instead of AMI, we are introducing a measure based on counting the number of possible hidden configurations that reproduce data samples. This count represents overcompleteness of a representation because there are multiple configurations that reconstruct a data point when representation is overcomplete. We have analyzed the measure and shown that computation of the measure is tractable under approximation where hidden configurations are relaxed into a continuous space.\"}", "{\"review\": \"This is an interesting paper on Gaussian RBMs. The authors' explanation in Sec. 3 sounds convincing to me. The illustration in Fig. 2 (d) agrees well with my (and probably others') observation that the classification performance obtained by the features extracted by an RBM (either binary or Gaussian) does not correlate well with the test log-probabilities. Fig. 
1 (a)-(d) agree as well with my own experience of training GRBMs on image patches.\\n\\nA similar idea of using the mutual information (or its upper-bound) to measure the usefulness of hidden neurons of an RBM/DBM was proposed by me and co-authors recently at ICONIP 2013 (Berglund et al., 2013). We were able to observe a similar trend of decreasing mutual information of quite a number of hidden neurons as training continues, which was also shown by the authors in Fig. 3 (B). The authors' approach of using this measure as an early-stopping criterion is very clever and seems to work well for the purpose of training a GRBM as a feature extractor. As the authors mention in Sec. 5.2.1, it is possible to use this measure for more complicated models such as DBM (which we tried in our paper), which makes this approach more interesting.\\n\\nOne potential point for improvement I can think of is to actually measure how other statistical quantities of a GRBM correlate with MI (or AI) as well. If the log-likelihood of a GRBM is measured, will one actually see the trend in Fig. 2 (d) (if we replace the overcompleteness with the MI or AI)? Also, since it's known that sparse RBM performs somewhat better as a feature extractor, how would the sparsity correlate with the MI or AI?\\n\\nAnother interesting/important point to discuss in the paper is the relationship to the autoencoders. Vincent et al. (2010) explains that 'training an autoencoder to minimize reconstruction error amounts to maximizing a lower bound on the mutual information between input X and learnt representation Y'. Does it mean that it is possible to simply use the 1-step reconstruction error as an early-stopping criterion instead of the proposed MI? If not, what would be some differences?\\n\\nOne minor point to consider In my opinion is that the term 'overcompleteness' is being somewhat abused without any precise definition. From the paper, I understood that the authors mean the (normalized) number of 'close-to-orthogonal' filters by 'overcompleteness', but this seems a bit too loose a definition (without any actual definition in the paper). Though, I guess the proposed measure itself can be used to measure the overcompleteness. \\n\\n= Refs = \\nBerglund, M., Raiko, T. and Cho, K. Measuring the Usefulness of Hidden Units in Boltzmann Machines with Mutual Information. ICONIP 2013.\\nVincent, P., Larochelle, H., Lajoie, I., Bengio, Y. and Manzagol, P. Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion. JMLR. 2010.\"}", "{\"title\": \"review of Approximated Infomax Early Stopping: Revisiting Gaussian RBMs on Natural Images\", \"review\": \"In this paper the authors observe an interesting training dynamics when training GRBMs on CIFAR patches: filters first become selective to the usual Gabor looking features but then many faint and decay to 0 towards the end of training. From this observation they propose to measure an approximation to the mutual information between input and hidden units to detect the time when most features are used. 
They demonstrate that this is an effective method to automatically select the best checkpoint to use for discrimination on CIFAR dataset.\\n\\nPros\\n- sufficient novelty: I am not aware of other works making this observation and proposing this measurement of goodness of a trained GRBM\\n- relevance: the problem of automatically selecting good features is very relevant many working on unsupervised learning (one of the main themes of this conference).\\n\\nCons\\n- the paper is not very clear. I find the authors' use of words like 'overcomplete', 'useful', etc. very vague. I will explain better below what I did not quite understand.\\n- technically I am not sure the paper is correct; I did not follow how the authors measure mutual information exactly. More details below.\\n- the impact of this work seems rather limited: the empirical validation is done on CIFAR only showing modest improvements.\\n\\nI will expand here on what I did not understand or did not agree.\\nFirst of all, the fluency of English should be improved. The choice of words is often too vague. For instance, in the abstract the authors say:\\n\\u201c gain non-overcomplete representations which include uniform \\ufb01lters that do not represent useful image features\\u201d.\\nWhat does this mean? What are 'uniform filters' or \\u201c useful \\ufb01lters\\u201c? It would be nice if every part of the paper was self-explanatory, concise yet precise in its meaning.\\n\\nSecond, I am not convinced by the arguments about why filters decay to zero. From fig. 1, it seems to me that filters do not decay to zero at all. The range of values of filters that become low-pass shrinks, which is rather different and perhaps expected. Most energy is in the low frequencies, however hidden units are forced to be in {0,1} which forces the low-pass (less localized) filters to rescale their values to a smaller range. If the authors rescaled each filter independently they may see a different picture (please, verify if this is correct).\\nBesides, it would be useful to get an estimate of the log-probability of the training data during training: is that steadily increasing? The monotonic behavior of FED seems to show an overfitting effect. I am not sure the hypothesis chosen by the authors in the introduction is the only one that can explain their observations.\\nSimilarly in sec. 3, I do not understand the authors' conclusions. The number of hidden units need not to match the dimensionality of the data. Without any regularization, the highest likelihood on the training set would be achieved by having each hidden unit represent a single training sample, and therefore, the highest likelihood can be achieved by having as many hidden units as training samples (although this would be horrible on the test set).\\nFrom this perspective, I did not understand what happened in the example of fig. 2A.\\nDid the authors use any regularization? How did the authors measured 'overcompleteness' and 'fitness'? \\n\\nThird, I did not understand how the authors computed mutual information.\\nFirst, mutual information assumes a joint distribution over data and hidden units. Are the authors replacing p_GRBM(v) with p_data(v)? \\nMoreover, I did not follow how the authors compute S_D(H). Shouldn't it be the entropy of the hidden units? which means marginalizing the visibles to get p_GRBM(h) and then computing the entropy. The authors should report the formula they used and explicitly mention the approximations used. 
In the current draft, I do not understand how S_D(H) is computed or differs from the conditional entropy term.\\nFor the sake of clarity, I would recommend to include in the main body of the paper the definition of bar{AMI} and tilda{AMI} as well.\\n\\nFinally, empirical evidence is shown only on the CIFAR dataset. It is not obvious this is a general finding/method. \\n\\nOverall, this is an interesting paper. Unfortunately, I did not fully understand it nor I was fully convinced by its claims. I recommend a good rewrite of the paper to address these concerns. In general, the authors mention that this method is able to make GRBM a better model of natural images and contrast it to other models like SS-RBM, gMRF, etc. But actually, what they propose has little to do with these other methods (and it could be applied to these methods as well). The proposed method is about extracting discriminative features by measuring a quantity that correlates with number of non-dead filters. The question is then: are there even simpler measures (like variance across samples of the corresponding filter responses or variance of the coefficients in the filters)?\"}", "{\"title\": \"review of Approximated Infomax Early Stopping: Revisiting Gaussian RBMs on Natural Images\", \"review\": \"This paper is in the domain of using GRBMs to train filters which can then be convolved and pooled before an L2-SVM is used for classification. The overall idea of finding degraded classification results after prolonged RBM training is an interesting one. The authors then proposes an information criterion for early stopping. The paper is well written and makes link between AMI and filter quality and classification performance.\\n\\nThe fact that GRBM lose useful filters might be due to contrastive divergence training. Also, the concept of overfitting is hard to decipher here because learning GRBM filters is not for directly reducing classification error.\\n\\nThe argument that validation error is too expensive to compute is only unique to a framework where a probabilistic model is used to pretrain filters, which are then used in a discriminative pipeline. For majority of discriminative based models, this argument would not apply and early stopping could simply be performed based on validation error.\"}", "{\"reply\": \"Dear Annonymous e42c:\\n\\nThe reply above is for the first reviewer. Please skip this.\\n\\n=====summary of the review=====\\nThe reviewer first suggested a possibility that the filter attenuation phenomenon is caused by CD approximation. He then pointed out our discussion on overfitting is obscure. He finally pointed out that the benefits of our criterion over validation error are limited to cases where unsupervised training is used to pretrain discriminative models. \\n===============================\\n\\nTo begin with, our experimental results do not indicate that the phenomenon is due to CD approximation. First in experiments on CIFAR, GRBMs trained by not only CD but also by PCD showed the phenomenon. Because the PCD gradient is regarded as an unbiased estimate of the true gradient, this result indicates CD approximation is less likely responsible for the phenomenon. Second in the toy data experiment, where we used the true gradient without any approximation, the phenomenon is also observed. This supports that the phenomenon is not caused by the approximation as well. \\n\\nWe now find that the discussion on overfitting is not clear, therefore we here supplement an explanation. 
As you pointed out, GRBM overfitting is not directly related to the classification performance of SVMs. However, it can be still expected that if a GRBM overfits training data, test data accuracy can be degraded because the GRBM representation is specifically tuned for training data and loses generality. However, this explanation cannot be applied to the decline in training data accuracy in our observation. Therefore, we sought another hypothesis which we described in section 3 (Supplement discussion on section 3 is at the response to the third reviewer (f20c), and this will be added to the final draft). \\n\\nFinally, it is true that our claim on computational efficiency only holds in cases where unsupervised learning is used to pretrain supervised models. However, combination of unsupervised and supervised training is still under active research and is potentially important because this learning model can be readily applied to to semi-supervised learning problems with enormous unlabeled data. We therefore consider our research maintains sufficient significance.\"}" ] }
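As a rough illustration of the per-unit mutual information discussed in the record above, the Python sketch below computes I(H_i; V) ~= S(p_D(H_i)) - E_v[S(p(H_i|v))] over a dataset. It assumes one common GRBM parametrisation for p(h_i=1|v) and uses random toy parameters, so it is a sketch of the quantity under stated assumptions, not the authors' exact estimator:

import numpy as np

def bernoulli_entropy(p, eps=1e-12):
    p = np.clip(p, eps, 1.0 - eps)
    return -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))

def grbm_hidden_probs(V, W, c, sigma2):
    # Assumed parametrisation: p(h_i = 1 | v) = sigmoid(c_i + sum_j v_j w_ji / sigma_j^2).
    return 1.0 / (1.0 + np.exp(-(c + (V / sigma2).dot(W))))

def per_unit_mutual_information(V, W, c, sigma2):
    P = grbm_hidden_probs(V, W, c, sigma2)            # p(h_i = 1 | v) for every data point
    marginal = bernoulli_entropy(P.mean(axis=0))      # S(p_D(H_i))
    conditional = bernoulli_entropy(P).mean(axis=0)   # average of S(p(H_i | v)) over the data
    return marginal - conditional                     # one value per hidden unit

# Toy usage with random data and parameters (illustration only).
rng = np.random.RandomState(0)
V = rng.randn(1000, 64)
W = 0.01 * rng.randn(64, 256)
mi = per_unit_mutual_information(V, W, np.zeros(256), np.ones(64))
print(mi.mean(), (mi < 1e-3).sum())                   # average MI and count of near-dead units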
mugzy2nI-Ayi1
Learning Non-Linear Feature Maps, With An Application To Representation Learning
[ "Dimitrios Athanasakis", "John Shawe-Taylor", "Delmiro Fernandez-Reyes" ]
Recent non-linear feature selection approaches employing greedy optimisation of Centred Kernel Target Alignment(KTA) exhibit strong results in terms of generalisation accuracy and sparsity. However, they are computationally prohibitive for large datasets. We propose randSel, a randomised feature selection algorithm, with attractive scaling properties. Our theoretical analysis of randSel provides strong probabilistic guarantees for correct identification of relevant features. RandSel's characteristics make it an ideal candidate for identifying informative learned representations. We've conducted experimentation to establish the performance of this approach, and present encouraging results, including a 3rd position result in the recent ICML black box learning challenge as well as competitive results for signal peptide prediction, an important problem in bioinformatics.
[ "randsel", "feature maps", "application", "representation", "recent", "feature selection", "greedy optimisation", "kta", "exhibit strong results" ]
submitted, no decision
https://openreview.net/pdf?id=mugzy2nI-Ayi1
https://openreview.net/forum?id=mugzy2nI-Ayi1
ICLR.cc/2014/conference
2014
{ "note_id": [ "JJOimESUine4P", "YTZaYOARqre6g", "CoxYCghVSMD2Z", "T9PA0NgIcNTUw", "DOzK-BQbKw-J1" ], "note_type": [ "review", "review", "review", "review", "review" ], "note_created": [ 1391820000000, 1391858640000, 1391820000000, 1391431140000, 1392763320000 ], "note_signatures": [ [ "anonymous reviewer c360" ], [ "anonymous reviewer 89fd" ], [ "anonymous reviewer c360" ], [ "anonymous reviewer a39c" ], [ "Dimitris Athanasakis" ] ], "structured_content_str": [ "{\"title\": \"I consider 'Representation Learning' to be a superset of 'Learning Non-Linear Feature Maps'. So it doesn't make much sense for the former to be an application of the other. Also both are pretty generic. Pretty much any feature learning method except PCA learns non-linear feature maps. Seems like you really could come up with a more informative title.\\nI'd argue your title should have the phrase 'Feature Selection' in it. And probably not 'Feature Learning.'\", \"review\": \"I fully confess in advance that I am not a very good reviewer for this paper. I have no real research experience with kernel learning or feature selection. As such, I\\u2019ll be reporting my scores with low confidence. I haven\\u2019t made any attempt to evaluate the sensibility or novelty of the proposed method itself or the related proofs. If none of the other reviewers is able to provide a more confident review I can take some time to study the literature and improve my review.\\n\\nThe main thing I do feel able to review somewhat confidently is the empirical results. It\\u2019s nice that the authors were able to get good accuracy on the Black Box Learning challenge. It looks like that was a fairly competitive challenge, with over 200 competitors, and they took 3rd place. It\\u2019s also nice that they were able to improve over SignalP\\u2019s state of the art result on the cleavage site prediction task.\\n\\nOne thing I\\u2019m a bit concerned about is whether the proposed method improves over the state of the art for feature selection methods. As the authors state, it is somewhat difficult to compare performance on the cleavage site prediction task. Beyond these issues, it\\u2019s also not clear to me that randSel improves over state of the art feature selection methods. It looks like it does improve over RFE, but it would be nice if there was a baseline run by other authors. It\\u2019s not clear to me that RFE is a state of the art method to beat though. It would be nice to have more explanation of the significance of beating SignalP.\", \"detailed_comments\": \"\", \"abstract\": \"missing a space before the paren (this actually happens throughout the paper, not just in the abstract)\", \"fig_1\": \"You might want to put all the different rows on the same scale so that it\\u2019s possible to visually compare the height of the bars across rows.\", \"page_4\": \"\\u201cSo far have\\u201d: there is a word missing here\", \"page_6\": \"\\u201cThe original data where projected\\u201d -> \\u201cwere projected\\u201d\\n\\t\\u201cThe organizers did not reveal the source of the dataset\\u201d: this sounds like the organizers did not reveal the source of the data was SVHN. 
You might want to specify that you didn\\u2019t know it was created by multiplication by a random projection matrix either.\\n\\t\\u201cwhere provided\\u201d -> \\u201cwere provided\\u201d\\n\\t\\u201cranking third in both cases\\u201d: \\n\\t\\tHere\\u2019s the public leaderboard: http://www.kaggle.com/c/challenges-in-representation-learning-the-black-box-learning-challenge/leaderboard/public\\n\\t\\tAccording to this, Bing Xu is 3rd. But according to the report put out by the organizers ( http://arxiv.org/pdf/1307.0414.pdf ) you guys actually were 3rd and it sounds like Bing Xu was ranked worse than 3rd originally. Did you delete your kaggle account and get taken off the leaderboard or something?\", \"supplementary_material_t\": \"The box around 'OVER-COMPLETE LEARNED REPRESENTATION' is too tight\"}", "{\"title\": \"review of Learning Non-Linear Feature Maps, With An Application To Representation Learning\", \"review\": \"The paper proposes a new and fast approach for selecting features using centered kernel target alignment.\\n\\nAs far as I can tell, there seems to be two contributions: 1. use kernel alignment as a way of selecting features 2. random subsample data so that the method can be made scalable. However, I think the algorithmic innovation seems to be thin.\\n\\nIn the description of the algorithm (section 4), the paper keeps referring to bootstrap (size). I think that is a misnomer --- the algorithm listing (Algorithm 1) changes to subsampling, which I think what the paper is using. If the paper is indeed using boostrap (ie, sampling with replacement), the computational complexity would not be reduced as the # of samples (including repeated ones) will be unchanged. \\n\\nI do not follow the reasoning in Theorem 3.6 to come up with the 'bottom 12.5% ' features need to be thrown away.\\n\\nThe paper does not seem to explain what are the methods 'deep with XYZ' in Table 1. \\n\\nOverall, I think the writing of this paper could use some polishing.\"}", "{\"title\": \"I consider 'Representation Learning' to be a superset of 'Learning Non-Linear Feature Maps'. So it doesn't make much sense for the former to be an application of the other. Also both are pretty generic. Pretty much any feature learning method except PCA learns non-linear feature maps. Seems like you really could come up with a more informative title.\\nI'd argue your title should have the phrase 'Feature Selection' in it. And probably not 'Feature Learning.'\", \"review\": \"I fully confess in advance that I am not a very good reviewer for this paper. I have no real research experience with kernel learning or feature selection. As such, I\\u2019ll be reporting my scores with low confidence. I haven\\u2019t made any attempt to evaluate the sensibility or novelty of the proposed method itself or the related proofs. If none of the other reviewers is able to provide a more confident review I can take some time to study the literature and improve my review.\\n\\nThe main thing I do feel able to review somewhat confidently is the empirical results. It\\u2019s nice that the authors were able to get good accuracy on the Black Box Learning challenge. It looks like that was a fairly competitive challenge, with over 200 competitors, and they took 3rd place. 
It\\u2019s also nice that they were able to improve over SignalP\\u2019s state of the art result on the cleavage site prediction task.\\n\\nOne thing I\\u2019m a bit concerned about is whether the proposed method improves over the state of the art for feature selection methods. As the authors state, it is somewhat difficult to compare performance on the cleavage site prediction task. Beyond these issues, it\\u2019s also not clear to me that randSel improves over state of the art feature selection methods. It looks like it does improve over RFE, but it would be nice if there was a baseline run by other authors. It\\u2019s not clear to me that RFE is a state of the art method to beat though. It would be nice to have more explanation of the significance of beating SignalP.\", \"detailed_comments\": \"\", \"abstract\": \"missing a space before the paren (this actually happens throughout the paper, not just in the abstract)\", \"fig_1\": \"You might want to put all the different rows on the same scale so that it\\u2019s possible to visually compare the height of the bars across rows.\", \"page_4\": \"\\u201cSo far have\\u201d: there is a word missing here\", \"page_6\": \"\\u201cThe original data where projected\\u201d -> \\u201cwere projected\\u201d\\n\\t\\u201cThe organizers did not reveal the source of the dataset\\u201d: this sounds like the organizers did not reveal the source of the data was SVHN. You might want to specify that you didn\\u2019t know it was created by multiplication by a random projection matrix either.\\n\\t\\u201cwhere provided\\u201d -> \\u201cwere provided\\u201d\\n\\t\\u201cranking third in both cases\\u201d: \\n\\t\\tHere\\u2019s the public leaderboard: http://www.kaggle.com/c/challenges-in-representation-learning-the-black-box-learning-challenge/leaderboard/public\\n\\t\\tAccording to this, Bing Xu is 3rd. But according to the report put out by the organizers ( http://arxiv.org/pdf/1307.0414.pdf ) you guys actually were 3rd and it sounds like Bing Xu was ranked worse than 3rd originally. Did you delete your kaggle account and get taken off the leaderboard or something?\", \"supplementary_material_t\": \"The box around 'OVER-COMPLETE LEARNED REPRESENTATION' is too tight\"}", "{\"title\": \"review of Learning Non-Linear Feature Maps, With An Application To Representation Learning\", \"review\": \"This interesting submission links prior work on KTA with the question\\nof representation learning. I then provides learning theory to\\nunderpin this idea and suggests a nonlinear feature selection\\nalgorithm. Finally, it shows its success for an ICML 2013 benchmark,\\nwhere the method finished with a bronze medal. In addition it shows\\ninteresting/impressive results on bioinformatics, namely cleavage site\\nprediction. This is an interesting paper which should be accepted. \\n\\nSome suggestions/criticism that may help improving the ms:\\n\\n1. it would be nice to briefly discuss runtimes\\n\\n2. so far the LT results seem like a paper within a paper, it would be\\nnice to interweave this better, and maybe show a/the toy simulation\\nwhat the derived bounds mean on concrete data.\\n\\n3. The algorithm seems slightly ad hoc, where do the 12.5% come from\\n\\n4. It would be nice to leave more room to the applications, already\\none seems sufficiently convincing\\n\\n5. Finally, I strongly suggest to further tone down that statement\\nthat the present approach is better than ref 14. 
I do not see this.\\nFor this statement extensive testing with p-values and a thorough\\nunderstanding of the whys would be required.\"}", "{\"review\": \"We would like to thank the reviewers for their positive feedback and insightful comments.\\n\\nRegarding the theory supporting the algorithm, something that both reviewers a39c and 89fd have brought up, \\nwe have updated the proof sketch to reflect that the supporting theory relies on Hoeffding's bound for U-Statistics and provides a high-confidence bound provided a sufficient sample size. The 12.5% stems from the fact that the sample size requirement is typically too large in practice. Thus a more conservative approach employing an iterative scheme that rejects a percentage of the features after each iteration was used. We try to interweave this better with the paper by providing an additional example where the technical conditions hold more strongly (in the updated figure 1), and contrasting that with the iterative scheme (figure 2). \\n \\nRegarding the comparison of our approach to the existing approach employed by SignalP, we agree with reviewer a39c \\nthat further testing is required and added further emphasis to this statement in the updated text. The entries of Table 1 were renamed to reflect where they come from, in accordance to the suggestion of reviewer 89fd. \\n \\nWe have proceeded to rename the paper 'Principled Non-Linear Feature Selection' in response to the comments of reviewer c360. \\n \\nRegarding the runtime of our approach, I believe that results could be misleading, owing to the fact that currently the method only employs matlab, whereas stability selection and svms are in c-code with a matlab interface. Informally speaking the method was fast enough to process a single fold in the kaggle contest in roughly an hour. For the signal peptide prediction, this time is vastly increased but is still not substantially worse than competing methods.\"}" ] }
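The iterative scheme described in the record above can be pictured with a short Python sketch (an interpretation, not the authors' implementation): each surviving feature is scored by centred kernel target alignment on small random subsamples, and the lowest-scoring fraction is discarded per iteration. The Gaussian kernel, subsample sizes and synthetic data are illustrative assumptions; labels are taken to be +/-1.

import numpy as np

def centred_kta(K, y):
    # Centred kernel target alignment between kernel K and the label kernel y y^T.
    n = len(y)
    H = np.eye(n) - np.ones((n, n)) / n
    Kc, Lc = H.dot(K).dot(H), H.dot(np.outer(y, y)).dot(H)
    return (Kc * Lc).sum() / (np.linalg.norm(Kc) * np.linalg.norm(Lc) + 1e-12)

def rbf_kernel_1d(x, gamma=1.0):
    d = x[:, None] - x[None, :]
    return np.exp(-gamma * d ** 2)

def randomised_selection(X, y, n_iters=8, n_sub=50, n_repeats=20, drop_frac=0.125, seed=0):
    rng = np.random.RandomState(seed)
    active = np.arange(X.shape[1])
    for _ in range(n_iters):
        scores = np.zeros(len(active))
        for _ in range(n_repeats):                     # small random subsamples keep each pass cheap
            idx = rng.choice(len(X), size=n_sub, replace=False)
            for j, f in enumerate(active):
                scores[j] += centred_kta(rbf_kernel_1d(X[idx, f]), y[idx])
        keep = np.sort(scores.argsort()[int(drop_frac * len(active)):])
        active = active[keep]                          # discard the lowest-scoring fraction
    return active

# Synthetic example: only the first 5 of 30 features carry signal.
rng = np.random.RandomState(1)
X = rng.randn(300, 30)
y = np.sign(X[:, :5].sum(axis=1) + 0.1 * rng.randn(300))
print(randomised_selection(X, y))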
LwyBkw8Nh6Y1J
Unit Tests for Stochastic Optimization
[ "Tom Schaul", "Ioannis Antonoglou", "David Silver" ]
Optimization by stochastic gradient descent is an important component of many large-scale machine learning algorithms. A wide variety of such optimization algorithms have been devised; however, it is unclear whether these algorithms are robust and widely applicable across many different optimization landscapes. In this paper we develop a collection of unit tests for stochastic optimization. Each unit test rapidly evaluates an optimization algorithm on a small-scale, isolated, and well-understood difficulty, rather than in real-world scenarios where many such issues are entangled. Passing these unit tests is not sufficient, but absolutely necessary for any algorithms with claims to generality or robustness. We give initial quantitative and qualitative results on a dozen established algorithms. The testing framework is open-source, extensible, and easy to apply to new algorithms.
[ "unit tests", "algorithms", "many", "stochastic optimization optimization", "stochastic gradient descent", "important component", "machine", "wide variety", "optimization algorithms" ]
submitted, no decision
https://openreview.net/pdf?id=LwyBkw8Nh6Y1J
https://openreview.net/forum?id=LwyBkw8Nh6Y1J
ICLR.cc/2014/conference
2014
{ "note_id": [ "YYZjR9P3wo26y", "PPOQcbFlBhbrN", "ssOusJNTuSsgH", "jhEqyhJg7ajiI", "RrSURXpsN9XPV", "mY_srrcBsLrgp" ], "note_type": [ "review", "review", "review", "comment", "review", "comment" ], "note_created": [ 1392137820000, 1391821320000, 1393452600000, 1393347840000, 1391717640000, 1392174540000 ], "note_signatures": [ [ "anonymous reviewer abdc" ], [ "anonymous reviewer b820" ], [ "Tom Schaul" ], [ "Tom Schaul" ], [ "anonymous reviewer 5afd" ], [ "Tom Schaul" ] ], "structured_content_str": [ "{\"title\": \"review of Unit Tests for Stochastic Optimization\", \"review\": \"This paper looks at developing 'unit tests' for stochastic optimization algorithms, which consist of toy objective functions and corresponding gradient noise that are assembled randomly from a suite of various components. The hope is that these tests would allow one to analyse the specific failure cases for various methods, perhaps in order to inform improvements to them.\\n\\nThis paper is mostly just engineering, but the authors seem to have created a fairly versatile tool for generating toy problems with many different characteristics which may well be quite useful in the future. The paper itself doesn't contain much in the way of useful conclusions about the optimization algorithms (mostly different versions of SGD) that are tried. I also have several issues with various aspects of the unit tests themselves, and am not fully convinced that they are testing the 'right' kind of thing or if these tests can tell us much of use about the optimization problems we really care about. \\n\\nIt would have been nice if the paper had demonstrated how these tests could actually inform algorithm design.\\n\\nNonetheless I would recommend that it be accepted. I hope the authors can address some of the issues I've brought up.\", \"detailed_comments\": \"Are these optimization surfaces unimodal? If not, couldn't it be the case that some optimization methods simply will get 'lucky' and bounce their way into a better local basin, whereas other that might be more careful about remaining stable in the face of noise will miss these? This seems to me like it might not be a realistic analogy of the kinds of optimizations we care about, where multimodal landscapes with multiple modes of highly variable quality don't seem to exist. I'm thinking about neural network optimization in particular here, and going on my experience that typically the lowest achievable error from run to run doesn't vary all that much (if at all).\", \"another_major_issue_i_have_is_that_by_testing_local_convergence_of_methods_you_are_really_only_testing_one_them_in_a_single_and_likely_not_too_important_phase_of_optimization\": \"fine convergence to a local min. Often in these cases the method that can deal with the noise the best wins. I'm not sure if this is the best analogy to what happens in deep net optimization, where fine convergence to a local min seems to be only a small and not particularly important phase of optimization (which is often just associated with overfitting).\\n\\nI would wager that the most significant aspect of optimization is the journey towards a local min from very far away along a very curvy path that can possibly lead to other local mins of roughly equivalent quality. 
It is not clear to me that your unit tests properly capture this aspect of optimization by focusing either on fine convergence to local optima, or the tendency for an optimizer to jump out of one local min into a closely situated and much higher quality one.\", \"page_1\": \"I don't really understand what you are saying in the 'locally' paragraph. Are you saying that you want to assume local optimization because your units tests, if they are to be run quickly, can only consider toy examples with a simple local structure (unlike a neural network which has a 'global structure')? And what does this have to do with non-stationary algorithms (or do you mean non-stationary objective functions?) or 'initialization bias'? Don't you still need to initialize the algorithms in the unit tests and won't this choice affect the relative performance of different methods?\", \"page_3\": \"With these noise prototypes, is it obvious why their mean must recover the gradient (i.e. so they are unbiased)? For the additive case this is clear, since the noise has mean 0, but it is less clear for multiplicative noise. I suppose if the average scale is alpha the multiplicative noise would on average estimate alpha*gradient.\\n\\nThis is an important point, since the stochastic gradients need to be unbiased for many stochastic optimization algorithms to work, at least in theory. \\n\\nCauchy noise, for example, doesn't even have a mean, so this seems a bit problematic. Although I suppose for certain choices of its parameters, the distribution is at least centred around 0. Perhaps this might be good enough in practice. Is this what you did?\\n\\nThe mask-out noise is unbiased as far as I can tell.\\n\\nYou really should discuss the issue of unbiasedness of your noise in general, as this is very important to the theory of these methods.\", \"page_4\": \"It is not clear to me why gradient descent should work at all when it is given vectors from a vector field that is not actually a gradient field. And assuming it does work for certain such fields, the reasons are probably subtle and highly situation dependent. I don't imagine that this can be easily simulated by applying a fixed rotation to the gradient. In fact, I can't even imagine how gradient descent could ever converge if it were given gradient with a fixed rotation. For example, you could just permute the coordinates. That is a valid rotation, but surely this would cause most reasonable algorithms to fail catastrophically.\\n\\nAlso, you should explain concept of curl etc for this audience.\", \"page_5\": \"I think a better way of simulating non-stationarity of the objective function would be to perhaps randomly jiggle or move the various shape and scale parameters that define your objective. Mere translation doesn't seem particularly hard to deal with, or realistic.\", \"page_6\": \"Double the progress of vanilla SGD doesn't seem particularly 'excellent' to me. Perhaps merely 'good'.\", \"page_7\": \"So if I understand correctly, the vertical axis gives the different methods with different hyper parameter choices for each method?\\n\\nMany of these methods, such as Nesterov's accelerated gradient, have automatically adapted hyperparameters, which is sometimes done according to a fixed schedule. The momentum parameter in particular usually isn't supposed to be constant, at least in theory. \\n\\nAnd in general, for stochastic optimization methods, there usually are no guarantees for a fixed learning rate. 
Some methods like ADAGRAD implicitly anneal the learning rate, while with others, like plain SGD, or accelerated gradient SGD, you need to reduce the learning rate adaptively or at least with a schedule if you plan to have fine convergence to a local min.\\n\\nAre you keeping these fixed, are you are instead varying the hyper-hyperparameters of the methods that adjust the hyperparameters? \\n\\nAre methods with 'thicker' regions ones with more adjustable hyperparameters?\\n\\nIt seems to me that it doesn't really make sense to plot the values for all hyper-parameters, when some are clearly crazy. If a method has lots of hyperparameters, it shouldn't be judged to be 'less robust' if for certain crazy choices of these it diverges. And it isn't always true that we have no way of determining good hyperparameters for methods that have them. Binary search, or perhaps Bayesian optimization are certainly better than exhaustive sweeps. More simply than that, one can just adjust these things on the fly, with a heuristic or manually if needed, as partial progress with sub-optimal hyperparameter choices doesn't need to be thrown out (unless very bad divergence has taken place, and then you can always backtrack to a previous snap-shot of the parameters).\"}", "{\"title\": \"review of Unit Tests for Stochastic Optimization\", \"review\": \"Summary\\n------------\\n\\nThis paper introduces a suite of unit tests for optimization algorithms that attempt to analyze the algorithm performance in very specific and isolated cases (for example : how does it deal with a saddle point or a cliff like structure in the error). This work feels like a needed step in the right direction. As the authors pointed out, passing these tests is not sufficient to claim an algorithm is better than another, but it is necessary to understand some possible weaknesses of the algorithm and for doing a more proper comparison between different algorithms (compared to just looking at the error curve for some randomly selected task and model).\", \"comments\": \"--------------\\n\\nI think the success of such an approach is in the details. The engineering effort to use a specific such framework to test some new algorithm will have a huge impact on whether these unit tests will be successful or not. Unfortunately some of these things are hard to predict right now (and in some sense are beside the point of the paper).\\nThe other thing that one has to keep in mind is the effect of high dimensional spaces on the problem we try to solve. E.g. one can prove that for some family of models (say auto encoders) on some bounded domain some given error will have saddle points regardless of the dimensionality of the input. However the distribution of this saddle points is a different story and it might be hard to say how important it is to deal with saddle points or not for this specific model.\\n\\nBasically what am I trying to say is that the existence of some prototype in real data is easy to assert, but the probability that you will have to deal with said prototype is hard. And this is not a criticism to this work, but more a message that anyone should keep in mind. I guess what I'm trying to point out is that interpreting the results of these unit tests is far from trivial and might be even counter-intuitive.\\n\\nAnother important detail for any such suite of unit tests is the tools that one can use to investigate the results. 
If I think of the unit test metaphor for code development, usually the results of a large suite of tests is interpreted via failure (and how many tests failed) or success. Running such a suite of experiments on an optimization task is not as easy to interpret. First of all is hard to know which tests we care most about, and is hard to know how the correlation between the failures on different tests affect the algorithm on real task. While the paper mentions that there are tools for visualizing these results, there are not many details given about these tools. I think an interesting question on its own is what are the write visualization of the results and how to interpret them, a question which I don't think is fully answer by the current work.\"}", "{\"review\": \"The updated version of the paper is now visible on arxiv, and the updated code at https://github.com/IoannisAntonoglou/optimBench can reproduce the result figures.\"}", "{\"reply\": \"Thank you for your constructive and detailed comments, we\\u2019ve revised the paper to take them into account, and here are some specific answers:\\n\\n> I am not fully convinced that they are testing the 'right' kind of thing or if these tests can tell us much of use about the optimization problems we really care about. \\n\\nAs reviewer 5afd point out, this paper aims to be a start in this direction. Different users of stochastic gradient methods may care more or less about different issues (e.g. high dimensions, local optima of varying quality, etc), and so a lot is being left to future work. Nevertheless, we argue that most of our proposed unit tests are more or less representative of a (part of an) optimization surface that could be encountered, and that a good algorithm should behave robustly on most of them.\\n\\n> Are these optimization surfaces unimodal? \\n\\nYes, most of the surfaces are unimodal. \\n\\n> If not, couldn't it be the case that some optimization methods simply will get 'lucky' and bounce their way into a better local basin, whereas other that might be more careful about remaining stable in the face of noise will miss these? \\n\\nAll experiments are repeated many times to minimize the effect of \\u201clucky\\u201d runs. Careful algorithms have different properties than more aggressive or stochastic algorithms, and this is should be reflected in them having different strengths and weaknesses on different sets of unit tests -- including but not limited to multimodal surfaces. \\n\\n> Another major issue I have is that by testing local convergence of methods you are really only testing one them in a single and likely not too important phase of optimization: fine convergence to a local min. [...] I would wager that the most significant aspect of optimization is the journey towards a local min from very far away along a very curvy path that can possibly lead to other local mins of roughly equivalent quality. \\n\\nIndeed, some unit tests test fine convergence to a local minimum, but there are many others as well, that test for example non-divergence under high noise, or the local optimization dynamics on shape prototypes that have their optimum at infinity (linear slope, sigmoid, exponential, saddle points, etc.) -- so these latter unit tests verify that an algorithm keeps making progress. In other words, they check for sane behavior during the long \\u201cjourney\\u201d that characterizes most early phases of optimization.\\n\\n> Page 1: In what sense do you mean that these weaknesses are separate from 'raw performance'. 
If an algorithm is performing well on some task, why would I care if it has some invisible issues on that task, as long as it appears to be working? \\n\\nFor any given task, the best algorithm is determined by its raw performance (and may be a different one each time). This is a separate question from which algorithm is robust in general and likely to work well on new, unknown problems.\\n\\n> Page 1: I don't really understand what you are saying in the 'locally' paragraph. \\n\\nWe have revised this paragraph for clarity.\\n\\n> Don't you still need to initialize the algorithms in the unit tests and won't this choice affect the relative performance of different methods? \\n\\nYes, initialization and algorithm state are one currently unresolved issue, but in section 4.1, we discuss a possible way of addressing this in future work.\\n\\n> You really should discuss the issue of unbiasedness of your noise in general.\\n\\nWe have clarified in section 2.3 that the noise is not necessarily unbiased.\\n\\n> Page 4: How are you generating these random rotations? Are these just like random orthonormal matrices? \\n\\nYes.\\n\\n> Page 4: It is not clear to me why gradient descent should work at all when it is given vectors from a vector field that is not actually a gradient field. And assuming it does work for certain such fields, the reasons are probably subtle and highly situation dependent. \\n\\nThis is true, and in the cited reinforcement literature the issue of when it may converge anyway has been studied extensively.\\n\\n> I don't imagine that this can be easily simulated by applying a fixed rotation to the gradient. \\n\\nIn some simple cases, such as the one in Figure 4, it can, indeed, there the vector field is exactly a gradient field combined with a rotation (and a large class of algorithms do converge in this scenario) -- we have clarified this point in the latest revision.\\n\\n> Page 5: What do you mean in the sentence: 'However, non-stationary optimization can even be important in large stationary tasks, when the algorithm itself chooses to track a particular aspect of the problem, rather than solve the problem globally.'? \\n\\nWe have clarified this in section 2.6, with the details being deferred to the cited [21].\\n\\n> Page 5: I think a better way of simulating non-stationarity of the objective function would be to perhaps randomly jiggle or move the various shape and scale parameters that define your objective. \\n\\nThank you, have added these modifiers to expand the set of unit tests to include all three types of non-stationarity in the revised version.\\n\\n> Page 6: Why didn't you test the method(s) from [9]?\\n\\nWe are presently working on more robust variants of that method, with the help of the presented unit tests actually, to be published shortly.\\n\\n> Page 7: What are these 'groups'? Where do you describe what they are? \\n\\nVertically, each group of rows is one algorithm with different hyperparameters, horizontally, each group of columns is a collection of unit tests with a certain property -- this is now described more clearly in the text.\\n\\n> Page 7: So if I understand correctly, the vertical axis gives the different methods with different hyper parameter choices for each method? [...] Are methods with 'thicker' regions ones with more adjustable hyperparameters? \\n\\nYes and yes. 
Section 3.1 gives the overview of what these hyperparameters are for each algorithm, and which values are swept over (full details are in the published code).\\n\\n> If a method has lots of hyperparameters, it shouldn't be judged to be 'less robust' if for certain crazy choices of these it diverges. \\n\\nWe try not to make a judgement on which hyperparameters are reasonable. Instead we want to show that our unit tests can be a tool for determining whether a single setting of hyperparameters can be robust in most scenarios, or whether some algorithms have to be tuned to the problem.\"}", "{\"title\": \"review of Unit Tests for Stochastic Optimization\", \"review\": \"This paper bravely proposes to test the empirical convergence of stochastic optimization algorithms using a vast collection of simple and relatively standardized tests. They explain how they construct the tests and perform experiments that lead to a striking visualization (figure 5). Unfortunately none of the compared algorithms appear to solve all the problems robustly.\\n\\nThis idea could appear na\\u00efve because it is not supported by theoretical considerations and represents a purely empirical perspective. However there are many reasons to consider that this idea has great potential. First, similar ideas have worked in other fields. It is customary to compare general optimization codes on a collection of well known benchmark problems, not because it provides a guarantee, but because it provides a sanity check. Second, we must recognize optimization in deep learning systems is still beyond the reach of theoretical analysis. According to the current theoretical knowledge, it should not work. Therefore the best way to investigate such algorithms remains a well designed collection of empirical comparisons. The comparison described in this paper is a good start in that direction.\"}", "{\"reply\": \"Thank you for your comments!\\n\\n> The existence of some prototype in real data is easy to assert, but the \\n> probability that you will have to deal with said prototype is hard. \\n\\nIndeed. We try to argue that the scenarios covered by the proposed unit tests could occur, not that they necessarily occur a lot. But a good algorithm should behave robustly on most of them when they do occur.\\n\\n> While the paper mentions that there are tools for visualizing these \\n> results, there are not many details given about these tools. I think an \\n> interesting question on its own is what are the write visualization of the\\n> results and how to interpret them, a question which I don't think is fully\\n> answer by the current work.\\n\\nIn fact, Figure 5 is such a proposed visualization, and it is described in section 3 (fuller details are available in the published code). It is designed to provide a quick qualitative overview and flag potential weaknesses of an algorithm.\"}" ] }
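
Several threads in the record above turn on concrete, easily simulated details: whether the additive, multiplicative, and mask-out noise prototypes give unbiased gradients, and whether an optimizer with one fixed hyperparameter setting stays stable under them. The NumPy sketch below is only an illustration of that style of unit test, not the published optimBench framework linked above; the prototype loss, noise scales, and pass criterion are all assumptions.

```python
# Illustrative sketch of a noise-prototype unit test: a clean quadratic loss,
# three noise wrappers around its gradient, and a crude pass/fail check for
# plain SGD with a fixed learning rate. Hypothetical names and thresholds.
import numpy as np

rng = np.random.default_rng(0)

def quadratic_grad(x):
    # Gradient of the clean prototype loss f(x) = 0.5 * ||x||^2.
    return x

def noisy_grad(x, kind, scale=1.0):
    g = quadratic_grad(x)
    if kind == "additive":        # unbiased: E[g + eps] = g
        return g + scale * rng.normal(size=x.shape)
    if kind == "multiplicative":  # rescaled so the noise factor has mean 1
        m = rng.lognormal(mean=0.0, sigma=scale, size=x.shape) / np.exp(scale**2 / 2)
        return g * m
    if kind == "maskout":         # zero each coordinate w.p. 1/2, rescale by 2 to stay unbiased
        mask = rng.integers(0, 2, size=x.shape)
        return 2.0 * mask * g
    raise ValueError(kind)

def run_unit_test(kind, lr=0.1, steps=2000):
    x = np.full(2, 5.0)           # fixed, non-trivial starting point
    for _ in range(steps):
        x -= lr * noisy_grad(x, kind)
        if not np.all(np.isfinite(x)):
            return "diverged"
    # "pass" here just means: stayed finite and made progress toward the optimum.
    return "pass" if np.linalg.norm(x) < 5.0 else "no progress"

for kind in ["additive", "multiplicative", "maskout"]:
    print(kind, run_unit_test(kind))

```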
26CF62DAFs2-K
Learning Type-Driven Tensor-Based Meaning Representations
[ "Tamara Polajnar", "Luana Fagarasan", "Stephen Clark" ]
This paper investigates the learning of 3rd-order tensors representing the semantics of transitive verbs. The meaning representations are part of a type-driven tensor-based semantic framework, from the newly emerging field of compositional distributional semantics. Standard techniques from the neural networks literature are used to learn the tensors, which are tested on a selectional preference-style task with a simple 2-dimensional sentence space. Promising results are obtained against a competitive corpus-based baseline. We argue that extending this work beyond transitive verbs, and to higher-dimensional sentence spaces, is an interesting and challenging problem for the machine learning community to consider.
[ "meaning representations", "tensors", "transitive verbs", "learning", "semantics", "part", "semantic framework", "field", "compositional distributional semantics", "standard techniques" ]
submitted, no decision
https://openreview.net/pdf?id=26CF62DAFs2-K
https://openreview.net/forum?id=26CF62DAFs2-K
ICLR.cc/2014/conference
2014
{ "note_id": [ "9ECjsHg-MXsWH", "11ic1LsAz6LUk", "DT9EX7jt-1Xwm", "4iL2qSHRBri5h", "Hore_5SLB4_m2", "RBmD3BhWN4Bwg" ], "note_type": [ "review", "review", "review", "comment", "review", "review" ], "note_created": [ 1392631920000, 1391496300000, 1392755460000, 1392728940000, 1391712720000, 1391639160000 ], "note_signatures": [ [ "anonymous reviewer 6020" ], [ "anonymous reviewer 67d9" ], [ "Tamara Polajnar" ], [ "Stephen Clark" ], [ "Tamara Polajnar" ], [ "anonymous reviewer 0365" ] ], "structured_content_str": [ "{\"title\": \"review of Learning Type-Driven Tensor-Based Meaning Representations\", \"review\": \"This paper introduces a method to train very low 2-dimensional representations for different syntactic types and evaluates it on a benchmark of selectional preferences.\\n\\n\\nRegarding\\n'assume that words, phrases and sentences all live in the\\nsame vector space. The tensor-based semantic framework is more flexible, in that it allows different\\nspaces for different grammatical types, which results from it being tied more closely to a type driven\\nsyntactic description; however, this flexibility comes at a price, since there are many more\\nparamaters to learn.'\\nIn fact, for non-toy tasks, the number of parameters will be too many. Realistically, to capture cooccurence counts of single words, single vectors need to be at least 100 dimensions, which will then require 1000000 parameters to represent a simple adverb.\\n\\nFurthermore, one can argue that this particular tensor based framework is actually less flexible in that it could not compare the semantic similarity between simple phrases such as 'a rainy day', 'raining all day' and 'sunshine' or 'swim', 'swimmer' and 'swimming'. People are able to easily determine how similar or related these concepts are and a framework that assigns them representations of different dimensionality is not flexible enough to do this.\", \"regarding\": \"'plausibility space to have two dimensions, one corresponding to plausible and one corresponding to implausible.'\\nThis could be simply captured by a single number that is high for plausible and low for implausible?\\nIn fact, in standard machine learning a 2 class softmax (which is being used in this paper) is usually collapsed to a single sigmoid unit.\\n\\nSince the MEN dataset has just been introduced 2 years ago and it is not very well known, it would be great if you described it in your paper.\\n\\nSection 5.1 mentions problems with rare words.\\nHow could this approach be extended to learn representations for rare combinations or words? People can certainly understand a word from very few examples.\\nOn a related note, could this approach ever learn representations for all syntactic types in setups that move beyond trigrams? Presumably, the counts for most such n-tuples will be extremely sparse.\\n\\nSince you used techniques from neural networks to train your model, why not compare to neural networks to predict the same statistics?\\nHow was your SVD word vector representations cross validated? 
Did you try alternative word vectors?\\n\\nThis paper works in the interesting area of combining discrete syntactic information with distributed representations.\\nIt is well written but some doubts remain about the approach.\\n\\nWhile the paper is not very novel (approach A + approach B) it may expose the ml community to type and syntactic approaches.\"}", "{\"title\": \"review of Learning Type-Driven Tensor-Based Meaning Representations\", \"review\": \"The authors investigate learning tensor representations of verbs using fixed vector representations of nouns, building on Steedman's CCGs and Coecke's work on modeling meaning representations using linear maps (so that, for example, a verb 3-tensor is right-contracted with its object noun vector and left-contracted with its subject noun vector, to output a sentence living in another vector space). In particular, they learn the noun/verb/noun mapping, turning the problem into a binary classification problem (where the binary labels correspond to legal combinations, and impostor combinations), using L2 regularized logistic regression. Results are given on ten verbs, chosen for varying levels of frequency and 'concreteness', and using up to 2000 positive and 2000 negative N/V/N training sentences, constructed using the Google N-Gram corpus and Wikipedia. The results are compared to a baseline where the verb tensors are constructed using the outer product of the vector representations of their subject and object nouns, for legal uses of that verb. Throughout the work the nouns are represented by fixed vectors using a statistical co-occurrence technique based on that of Turney and Pantel. The proposed technique beats the proposed baseline in accuracy.\", \"this_is_an_interesting_and_useful_direction_for_research\": \"to try to construct linear maps that more faithfully represent the functional aspects of language (e.g. a verb is a function that takes two nouns and outputs a sentence). I wonder, however, why the authors didn't just treat the problem as a standard classification problem. Why map to a two dimensional space, one axis of which models plausibility, and the other, implausibility? How is the point (1,1) in this space different from the point (0.5, 0.5) (i.e. what does simultaneously being 'more plausible' and 'less plausible' mean)? Wouldn't it be more natural, and simpler, to just map to the real line? (These questions also apply to Clark's paper from which this idea is inherited.) Also the baseline seems quite weak, as it is constructed from only positive examples. Furthermore the generalization performance of the resulting system was split by sentence, not by noun - that is, nouns that appear in the training set can also appear in the test set, suggesting other, simpler ways of constructing baselines (see below). I do think the paper has some value as a step towards investigating more general linear maps to model language; the link between CCGs and linear maps seems intuitive.\\n\\n\\nSpecific Comments\\n*****************\\n\\nNumbers are section numbers.\\n\\n1.0\\n\\nIt seems a bit of a stretch to call the simple gradient descent method used here, 'deep learning'.\\n\\n2.0\", \"clarity\": \"I think it would help the reader's understanding to explain how the string 'with' in 'eat with a fork' maps to the category ((SNP)(SNP))/NP.\\n\\n3.0\\n\\nAgain, why not use a one dimensional representation for plausibility? 
What is the intuition for using two dimensions?\\n\\n3.1\\n\\nThe problem appears to decompose into very small decoupled optimization problems (an objective function for each verb) since the verb parameters are independent and the noun vectors are fixed (right?). It would have been interesting to allow the noun vectors to be learned also, perhaps starting from your initial values (but a much harder optimization problem).\\n\\n3.2\\n\\nThe baseline - cosine similarity on the matrix elements of the Kronecker product matrix and that of the target subject/object pair - seems very ad-hoc, and so it's hard to see what we learn by comparing against it.\\n\\n4.2\\n\\nA little more clarification is needed here. From what I understand, you took, for each noun, the top N most relevant context words, each of which is also a vector, thus forming the 'noun context matrix'. But how were the vectors for the context words computed? In the same way as the nouns, thus giving a 10,000 dimensional vector for each context word? Also, when you do the SVD, you will wind up with a matrix representation (constructed from either the left singular vectors or the right, depending on how the problem was set up). How do you wind up with a 20 dimensional vector? If you just used the top 20 singular values, that would seem to throw away crucial information. \\n\\n5.0\", \"there_appears_to_be_a_flaw_in_the_experimental_protocol\": \"no attempt was made (or at least, mentioned) to keep the nouns that appear in the test set, different from those that appear in the training set. If this is so it makes the results weaker, as methods that leverage such coincidences may do much better. For example, we could construct a simple baseline for a given N1-V-N2 test triple by saying that if N1-V appears in the training set, and also if V-N2 does too, then output '1', else output '0'. At least I think it's important to break results out by such a 'did noun occur in training set' stratification.\"}", "{\"review\": \"We have uploaded an updated version which addresses some of the points raised by the reviewers. The new version will be visible after Wed, 19 Feb 2014 01:00:00 GMT.\"}", "{\"reply\": \"> > In fact, for non-toy tasks, the number of parameters will be too\\n> > many. Realistically, to capture cooccurence counts of single words,\\n> > single vectors need to be at least 100 dimensions, which will then\\n> > require 1000000 parameters to represent a simple adverb.\\n> \\n\\nIt is not clear that at least 100 dimensions are needed to encode nouns. For example, Socher et al use 50-dimensional word vectors to encode all word types in their large-scale experiments. We're only seeking to encode nouns using context vectors so it is likely that we can effectively use fewer dimensions.\\n\\nNevertheless you make a valid point that encoding complex types using higher order tensors leads to a large number of parameters. We are currently looking into methods for remaining true to the framework while reducing the estimation, and storage, requirements for more complex types. However we reserve this research for future publications.\\n\\n> > Since the MEN dataset has just been introduced 2 years ago and it is\\n> > not very well known, it would be great if you described it in your\\n> > paper.\\n> \\n> > How was your SVD word vector representations cross validated? 
Did\\n> > you try alternative word vectors?\\n> \\n\\nThe results in Figure 2 support the fact that the way we apply SVD results\\nwith only a slight reduction in performance with 20 dimensional vectors\\n(0.71) from the full 10,000 dimensional vectors (0.75). We will add a\\nfew sentences describing MEN data and also give results for 20\\ndimensional and full 10,000 vectors on the WS353 data which\\nreaders may be more familiar with.\\n\\n> > Furthermore, one can argue that this particular tensor based\\n> > framework is actually less flexible in that it could not compare the\\n> > semantic similarity between simple phrases such as 'a rainy day',\\n> > 'raining all day' and 'sunshine' or 'swim', 'swimmer' and\\n> > 'swimming'. People are able to easily determine how similar or\\n> > related these concepts are and a framework that assigns them\\n> > representations of different dimensionality is not flexible enough\\n> > to do this.\\n> \\n\\nThis type of comparison is possible with some existing representations\\n(such as some versions of the one by Socher et al.), but is not\\npossible with other type-driven representations (e.g. the convolution\\ntree kernels by Zanzotto) which purposely encode different parts of a\\ntree in different subspaces to ensure that both syntactic and semantic\\nsimilarity are encoded in one model. But you are right that there are\\ntrade-offs involved in having a fully type-driven semantics.\\n\\n> > Regarding: 'plausibility space to have two dimensions, one\\n> > corresponding to plausible and one corresponding to implausible.'\\n> > This could be simply captured by a single number that is high for\\n> > plausible and low for implausible? In fact, in standard machine\\n> > learning a 2 class softmax (which is being used in this paper) is\\n> > usually collapsed to a single sigmoid unit.\\n\\nWe addressed this question in the response to the first two reviewers.\\n\\n> > Section 5.1 mentions problems with rare words. How could this\\n> > approach be extended to learn representations for rare combinations\\n> > or words?\\n\\nA couple of simple ways of dealing with this problem are to seed the\\ntraining of a new low frequency word by a prototype tensor of similar\\nwords, or by backing off to a tensor representing all words of the same\\ntype.\\n\\n> On a related note, could this approach ever learn representations for\\n> all syntactic types in setups that move beyond trigrams? Presumably, \\n> the\\n> counts for most\\n> such n-tuples will be extremely sparse.\\n\\nWe only use frequent n-tuples to generate plausible examples for this\\ndataset. We don't need highly frequent tuples for training as we only\\never pass each tuple to the learning method once.\"}", "{\"review\": \"Dear Reviewers,\\n\\nThank you very much for your insightful comments. We respond directly\\nbelow to the comments of Reviewer 1, but these also cover most of the\\npoints made by Reviewer 2.\\n\\n> Why map to a two dimensional space, one axis of which models\\n> plausibility, and the other, implausibility? How is the point (1,1)\\n> in this space different from the point (0.5, 0.5) (i.e. what does\\n> simultaneously being 'more plausible' and 'less plausible' mean)? Wouldn't it be more natural, and simpler, to just map to the real\\n> line? 
(These questions also apply to Clark's paper from which this\\n> idea is inherited.)\\n\\nIn the compositional framework that we base our experiments on\\n(Coecke et al.), the sentence space is represented as a vector\\nresulting in a tensor representation for the verb, so part of the\\nreason we chose an overparameterised representation is to remain\\nfaithful to this framework.\\n\\nBoth reviewers ask why we did not represent the verb as a matrix, with\\nthe sentence space being a scalar. In fact we did try these\\nexperiments, with the verb as a matrix and the output a scalar value\\ntransformed by a sigmoid, resulting in a single-output two-class\\nclassifier. This approach was not as effective as the softmax\\noverparameterised version, and to make the paper more concise we\\nomitted these results. We can add a line or two to the final version\\nto clarify this point.\\n\\n> Also the baseline seems quite weak, as it is constructed from only\\n> positive examples. Furthermore the generalization performance of the\\n> resulting system was split by sentence, not by noun - that is, nouns\\n> that appear in the training set can also appear in the test set,\\n> suggesting other, simpler ways of constructing baselines (see\\n> below).\\n\\nThis baseline was chosen as it has precedent in the paper of\\nGrefenstette and Sadrzadeh (2011), the first paper discussing\\nexperimental approaches for this compositional framework. In general,\\nmethods in distributional semantics typically use positive-only data,\\nand hence we consider this to be a reasonable baseline. Furthermore,\\nwe have implemented the alternative verb-as-a-matrix model, as\\ndiscussed above, and can present results for it if necessary.\\n\\n> It seems a bit of a stretch to call the simple gradient descent\\n> method used here, 'deep learning'.\\n\\nIn the context of this simple experiment with the single logistic\\nregression node, we agree that the claim that this is deep learning is\\noverstated. Nevertheless, the aim of this work is just an initial\\ninvestigation, the ultimate goal of which is to investigate longer\\nsentences whose more complicated sentence structures would necessitate\\nthe use of 'deep learning' techniques with more complex networks.\\n\\n> Clarity: I think it would help the reader's understanding to explain\\n> how the string 'with' in 'eat with a fork' maps to the category\\n> ((SNP)(SNP))/NP.\\n\\nWe will clarify this point.\\n\\n> The problem appears to decompose into very small decoupled\\n> optimization problems (an objective function for each verb) since\\n> the verb parameters are independent and the noun vectors are fixed\\n> (right?). It would have been interesting to allow the noun vectors\\n> to be learned also, perhaps starting from your initial values (but a\\n> much harder optimization problem).\\n\\nThis is an interesting suggestion, and has been explored in related\\nwork (Andreas and Gharamani, 2013) and anecdotally by Socher.\\n\\n> A little more clarification is needed here. From what I understand,\\n> you took, for each noun, the top N most relevant context words, each\\n> of which is also a vector, thus forming the 'noun context\\n> matrix'. But how were the vectors for the context words computed? In\\n> the same way as the nouns, thus giving a 10,000 dimensional vector\\n> for each context word? 
Also, when you do the SVD, you will wind up\\n> with a matrix representation (constructed from either the left\\n> singular vectors or the right, depending on how the problem was set\\n> up). How do you wind up with a 20 dimensional vector? If you just\\n> used the top 20 singular values, that would seem to throw away\\n> crucial information.\\n\\nWe constructed the vectors in a standard way for distributional\\nsemantics, so we did not describe the method in great detail. The\\ncontext words are just used as indices (or rather basis vectors) of\\nthe target word vectors. Furthermore, using SVD is a standard\\ntechnique to reduce noise and uncover the latent dimensions (e.g. see\\nTurney and Pantel). 20 is perhaps a little low compared with other\\nwork (e.g. Socher uses 50), but with additional techniques including\\ncontext selection and normalisation (see our EACL-14 paper) we can get\\naway with 20 dimensions. We found that adding more dimensions did not\\nconsistently improve results.\\n\\n> There appears to be a flaw in the experimental protocol: no attempt\\n> was made (or at least, mentioned) to keep the nouns that appear in\\n> the test set, different from those that appear in the training\\n> set. If this is so it makes the results weaker, as methods that\\n> leverage such coincidences may do much better. For example, we\\n> could construct a simple baseline for a given N1-V-N2 test triple by\\n> saying that if N1-V appears in the training set, and also if V-N2\\n> does too, then output '1', else output '0'. At least I think it's\\n> important to break results out by such a 'did noun occur in training\\n> set' stratification.\\n\\nWe ensure that the subject-verb-object *triples* are unique, but not\\nallowing any subject or object nouns in the test data to occur in the\\ntraining data we think is too restrictive (and would most likely\\nresult in a small training set). Such a scenario would also be\\nunrealistic when moving to longer sentences.\\n\\nFinally, we wish to stress the point that this paper is a first\\nattempt at implementing the tensor-based compositional framework, with\\na simple sentence space. In future work our intention is to move to\\nmore complex sentences and sentence spaces, for which the learning\\nmethods used here will become more indispensable.\"}", "{\"title\": \"review of Learning Type-Driven Tensor-Based Meaning Representations\", \"review\": \"This paper presents a method for learning 3-modes tensor for representing meaning of transitive verbs. The choice of this representation for verbs is motivated by the Combinatory Categorical Grammar framework; nouns are represented by low-dimensional continuous vectors obtained by a modified SVD on a co-occurrence matrix. Verbs tensors are learned for 10 verbs using subject-verb-object triples extracted from Wikipedia and Google N-grams. Evaluation is carried out by asking the system to predict the verb given a pair (subject, object).\\n\\n---\\n\\nThe paper is clearly written and tackles an important and hot topic. The introduction and section 2 are actually very interesting to motivate this field of research: attempting to combine distributional and combinatorial semantics. However, I find the message a bit too ambitious and misleading w.r.t. the content of the remainder of the paper, and hence its actual contributions. Indeed, the paper ends up by learning representations for 10 verbs (nouns representations being obtained using methods developed in previous work). 
This might be an important first step, but this is not exactly what the introduction and title indicate.\\n\\nThe sentence of the introduction 'One goal of this paper is to introduce the problem of learning tensor-based semantic representations to the machine learning community' is a bit strong, since there exist many tensor learning work, perhaps not exactly in the same context but still, as well as the paper by Krishnamurthy and Mitchell from last year (cited in the paper). \\n\\nBesides, the learned tensors are quite strange since one slice has only 2 dimensions for predicting plausible/implausible probabilities. What is different from using a single value to predict the plausibility only? This would imply simply representing verbs by matrices, but I wonder if the result would be much different. In test, I assume that the ranking of verbs given pairs of (subject, object) is carried out by sorting using the plausibility probability. The role of the second value destined to encode for implausibility is unclear (and hence the role of tensors with a mode of dimension 2).\\n\\nEven if using such tensors was fully justified, the paper needs to propose a comparison with a baseline encoding verbs as matrices (the current baseline is too weak). For instance, the paper by Jenatton et al. 'A latent factor model for highly multi-relational data' (NIPS 2012) proposes very similar experiments by also training on subject-verb-object extracted from Wikipedia to learn matrices for representing verb meanings. I think this system or an adaptation of it would make a much better baseline.\", \"other_remarks\": [\"Which norm is used in Equation 1? L2-norm for tensors or matrix is ambiguous. How was set lambda?\", \"In Equation 2, f_{w_i , c_j} should be defined.\", \"It seems that there is a single N (and not 1 per noun). This should be made more clear.\", \"Figure 2 is rather unclear. What is K? The size of the noun vectors?\", \"In the beginning of Section 5, for the second experiments shouldn't it be 52 instead of 42?\"]}" ] }
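
The reviews above describe the core composition step in words: a transitive verb is a 3rd-order tensor, right-contracted with the object noun vector and left-contracted with the subject noun vector, producing a 2-dimensional (plausible, implausible) sentence vector trained with L2-regularized logistic regression on positive and impostor triples. The sketch below illustrates only that step, with toy dimensions and random stand-ins for the fixed 20-dimensional noun vectors; it is not the authors' code, and the learning rate and regularizer are assumptions.

```python
# Hedged sketch of verb-as-tensor composition and one training step.
import numpy as np

rng = np.random.default_rng(0)
K = 20                                    # noun vector dimensionality (SVD-reduced)
S = 2                                     # 2-d (plausible, implausible) sentence space

verb = 0.01 * rng.normal(size=(S, K, K))  # one 3rd-order tensor per verb

def compose(subj, obj):
    # Contract the verb tensor with subject (left) and object (right) vectors.
    scores = np.einsum('skl,k,l->s', verb, subj, obj)
    e = np.exp(scores - scores.max())
    return e / e.sum()                    # softmax over (plausible, implausible)

def grad_step(subj, obj, label, lr=0.1, lam=1e-3):
    # One step of L2-regularized logistic-regression training on a single
    # (subject, verb, object) triple; label = 1 for plausible, 0 for impostor.
    global verb
    p = compose(subj, obj)
    target = np.array([label, 1 - label], dtype=float)
    err = p - target                      # softmax cross-entropy gradient w.r.t. scores
    grad = np.einsum('s,k,l->skl', err, subj, obj) + lam * verb
    verb -= lr * grad

# Toy usage with random stand-ins for the fixed noun vectors.
subj, obj = rng.normal(size=K), rng.normal(size=K)
for _ in range(50):
    grad_step(subj, obj, label=1)
print(compose(subj, obj))                 # first entry drifts toward 1

```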
YvgSX22hONWpI
Multimodal Transitions for Generative Stochastic Networks
[ "Sherjil Ozair", "Li Yao", "Yoshua Bengio" ]
Generative Stochastic Networks (GSNs) have been recently introduced as an alternative to traditional probabilistic modeling: instead of parametrizing the data distribution directly, one parametrizes a transition operator for a Markov chain whose stationary distribution is an estimator of the data generating distribution. The result of training is therefore a machine that generates samples through this Markov chain. However, the previously introduced GSN consistency theorems suggest that in order to capture a wide class of distributions, the transition operator in general should be multimodal, something that has not been done before this paper. We introduce for the first time multimodal transition distributions for GSNs, in particular using models in the NADE family (Neural Autoregressive Density Estimator) as output distributions of the transition operator. A NADE model is related to an RBM (and can thus model multimodal distributions) but its likelihood (and likelihood gradient) can be computed easily. The parameters of the NADE are obtained as a learned function of the previous state of the learned Markov chain. Experiments clearly illustrate the advantage of such multimodal transition distributions over unimodal GSNs.
[ "transition operator", "gsns", "markov chain", "multimodal transitions", "generative stochastic networks", "transitions", "alternative", "traditional probabilistic modeling", "data distribution" ]
submitted, no decision
https://openreview.net/pdf?id=YvgSX22hONWpI
https://openreview.net/forum?id=YvgSX22hONWpI
ICLR.cc/2014/conference
2014
{ "note_id": [ "IbETu001doIwY", "7OBiO3szQ4OS5", "hcwAfe0-aRZg7", "4ErBEqulds9aZ", "USe3SVdvf8Pp1" ], "note_type": [ "review", "review", "review", "review", "comment" ], "note_created": [ 1391887320000, 1392750540000, 1391570400000, 1391480160000, 1392750240000 ], "note_signatures": [ [ "anonymous reviewer 6682" ], [ "Yoshua Bengio" ], [ "anonymous reviewer 8572" ], [ "anonymous reviewer 3e78" ], [ "Yoshua Bengio" ] ], "structured_content_str": [ "{\"title\": \"review of Multimodal Transitions for Generative Stochastic Networks\", \"review\": \"Generative Stochastic Networks have recently been introduced as a new generative model formalism. Instead of directly parameterizing the data distribution, GSNs learn the transition distribution for a Markov chain which generates samples from the learned distribution. Previous results on GSNs, namely using the denoising criterion, have used a transition distribution that is factorized or unimodal. This makes it easier to learn the GSN, but limits the expressivity of the model. This paper proposes a type of GSN which uses conditional NADE to represent multi-modal output distributions for the transition operator. Results show it performing better than the uni-modal approach on MNIST and a toy 2d dataset.\\n\\nThe paper is novel. The GSN is a relatively recent formalism, with essentially all work coming from the same research group. This is the first work to propose a multi-modal transition operator. It's also an interesting use of NADE and bears some similarity to the recent use of NADE as an output distribution for recurrent neural networks. It is an interesting paper: well motivated and well written.\", \"pros\": [\"In my opinion, GSNs are a fascinating framework and this paper addresses one of the limitations of previous work\", \"The paper is clear and gives a great deal of background; in summarizing the work to-date on GSNs, it's complimentary to other published work\"], \"cons\": \"* Experiments are limited to a toy dataset (2d - a few hundred (?) examples) and MNIST; the former is ok for visualization and the later is used to compare methods based on an approximation to test log likelihood proposed by the same group\\n* The comparisons aren't very extensive - essentially the model without NADE and the baseline model using 'walkback' training\\n* Although the background and review of GSNs are welcome, more than half of the paper is devoted to background. I feel that the paper makes a good contribution, but not as substantial as other ICLR papers. It could certainly be strengthened by exploring other types of output distribution models beyond NADE or experiments beyond MNIST.\\n\\nComments\\n--------\\n\\nIn the experiments section, I suggest giving a pointer that discussion re: walkback not being applied to GSN-NADE will be addressed later (i.e. in next section).\\n\\nSection 4 refers to a Figure 2, but I believe it should be Figure 1.\\n\\nSection 6 'fatorial' -> 'factorial'\\n\\nEven though the CSL method is referenced, you could give a short description of it in the experiments section.\\n\\nIs R-NADE being used for the 2d dataset and regular NADE being used for MNIST? I didn't see this explicitly stated.\\n\\nFirst sentence on page 7 reads 'On the contrary, GSN-NADE with the walkback training alleviates this issue.' This is confusing, GSN-NADE didn't use walkback training. 
Should it say 'GSN-NADE without the walkback training?'\"}", "{\"review\": \"We thank the reviewers for the feedback and suggested improvements, which we will incorporate in the revised paper.\"}", "{\"title\": \"review of Multimodal Transitions for Generative Stochastic Networks\", \"review\": \"The authors present work that tackles learning data distributions indirectly, by learning a neural-network based transition operator (a Generative Stochastic Neural Network, GSN), whose stationary distribution is the desired distribution. Specifically, they introduce a GSN where the transition distribution is multimodal. The proposed model is evaluated in two experiments (spiral distribution and generating MNIST) and shown to produce samples that better capture the data distribution.\\n\\nI'm not enough of an expert to give an in-depth analysis of the proposed method, however, I review the paper in general terms.\", \"pros\": \"addressing sampling issues is worthwhile, and I would like to learn more about the proposed approach.\", \"cons\": \"the initial part of the paper is way too wordy. At the same time, details and explanations regarding the model and experimental procedures are missing. There's few actual results; this should have been written as a short workshop paper.\\n\\n\\nThe introduction/motivation section is very long and has some content that is not directly relevant to the paper, or detail that is unnecessary. The authors write over a page to motivate unsupervised learning in general, and to describe well-known facts (Boltzmann machines are difficult to handle; the existence of many, separated modes makes MCMC difficult). This strikes me as too general for the paper and could have been one or two short paragraphs instead. Similarly, what is the benefit of spelling out in full Theorem 1, from an earlier reference? Etc. \\n\\nThe writing should be made more succinct and be structured better. In Section 1, the general introduction of the topic and overview of the paper should be separated. Wording could be improved (e.g., '[...] MAP approximations could all fall on their face.'; or 'In a sense, this is something that we are directly testing in this paper.'--in a sense?). \\n\\nIn Section 5, I take it that the NADE factorization is now over pixels rather than frames? The notation should make this clearer (x_{} is used for both). RNADE was not defined (should be with ref [12]). A figure would have been helpful for understanding the GSN-NADE architecture (and the other models).\\n\\nResults are not presented until Section 6, more than halfway through the paper. Here, the methods used are not properly explained and not enough details are given. \\n\\nIn the first experiment, what are the model parameters and training procedure? For the second experiment, the authors write 'The training of model uses the same procedure as in [13]', with no further explanation. Similarly: 'To evaluate the quality of trained GSNs as generative models, we adopt the Conservative Sampling-based Log-likelihood (CSL) estimator proposed in [9]' and 'The GSN-1-w is [...] trained with the walkback procedure proposed in Bengio et al. [11].' No explanation is given for any of this. The references are all very recent (2013), so it's not like the general reader can be expected to be familiar with these methods. 
Again, model parameters are missing ('For the details of models being compared, please refer to [9].').\\n\\n\\nIn conclusion, if the authors were to expand the experimental section and better explain the methods used, and cut out unnecessary content, this could make for a workshop track paper. But I don't see enough content for a conference paper.\", \"further_notes\": \"\", \"page_3\": \"'Monte-Carlo Markov Chain (MCMC) methods' -> Markov chain Monte Carlo (MCMC) methods\", \"page_4\": \"'Figure 2' should be Figure 1?\\n\\nIt looks like the paper was updated after the review period had started (24 Jan.). It would be good in such cases to notify reviewers by posting a comment here, to make sure they are reviewing the latest version.\"}", "{\"title\": \"review of Multimodal Transitions for Generative Stochastic Networks\", \"review\": \"Recently, (Bengio et al. 2013b,2013c) proposed a novel way to estimate a data distribution indirectly, by training the transition operator T(x_{t}|x_{t-1}) of a Markov chain instead of trying to estimate the data distribution directly. These Generative Stochastic Networks (GSN) have several theoretical benefits, mainly avoiding the costly process of sampling in a highly multi-modal distribution (either when trying to sample from the model to estimate the so called 'negative gradient update', or when trying to perform MAP inference) as is traditionally required when training highly multi-modal models.\\n\\nThis paper proposes using a Neural Auto-Regressive Density Estimator (NADE) to learn a *multimodal* transitition operator, thus hopefully improving results compared to the above works in which the transition operator is unimodal.\\n\\nThe authors make a very good review of recent advances related to GSNs and argue that using a multi-modal transition operator could allow for better mixing of the markov chain and deal more gracefully with huge numbers of modes.\\n\\nTwo sets of experiments are performed, the first trying to estimate a 'spiral' distribution in 2D (a 2D slice of the traditional Swiss roll seen in manifold learning studies), the second on the Mnist dataset of handwritten digits. These experiments compare GSNs using the proposed multi-modal NADE transition operator with GSNs based on uni-modal transition operators which were used in previous works.\\n\\nThe results show that GSN-NADE fare much better at representing the spiral dataset, but fall a little short of expectations on the Mnist dataset where the walkback trick proposed in (Bengio et al. 2013c) allows a GSN with a uni-modal transition operator to fare better than the proposed GSN-NADE. In other words, the results support the hypothesis that a multi-modal NADE transition operator can be helpful, but using the walkback trick to deal with spurious modes appears to be even more helpful. As a possible future work, the authors propose to try and find a way to combine both benefits since walkback is not trivially applicable to the proposed GSN-NADE.\\n\\nThis study presents a new approach to a problem clearly identified in an earlier study, gives a very good summary of the state-of-the-art, clearly identifies key issues to overcome, and honestly presents its own limitations. 
Although the results fall a little short of expectations, they do indeed point to the fact that a multi-modal transition operator can be helpful in GSNs.\\nRegrettably, the experiments do not try to assess the quality of the representations obtained with GSN-NADE in a classification setting and on more complex datasets than Mnist.\"}", "{\"reply\": \"We thank the reviewer for the thoughtful comments.\\n\\nWe believe that we have found a way to speed up sampling of GSN-NADE (especially deep NADE) in order to incorporate the walkback algorithm in the training process. The idea is to avoid the lengthy loop over all the pixels by only resampling subsets at a time (according to a GSN framework!).\\n\\nWe have started experiments with this approach. We will add these results to the final version of the paper.\\n\\nThe main contribution, however, remains that GSNs with a multimodal output distribution can be trained and that they can avoid *modeling* difficulties that can otherwise hurt GSNs with unimodal/factorized output distributions (such as those used in previous papers).\\n\\nWe will also add results on other datasets.\"}" ] }
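
The exchange above hinges on the form of the GSN transition operator's output distribution: factorized (unimodal per pixel) in earlier work versus a NADE-style autoregressive conditional here, whose per-pixel sampling loop is exactly the cost the reply hopes to reduce before applying walkback. The toy sketch below only contrasts the two sampling schemes for one step of the chain; all sizes and parameters are invented and it is not the authors' model.

```python
# Hedged sketch: given the current state x_t, sample x_{t+1} either from a
# factorized Bernoulli (independent per pixel) or pixel by pixel from an
# autoregressive conditional (NADE-style), which can represent joint modes.
import numpy as np

rng = np.random.default_rng(0)
D, H = 16, 8                              # visible and hidden sizes (toy)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

Wenc = 0.1 * rng.normal(size=(H, D))      # h = f(x_t), a learned function of the previous state
Wdec = 0.1 * rng.normal(size=(D, H))      # output weights given h
Wprev = 0.1 * rng.normal(size=(D, D))     # only the strictly lower-triangular part is used

def step_factorized(x):
    h = np.tanh(Wenc @ x)
    p = sigmoid(Wdec @ h)                 # one independent Bernoulli per pixel
    return (rng.random(D) < p).astype(float)

def step_nade(x):
    h = np.tanh(Wenc @ x)
    new = np.zeros(D)
    for i in range(D):                    # sequential sampling under a fixed ordering
        a = Wdec[i] @ h + Wprev[i, :i] @ new[:i]
        new[i] = float(rng.random() < sigmoid(a))
    return new

# Run both chains for a few steps from the same start.
x_f = (rng.random(D) < 0.5).astype(float)
x_n = x_f.copy()
for _ in range(5):
    x_f, x_n = step_factorized(x_f), step_nade(x_n)
print(x_f, x_n)

```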
MMSzYHL_g1V83
Rate-Distortion Auto-Encoders
[ "Luis G. Sanchez Giraldo", "Jose C. Principe" ]
We propose a learning algorithm for auto-encoders based on a rate-distortion objective. Our goal is to minimize the mutual information between the inputs and the outputs of an auto-encoder subject to a fidelity constraint. Minimizing the mutual information acts as a regularization term, whereas the fidelity constraint can be understood as a risk functional in the conventional statistical learning setting. The proposed algorithm uses a recently introduced measure of entropy based on infinitely divisible matrices that avoids the plug-in estimation of densities. Experiments using over-complete bases show that the auto-encoder learns a regularized input-output map without explicit regularization terms or ad hoc constraints such as tied weights.
[ "fidelity constraint", "learning algorithm", "objective", "goal", "mutual information", "inputs", "outputs", "subject", "mutual information acts", "regularization term" ]
submitted, no decision
https://openreview.net/pdf?id=MMSzYHL_g1V83
https://openreview.net/forum?id=MMSzYHL_g1V83
ICLR.cc/2014/conference
2014
{ "note_id": [ "iBE1B8FLXVYTa", "jyk6ZpLAXVZNe", "W06vWEntsKUx4" ], "note_type": [ "review", "review", "review" ], "note_created": [ 1391459640000, 1391857440000, 1391498460000 ], "note_signatures": [ [ "anonymous reviewer b49b" ], [ "anonymous reviewer a728" ], [ "anonymous reviewer f2d6" ] ], "structured_content_str": [ "{\"title\": \"review of Rate-Distortion Auto-Encoders\", \"review\": \"This paper proposes to regularize auto-encoders by minimizing the mutual information between input and output. The minimization of mutual information is based on an alternative definition of entropy (Sanchez et al. 2013). Although auto-encoders have been around for more than 20 years, the introduction of deep learning (Hinton et al. 2006,Bengio et al. 2006) has renewed the interest in these models and their regularization schemes (Vincent et al. 2008), as they can be stacked to achieve state-of-the-art performance.\\n\\nA first contribution establishes a link between the proposed model (rate-distortion auto-encoders) and PCA in the case of a Gaussian input variable but does not discuss the already established proof that traditional linear auto-encoders are equivalent to PCA (Baldi and Hornik;1989).\\n\\nThe authors then derive a gradient training procedure for the rate-distortion auto-encoder and report the result of two experiments in dimension 2, namely they compare the input to the reconstruction :\\n1 - In the case where the input is a specific Gaussian variable.\\n2 - In the case where the input is a specific mixture of 3 Gaussian variables.\\n\\nfinally the authors conclude that the reconstruction approximately fits the input distribution.\\n\\nAlthough the use of rate distortion theory to regularize an auto-encoder is new, the paper suffers from several issues. First, the authors do not define the problem which they are trying to address. They present a cost function and their goal is to minimize it. To what end ? Accordingly, it is unclear what the authors are trying to prove in their experiments: a regularization property ? Since the end goad is not clearly defined, why is regularization important at all ? Additionally, the experiments are very insufficient as they only consider two very simple artificial datasets: the first with a single predefined Gaussian, and the second with three Gaussians. In these experiments, the proposed method is not compared to any other model or baseline.\", \"pros\": \"\", \"cons\": [\"Goal is not defined.\", \"experiments are in a very low dimensional space (dim=2) which is not very relevant for auto-encoders.\", \"experiments do not compare the proposed model to a baseline (traditional auto-encoder) or to other forms of regularized auto-encoders (e.g. denoising auto-encoders or contracting auto-encoders).\", \"experiments do not report any quantitative measure of performance (e.g. log-likelihood or classification accuracy).\"]}", "{\"title\": \"review of Rate-Distortion Auto-Encoders\", \"review\": \"* Brief summary of paper:\\nThe paper proposes a novel kind of regularization for learning autoencoders that is rooted in rate-distortion theory.\\nThe regularization term is a kernel-based estimator of entropy (of the reconstructed data) proposed by the authors at last year's ICLR.\\nExperiments on 2D toy data show that it works as expected.\\n\\n* Assessment:\\n\\nThe rate-distortion approach to autoencoders is interesting, and I beleive novel. 
I would however refrain from stating, as written in the paper, that it allows learning autoencoders without an explicit regularization terms: the conditional entropy clearly plays the role of a data-dependent regularization term (similar to most alternative approaches for learning overcomplete autoencders: they also don't define an explicit penalty on the parameters). \\n\\nThe kernel-based entropy estimator seems however computationally very expensive, since it requires full eigendecomposition of Gram matrices. \\n\\nThe main weakness of the paper is the very limited experimental evaluation of the method (2D toys), and the lack of comparison with any other regularized autoencoder approach, not even a discussion of what this new mathematically sophisticated and computationally heavy approach might offer as benefits. Similarly, the authors only use their own gram-matrix-based entropy estimator: a brief discussion of properly referenced alternative, more classical, nonparametric entropy estimators would have been in order (and ideally with experimental comparison). \\n\\nThe paper is mostly well written. There is however a confusing proabably unintended notational shift happening after Eq. 14. In eq 14 and in ghe previous sub-section you use S_alpha, and in what follows you use H(K) which has nowhere been formally defined or related to S_alpha. Also where does the N arise from in eq 16. Please clarify.\\n\\nLastly I have a question/remark: conditional entropy H(X|X^) could also be rewritten as \\nH(X|X^) = H(X) + H(X^|X) - H(X^)\\nNow since you are concerned with a deterministic autoencoder, X^ is a deterministic function of X, so it would seem that H(X^ | X) is a constant.\\nSo it seems you might as well penalize only H(X^) rathe than trying to estimate and penalize the actual conditional entropy. Does this reasoning seem valid? \\n\\n\\n* Pros and cons:\", \"pros\": [\"original approach to autoencoder regularization\"], \"cons\": [\"computationally very heavy approach\", \"very limited experimental evaluation (toy 2d data)\", \"lack of comparison (neither in discussion nor experimental) with anything related\"]}", "{\"title\": \"review of Rate-Distortion Auto-Encoders\", \"review\": \"This paper proposes a new criterion to train auto-encoders: minimizing the\\nmutual information (MI) between the input and output distributions, under the\\nconstraint that the output is close enough to the output. This constraint can\\nbe seen as the 'risk' we want to keep small, while the minimization of the MI\\nadds regularization to prevent overfitting (i.e. learning an identity mapping).\\n\\nThis seems to be an interesting direction to investigate, however I find the\", \"submitted_paper_to_fall_short_in_two_important_areas\": \"1. The motivation and intuition behind this algorithm are not very clear. At\\nthe end of section 2, we do see that minimizing the MI 'can have the effect of\\nlowering the entropy of the output variable and thus, we can think of the\\nmapping f as a contraction' but that does not really explain what kind of\\nproperties we can expect when minimizing eq. 2. For instance, if we wanted to\\nlower this entropy, why not just do that directly? Another point that confuses\\nme is that the problem is initially stated as a constrained optimization one,\\nbut if I understand correctly, the actual algorithm is performing gradient\\ndescent on the Lagrangian (eq. 
8): this Lagrangian is the sum of two terms, one\\nbeing the risk to minimize, and one being the regularization, and the parameter\\nmu gives the trade-off between the two. Now my question is: why not start\\ndirectly from this criterion (which I personally find more intuitive) instead\\nof the constrained optimization formulation (whose added value is not obvious\\nto me)?\\n\\n2. The experiments are extremely limited, being run only on two 2D toy datasets \\nand without any comparison to other popular auto-encoder algorithms. I also\\nfeel they are not enough to give additional intuition on the algorithm's\\nbehavior. Here are some examples of topics which could have been investigated:\\n- How does the output change with mu?\\n- How does the output change with different kernels used in the entropy \\n estimations?\\n- How does the algorithm behave on real data?\\n- How does the algorithm behave as dimension increases? (when the data do not \\n lie on a low-dimensional manifold, local kernel methods tend to fail)\\n- How does the algorithm behave compared to the typical auto-encoders mentioned \\n in the introduction? (the goal would not necessarily be to show it works\\n better, but to gain understanding of the differences between algorithms)\\n\\nOverall, an interesting idea, but one that would deserve a more in-depth \\ntreatment (note that novelty seems limited, since the starting point is a\\ncriterion already proposed for manifold learning, and the kernel-based entropy\\nestimation comes from a previous paper by the same authors).\", \"a_few_more_small_points\": \"- There is a non negligible amount of typos, it could use a proofread pass.\\n- Notation inconsistencies (or not well explained): D instead of d in intro\\n (not defined by the way), using both hat{x} and \\tilde{x} to denote the\\n reconstructed data (and hat{x} is also used in the intro to denote the noisy\\n input in the denoising auto-encoder), multiple P's in eq. 1, not clear if we\\n are working in a continuous (eq.1) or discrete (eq. 3) space, h not defined in\\n 1st paragraph of 2.1.\\n- The manifold learning algorithm described in Section 1 is not very clear. Eq.1\\n seems to be only part of the cost, and it is not clear if it is minimized\\n or maximized. Unfortunately I did not have time to read the corresponding\\n reference, but I feel like it may deserver a more thorough description, given\\n that it seems to be key to the algorithm presented here.\\n- Greek letters cannot be seen on some PDF readers (like an iPad), although it\\n works under Windows.\"}" ] }
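The reviews above refer to a Gram-matrix (matrix-based) entropy estimator whose cost comes from a full eigendecomposition. Below is a minimal NumPy sketch of an estimator of that general kind; the kernel, the value of alpha and the normalization are assumptions here and may not match the exact definitions of S_alpha / H(K) in the paper.

```python
# Minimal sketch of a matrix-based Renyi-type entropy estimate from the
# eigenvalues of a normalized Gram matrix; O(N^3) because of the full
# eigendecomposition the review points out.  Kernel width and alpha are
# illustrative choices, not the paper's.
import numpy as np

def gram_entropy(X, sigma=1.0, alpha=1.01):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / (2.0 * sigma ** 2))        # Gaussian Gram matrix, K_ii = 1
    A = K / np.trace(K)                         # normalize so that trace(A) = 1
    lam = np.clip(np.linalg.eigvalsh(A), 0.0, None)
    return np.log2((lam ** alpha).sum()) / (1.0 - alpha)

X = np.random.randn(200, 2)
print(gram_entropy(X))
```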
z6PozRtCowzLe
Modeling correlations in spontaneous activity of visual cortex with centered Gaussian-binary deep Boltzmann machines
[ "nan wang", "Laurenz Wiskott", "Dirk Jancke" ]
Spontaneous cortical activity -- the ongoing cortical activities in absence of sensory input -- are considered to play a vital role in many aspects of both normal brain functions and mental dysfunctions. We present a centered Gaussian-binary deep Boltzmann machine (GDBM) for modeling the activity in visual cortex and relate the random sampling in DBMs to the spontaneous cortical activity. After training on natural image patches, the proposed model is able to learn the filters similar to the receptive fields of simple cells in V1. Furthermore, we show that the samples collected from random sampling in the centered GDBMs encompass similar activity patterns as found in the spontaneous cortical activity of the visual cortex. Specifically, filters having the same orientation preference tend to be active together during random sampling. Our work demonstrates the homeostasis learned by the centered GDBM and its potential for modeling visual cortical activity. Besides, the results support the hypothesis that the homeostatic mechanism exists in the cortex.
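As background for the model named in the title, a common way to write a two-hidden-layer Gaussian-binary DBM energy with centering offsets is sketched below (unit visible variances for brevity). This is a generic parameterization drawn from the centering-trick literature, not a quotation of the paper's Algorithm 1, so the exact form there may differ; lambda, mu and nu denote the offsets for the visible and the two hidden layers.

```latex
% Generic centered Gaussian-binary DBM energy (assumed form, unit variances):
E(x,y,z) = \tfrac{1}{2}\lVert x-a\rVert^{2}
           -(x-\lambda)^{\top}W(y-\mu)
           -(y-\mu)^{\top}V(z-\nu)
           -b^{\top}(y-\mu)-c^{\top}(z-\nu)

% Gibbs conditionals implied by this energy (\sigma = logistic function):
p(y_{j}=1\mid x,z)=\sigma\big((x-\lambda)^{\top}W_{\cdot j}+V_{j\cdot}(z-\nu)+b_{j}\big),
\qquad
p(x\mid y)=\mathcal{N}\big(a+W(y-\mu),\,I\big)
```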
[ "visual cortex", "centered", "random", "correlations", "spontaneous activity", "spontaneous cortical activity", "deep boltzmann machines", "ongoing cortical activities", "absence" ]
submitted, no decision
https://openreview.net/pdf?id=z6PozRtCowzLe
https://openreview.net/forum?id=z6PozRtCowzLe
ICLR.cc/2014/conference
2014
{ "note_id": [ "UXdhX88QhJXqm", "9KMB9CLsoTqnD", "11cPC2SrPw-0t", "xx3F9LbpiTMOy", "CpiP7SSu0QN4Z", "qqASqbr5SEqQm", "ArnEA1TfXQrHh", "3L2EL04lEjLwS", "-unS-OOvqFPT3" ], "note_type": [ "review", "review", "comment", "review", "comment", "review", "review", "comment", "review" ], "note_created": [ 1391114880000, 1394183340000, 1392672720000, 1391970120000, 1392672840000, 1391823840000, 1392671280000, 1392671460000, 1392779640000 ], "note_signatures": [ [ "anonymous reviewer 9287" ], [ "anonymous reviewer 9287" ], [ "nan wang" ], [ "anonymous reviewer 299e" ], [ "nan wang" ], [ "anonymous reviewer 8f00" ], [ "nan wang" ], [ "nan wang" ], [ "nan wang" ] ], "structured_content_str": [ "{\"title\": \"review of Modeling correlations in spontaneous activity of visual cortex with centered Gaussian-binary deep Boltzmann machines\", \"review\": \"The authors use a Gaussian-binary deep Boltzmann machine (GDBM) to model aspects of visual cortex. Training of the GDBM involves the centering trick of [8]. The comparison to visual cortex is in terms of learned receptive field properties, and by reproducing experimentally observed properties of activity in V1 as reported by [1]. In particular, these findings relate spontaneous activity in the absence of external stimulation to evoked activity.\\n\\nOn the positive side, I think the issue of the nature of spontaneous activity is interesting, and the authors put effort into reproducing the experimental findings of [1]. On the negative side, the significance of the main contributions seems lacking to me, and the authors need to motivate better why their work is important or relevant. Quality and clarity need to be improved in several points as well.\", \"details\": \"To expand on the above, I'll discuss three main points: 1) The centered GDBM. 2) Reproducing aspects of visual cortex. 3) The connection to homeostasis.\\n\\n1) I wouldn't see this part as a major contribution of the paper. The centering trick was originally applied to a fully binary DBM. Applying the same trick to the GDBM (which only differs in having a Gaussian visible layer) seems a very natural thing to do. Moreover, there is no further analysis on the efficacy of the centering trick. The authors say that centering makes the GDBM easy to train compared to [15], but they don't actually evaluate the performance of the GDBM (other than comparing it to biological phenomenology), and only apply it to image patches. In [15], the GDBM was trained on images of faces and evaluated in terms of filling in missing parts of the image. Hence, it is not clear whether centering makes the more complicated training of [15] obsolete, as suggested by the authors.\\n\\nAlso, when it comes to clarity: given that the authors emphasize the importance of centering, they need to better explain what it is, why it works, and how the centering parameters are computed (the latter is only described in the algorithm float). These things are unclear to the reader unless they look at the reference.\\n\\n\\n2) Reproducing aspects of visual cortex is the main contribution of the paper. First, the learned receptive fields are suggested to qualitatively resemble those in V1 and V2. Learning V1-like Gabor filters is of course a quite common result nowadays. I don't think it's possible to conclude that the model captures receptive field properties V2 well, just from looking at Figure 1b.\\n\\nHence, the main contribution here is the analysis of activity. I think the results are fine and somewhat interesting, though not surprising. 
What is missing is better motivating/explaining why these results are interesting/relevant. Sure, the GDBM reproduces certain experimental findings. But have we learned something new about the brain? Is the GDBM particularly well suited as a general model of visual cortex? Does the model make predictions? What about alternative, perhaps simpler models that could have been used instead? Etc.\\n\\nAlso, are there related models or theoretical approaches to spontaneous activity? For example, this comes to mind:\\n\\nBerkes, P., Orb\\u00e1n, G., Lengyel, M., & Fiser, J. (2011). Spontaneous cortical activity reveals hallmarks of an optimal internal model of the environment. Science (New York, N.Y.), 331(6013), 83\\u20137. doi:10.1126/science.1195870\", \"as_for_clarity\": \"when collecting the spontaneous frames, presumably you are sampling from the full model distribution P(x,y,z) via Gibbs-sampling, with all layers unclamped, correct? Because writing that samples were collected from P(y|x,z) seems to suggest that you clamp to a specific x and z and collect multiple samples from the same conditional (not just as a step during Gibbs sampling).\\n\\n\\n3) Lastly, the connection between homeostasis and the author's model is unclear. The authors mention homeostasis in the abstract, introduction and discussion, but do not explain homeostasis or how their results relate to it specifically. This needs to be explained better, especially to this audience. Personally, I know the literature somewhat (e.g. [4]), but am nevertheless unclear about what exactly the authors intend to say. \\nSentences such as 'we are able to make the model learn the homeostasis from natural image patches' are unclear.\\n\\nWhen I first read the paper, I thought the authors were referring to the centering trick as something that can be understood as a homeostatic mechanism (i.e., a neuronal mechanism that maintains a certain average firing rate, see [4], [6,7]), which would make sense to me. However, on further reading it seems to me that the authors refer to the fact that spontaneous activity resembles evoked activity as an aspect of homeostasis? Why? For comparison, [6,7] clamped the input to empty images (to simulate blindness), and had an active homeostatic mechanism at play that led to spontaneous activity resembling evoked activity. In the authors' paper, spontaneous activity resembles evoked activity simply because the former is taken to be sampled from the unconditioned model distribution..? I'm not sure where homeostasis, i.e. being subject to an active self-regularity mechanism, comes into play at this point.\\n\\n(As an aside, I don't think the Friston reference [3] clears up what the authors' notion of homeostasis is, in particular as Friston talks about states of agents in their environments. Frankly, Friston's theoretical claims are often unclear to me, to say the least, in particular when it comes to mixing together internal models in the brain, and methods that should apply to the latter such as variational inference, and external probabilistic descriptions of agents and environments. Either way, if the authors would like to make a connection to Friston's theories in particular, then that connection should be explained better. Generative/predictive brain models and homeostatic mechanisms per se are not exclusive to Friston's theory.)\", \"further_comments\": [\"Abstract, 'Spontaneous cortical activity [...] 
are' -> is\", \"2.1 first para, 'consisted' -> consisting\", \"Not sure why x,y,z are sometimes capitalized, sometimes not.\", \"3.1 'sparse DBNs show worse match to the biological findings as reported in [1]. A quantitative comparison between centered GDBMs and sparse DBNs is still open for future studies.' Where was it shown to be a worse match then?\", \"Figure 2: shouldn't the angles (figure titles) go from 0 to 180?\", \"In conclusion, I think this work could potentially be interesting, but in its current form quality and clarity are somewhat lacking and significance is not quite clear.\"]}", "{\"review\": \"I have read the authors' replies and skimmed the revised paper. The authors put some effort into improving and streamlining the paper (though the writing has issues, see below). Some main issues such as lack of significance remain.\\n\\nLike the other reviewers, I am not convinced that sampling from the model freely, without conditioning the visibles to some value, captures the experimental setup of [1]. That this is necessary to reproduce the relevant aspects of spontaneous activity appears to speak against the capability of the authors' model to capture the situation. Normatively, I don't see why spontaneous activity under lack of input should resemble the prior/model distribution, without further assumptions, such as active homeostatic mechanisms. Note that in the work of Berkes et al. that I mentioned earlier, they get around this by positing an additional global contrast variable that effectively renders the feature variables unobserved in darkness (they only mention this important aspect in the supporting material!).\", \"in_this_context\": \"'the authors reported to make the same findings during presenting a uniform gray screen to the cats as having the cats in a darkened room.'\\n\\n..but presumably the GDBM would *not* reproduce the results conditioned on a gray image as input?\\n\\nFinally, the revision of the paper has unfortunately introduced a lot of grammar issues as well as some typos, mostly wrong singular/plurals and missing/inappropriate articles (examples: 'expectated'; 'a brain learn to synthesize'; l23 of algorithm 1, y_model should be y_data). The paper should be proofread properly in case of its acceptance.\"}", "{\"reply\": \"Thanks for your helpful comments.\\n1) \\n1.1) \\u201cReceptive fields that resemble V1 and V2 (only a qualitative statement is made)\\u201d\\n\\nRemoved the statement suggesting that the model captures receptive field properties in V2. Please refer to the detailed discussion in the response to common comments 2).\\n\\n1.2) \\u201cWhy are any of these results unexpected? \\u2026 why is this model unique to the finding, or what does this teach me about the brain, or about the model?\\u201d\\n\\nClarified the contribution and conclusion of our work. See more details in the responses to common comments A). \\n\\nWe agree that a GDBM as a probabilistic model is expected to capture the statistical relationships in natural scene statistics. However, there two points we want to make here. \\n\\ni) The patterns found in the spontaneous visual cortical activity show similar properties as the prior distribution of the trained GDBMs, which has been mainly demonstrated in our paper. This suggests that a brain might learn to generate expectations of internal states as a generative model. 
\\n\\nii) By comparing different models in modeling the spontaneous activity and different sampling procedures (, which is added in section 4 of the revised version), we suggest that the observed spontaneous activity patterns needs both bottom-up and top-down interactions. To be specific, we found that neither GRBMs nor DBNs can match the biological experiments as well as GDBMs. Besides, by clamping either the visible units or the second-layer hidden units to the centering offsets, we could not observe any activity patterns in the spontaneous frames even in the GDBMs. See more details in the responses to common comments E).\\n\\n2) \\u201cAnother concern, is that the main result in Figure 3 is not very convincing.\\u201d\\n\\nClarified the purpose we plot Figure 4 (used to be Figure 3 in previous version) is to demonstrate that the spontaneous frames of centered GDBMs encompass similar properties as the spontaneous activity found in [1]. We wrote in section 3.2.3 of the revised version that \\u201cTo establish the similarity between the single-condition maps and the spontaneous frames, we calculated the spatial correlation coefficients between them.\\u201d\\n\\n2.1) \\u201cIs there any statistical test to show the difference between the distributions in 3a? Why is the uncorrelated noise distribution the proper control?\\u201d\\n\\nIn Figure 4a, we have chosen the random generated activity patterns in order to show the correlations between the spontaneous frames and the orientation maps are stronger than expected by chance. We used t-test for checking the significance of the correlations, i.e. the threshold of significant correlation was chosen to be 0.182 (P<0.01) as described in the text. It can be seen in Figure 4a that the correlation coefficients between the spontaneous frames and the random generated patterns rarely exceed this threshold.\\n\\nWe added the description of statistical details of the distributions in Figure 4a and wrote '... the max correlation coefficient is 0.50+/-0.06, whereas the correlation coefficients between the spontaneous frames and the random generated pattern seldom reach 0.2'. \\n\\n2.2) \\u201cFor Figure 3b, the max (solid blue line) looks pretty close to the relative occurrence line (solid red line), while the neural data seems to have larger divergence in these two measures. What is the important characteristic of this curve? Is there anything surprising about it? It peaks at 0 and 90, just like the neural data, but what makes this model unique to capturing this phenomenon?\\u201d\\n\\nAs for Figure 4b, we followed the convention in the Figure 3b in [1]. As in [1], the purpose is to show that i) states corresponding to cardinal orientations emerged with larger correlation coefficients than oblique ones, ii) the former emerged more often than the latter. As for the exact shape of the curves, it was highly variable as reported in [1]. Therefore, we argue that the spontaneous frames from the GDBMs show the same properties as the spontaneous activity in early visual cortex.\\n\\nWe added more interpretations of Figure 4b and wrote \\u201cThe results match those from the cats; visual cortex in [1] fairly well, i.e. the spontaneous frames corresponding to the cardinal orientations emerged more often than those corresponding to the oblique ones and the former also have larger correlation coefficients.\\u201d\\n\\n3) \\n3.1) \\u201cI find it most interesting that certain models show the observed behavior, while others do not (footnote 6). 
In fact, I would have preferred to see the entire paper revolve around this finding.\\u201d\\n\\nWe extended the footnote #6 into a new paragraphs in the discussion part to describe briefly the results from different models. Please refer to the common responses E). However, as for the detailed comparison of the results from different models, we find that they do not fit to the logic flow of current version and might exceed the page limits. Therefore we decide not to include them in the current ICLR version. But thanks for your tips. We will consider to include more details in the future.\\n\\n3.2) \\u201cAlso, your title seems to indicate that a \\u2018centered Gaussian-binary DBM\\u2019 is necessary to fit the findings.\\u201d\\n\\nBy the current title, we want to emphasize two points, i) the centering is useful for training the proposed model, ii) the GDBM is a meaningful model for the spontaneous activity in early cortical visual areas. After revising the manuscript, we think the title fits the paper.\\n\\n4) \\u201cI am very confused about your use of the term \\u2018homeostatic\\u2019 '.\\n\\nRemoved the term \\u201chomeostasis\\u201d to avoid misunderstanding. See details in the response to the common comments D).\\n\\n5) \\u201cWhy, in your model, isn't the spontaneous activity condition a situation where the input is black (or all zeros)? Isn't this the 'stimulus' condition that resembles the experiment? How should we model closing the eyes, going into a pitch black room, or severing the input from retina to LGN (or LGN to V1)?\\u201d\\n\\nClarified the reason why we do not clamp the visible units to zero in the discussion part. We argue that it is not necessary to constrain the visible units to be inactive during modeling the spontaneous activity. Briefly, Spontaneous activity is the ongoing activity in the absence of intentional sensory input. Biologically, it is not necessary to cut off the bottom-up stimulus input [22] as in [6, 7]. See details in the response to the common comments C).\\n\\n6) \\u201cWhy did you use mean-field to approximate the response to the stimuli?\\u201d\\n\\nClarified we used mean-field estimation to estimate the response in order to be consistent with the way of estimating data-dependent term during learning. We have tried to use Gibb sampling in both cases, but we didn't observe any significant differences in our results. \\n\\n7) \\u201cI would prefer less detail on already published models and training algorithms, e.g. section 2.1 and Algorithm 1. and more detail on the experiment that you ran (sampling procedure, and explaining better the background of Figure 3b).\\u201d\\n\\n7.1) Although GDBM and centering have been published before, to our knowledge, it is first time to combine them together. Nevertheless, the formulas varies from the previous publication at several points, therefore we think it is reasonable and necessary to include the details of both the model and the algorithm. \\n\\n7.2) Added more details of the samplings procedure in the section 3.2.1 and 3.2.2. An illustration of the sampling procedures is added in Figure 1.\\n\\n7.3) Added more interpretations of Figure 4. Please refer to 3) for details.\"}", "{\"title\": \"review of Modeling correlations in spontaneous activity of visual cortex with centered Gaussian-binary deep Boltzmann machines\", \"review\": \"This paper revisits on a simulated model learned from data, previous work by Kenet et al. 
2003 in the journal Nature, on the statistics of spontaneous neural activities in the visual cortex, in particular, their correlations to orientation maps. The authors first learn to model image patches in an unsupervised fashion. In a second step, the spontaneous hidden activities produced by the learned model are recorded and compared to the ones observed by Kenet et al. 2003.\\n\\nThe authors build on the centered deep Boltzmann machines, adapting them to real-valued Gaussian-like inputs. An important aspect of the solution learned by a DBM is the amount of sparsity. Authors do not use an explicit sparse penalty, but biases are shown in Algorithm 1 to be initialized to -4, suggesting that sparsity is still important.\\n\\nActivation patterns produced by the proposed model are shown to be similar to measurements by Kenet et al. 2003 on real neural activity. The use of an unconstrained Gibbs sampler to compute spontaneous activities is questionable, as to my understanding, input should be deactivated. A more powerful result would be to show similar activation patterns when the input units are explicitly constrained to be inactive. Conditional DBMs might be useful in this regard.\"}", "{\"reply\": \"Thanks for your helpful comments.\\n\\n1) \\u201cAn important aspect of the solution learned by a DBM is the amount of sparsity. Authors do not use an explicit sparse penalty, but biases are shown in Algorithm 1 to be initialized to -4, suggesting that sparsity is still important.\\u201d\\n\\nWe agree that initializing the biases to -4 potentially encourage the sparse representation in the hidden layers. However, the claim that the learned representation is sparse needs further experiment supports. \\n\\n2) \\u201cThe use of an unconstrained Gibbs sampler to compute spontaneous activities is questionable, as to my understanding, input should be deactivated. A more powerful result would be to show similar activation patterns when the input units are explicitly constrained to be inactive.\\u201d\\n\\nWe argue that it is not necessary to constrain the visible units to be inactive during modeling the spontaneous activity. Briefly, Spontaneous activity is the ongoing activity in the absence of intentional sensory input. Biologically, it is not necessary to cut off the bottom-up stimulus input [22] as in [6, 7]. See details in the response to the common comments C).\\n\\n3) \\u201cConditional DBMs might be useful in this regard.\\u201d\\n\\nSorry for my limited knowledge. Could you please send me a link to the work on \\u201cconditional DBMs\\u201d?\"}", "{\"title\": \"review of Modeling correlations in spontaneous activity of visual cortex with centered Gaussian-binary deep Boltzmann machines\", \"review\": \"In this paper the authors train a DBM model on natural image patches, conduct a pseudo-physiology experiment on the model, and show that the spontaneous correlation structure of the model resembles that in a V1 experiment.\", \"the_claimed_results_of_the_paper_are\": \"+ Receptive fields that resemble V1 and V2 (only a qualitative statement is made)\\n+ Correlation structure that resembles spontaneous activity\\n\\nMy biggest concern relates to the question, 'Why are any of these results unexpected?' The V1 and V2 receptive field results have been previously reported (ad nauseam for V1), and are not the focus of the paper, so I will ignore these. The correlation structure also appears to be a result entirely expected by the model construction. 
Isn't it the case that the DBM should be capturing statistical relationships in natural images and that co-linearity (and feature non-orthogonality, or high inner product) is a well known property of natural scene statistics? Taking the results at face value, why is this model unique to the finding, or what does this teach me about the brain, or about the model?\\n\\nAnother concern, is that the main result in Figure 3 is not very convincing. Is there any statistical test to show the difference between the distributions in 3a? Why is the uncorrelated noise distribution the proper control? For Figure 3b, the max (solid blue line) looks pretty close to the relative occurrence line (solid red line), while the neural data seems to have larger divergence in these two measures. What is the important characteristic of this curve? Is there anything surprising about it? It peaks at 0 and 90, just like the neural data, but what makes this model unique to capturing this phenomena?\\n\\nSome of the most interesting statements in the paper are given in the footnotes or stated as ongoing work. For example, I find it most interesting that certain models show the observed behavior, while others do not (footnote 6). In fact, I would have preferred to see the entire paper revolve around this finding. Far too many papers of this type propose one and only one model to account for the data. The advancement of this type of work requires the comparison of models and the invalidation of models. Also, your title seems to indicate that a 'centered Gaussian-binary DBM' is necessary to fit the findings.\\n\\nI am very confused about your use of the term 'homeostatic'. You seem to refer to homeostatic as the condition where you sample from your model with the input unclamped. However, in biology homeostasis is the regulation of variables (or machinery) such that some state is held fixed. I thought that homeostasis in your model would be more related to learning, i.e. weight updates.\\n\\nWhy, in your model, isn't the spontaneous activity condition a situation where the input is black (or all zeros)? Isn't this the 'stimulus' condition that resembles the experiment? How should we model closing the eyes, going into a pitch black room, or severing the input from retina to LGN (or LGN to V1)?\\n\\nWhy did you use mean-field to approximate the response to the stimuli? This means for one condition you run mean-field and the other you run sampling. It seems that this just introduces discrepancy between the conditions.\\n\\nA note on exposition, I would prefer less detail on already published models and training algorithms, e.g. section 2.1 and Algorithm 1. and more detail on the experiment that you ran (sampling procedure, and explaining better the background of Figure 3b).\"}", "{\"review\": \"=====Responses to the common comments from the Reviewers=====\\nA) Main contribution/importance of this work\", \"clarified_in_the_conclusion_that_the_contribution_of_this_work_has_two_folds\": \"i) We applied centering trick to GDBMs and show empirically that centered GDBMs can be properly trained without layer-wise pretraining.\\n\\nii) We mainly demonstrate that the GDBM is a meaningful model approach for basic receptive field properties and the emergence of spontaneous activity patterns in early cortical visual areas. We reproduced many aspects of the spontaneous activity with proposed model. Our results support hypotheses assuming that the brain learns to generate expectations of internal states as a generative model. 
The patterns of the spontaneous activity serve as such expectations. Furthermore, we gave some further insights into spontaneous activity patterns found in [1]. Please refer to E) for more details. \\n \\nB) The statement of V2-like features in trained GDBMs\\nRemoved the statement suggesting that the model captures receptive field properties in V2. \\n\\nHowever, we want to point out that the hidden units in the second hidden layer have the tendency to have strong connections to similar features in the first layer. We demonstrate that both hidden layers of the centered GDBMs are well trained and the centered GDBMs do not suffer from the problem reported in [9] in which the higher layers were not able to learn meaningful features.\\n\\nC) The sampling procedure used to generating spontaneous frames\\nClarified sampling methods used in both generating orientation maps and collecting spontaneous frames. An illustration of the sampling procedures used in this work has been added in Figure 1. We have included more details of the sampling procedure in section 3.2.1 and 3.2.2.\\n\\nClarified the reason why we do not clamp the visible units to zeroes, which is equivalent to clamp the units to the centering offsets in centered GDBMs, in the Discussion part. Briefly, there are two reasons. \\n\\ni) Spontaneous activity is the ongoing activity in the absence of intentional sensory input. Biologically, it is not necessary to cut off the bottom-up stimulus input [22] as in [6, 7], where the authors used DBMs to model the cortical activity with visual impairment or blindness. In [1], the authors reported to make the same findings during presenting a uniform gray screen to the cats as having the cats in a darkened room. \\n\\nii) From the perspective of modeling, we want to estimate samples from the model\\u2019s prior distribution over the first-hidden-layer units P(Y) by collecting from Gibbs sampling on the P(X, Y, Z). This sampling procedure bring us the samples that can be considered as the model\\u2019s expectation of Y without any bottom-up or top-down knowledge from X and Z. \\nOur results suggest that both the bottom-up and the top-down inferences are critical in generating the presented spontaneous activity patterns. By setting the visible units to zeroes, a two-layer GDBM is equivalent to a restricted Boltzmann machine and show worse match to the biological findings in [1]. This is also the prediction we have included in the revised version, i.e. the observed spontaneous is a result of the interactions between the incoming stimulation and feedback from higher areas. Please refer to E) for more details. \\n \\nD) The misleading usage of \\u201chomeostasis\\u201d\\nRemoved the term \\u201chomeostasis\\u201d to avoid misunderstanding. \\n\\nWe agree that the term 'homeostasis' as used could be misleading as the term is nowadays often used in structural plasticity approaches. By 'homeostasis', we mean in a very general sense that the stable status of the internal milieu in the brain as suggested in the free-energy principle. To our understanding, Friston suggests two different environments, i.e. external and internal milieu, and the biological agents are supposed to keep the homeostasis in both cases. Please refer to the following reference for more details:\\nKarl Friston. The free-energy principle: a unified brain theory? Nature Reviews Neuroscience, 1-12, 2010. \\nIn current DBM frameworks, we lack the 'action' to interact with the external environment. 
Still, to learn generating a proper expectation of internal states appears a possible way to keep homeostasis of the internal milieu in the face of constantly changing sensory inputs. In contrast, [6, 7] refer 'homeostasis' as to homeostatic plasticity (see above), i.e. neurons keep their firing rate at a certain level by adapting its bias term during inference. This can be considered as another alternative to keep homeostasis in the theory of free energy principle. Nevertheless, it is an interesting direction to point to the connection between GDBM and homeostatic plasticity in the context of the, admittedly very general, free-energy principle.\\n\\nE) Comparisons between different models for modeling spontaneous activity\\nAdded descriptions of the experiment results with different models used on trial together with the further interpretations. \\n\\nIn our experiments, we compared different models (including GDBMs, GRBMs and DBNs) in modeling the spontaneous activity and found that only GDBMs can match the biological experiments faithfully. The maximum correlation coefficient between spontaneous frames and the orientation maps was 0.50+/-0.06 in GDBMs. In comparison, other models seldom reached 0.2. Furthermore, less than 2% spontaneous frames were significantly in other models, compared with 18+/-9% in the case of GDBMs. \\n\\nA possible explanation is that GDBMs use both top-down and bottom-up interactions during inference. In comparison, GRBMs and DBN use either the bottom-up or the top-down inference, in which the samples collected from the model distribution do not show similar activity patterns as found in the spontaneous cortical activity. Besides, by clamping either the visible units or the second-layer hidden units to the centering offsets, we did not observe the reported activity patterns in the spontaneous frames from GDBMs. These results suggest that the spontaneous activity in visual cortex is a result of interaction between the incoming stimulation and feedback from higher areas.\"}", "{\"reply\": \"Thanks for your helpful comments.\\n1) \\u201cThe centered GDBM.\\u201d\\n\\nClarified applying centering to Gaussian DBM is an extension of the previous work in [8]. The point we want to make here is that GDBM with centering can be trained without the pre-training procedure as described in [3]. In comparison, we followed the same setting in [3] with centered GDBM. The reconstruction error here is 41.6 +/- 0.40 compared to the results of about 40 in [3]. We agree that the minor differences are not sufficient to claim the advantages of centered GDBMs over the non-centered version. (This is also why we did not include this result in the previous version.) Importantly however, as shown in [8], centering leads to improved conditions. In our paper, we show that hidden units in both layers of the centered GDBM can learn meaningful features. Thus, our present results also show empirically that the centering helps to overcome the commonly observed difficulties during training of GDBMs. \\n \\nWe have added more details about the centering in the revised version. Because the main focus of this paper is to illustrate the capacity of GDBM in modeling spontaneous cortical activity, we did not include further analyses of the centering specifically for GDBMs. 
\\n\\n2) \\u201cReproducing aspects of visual cortex\\u201d\\n2.1) \\u201cI don't think it's possible to conclude that the model captures receptive field properties V2 well, just from looking at Figure 1b.\\u201d\\n\\nDone - We removed the statement suggesting that the model captures receptive field properties in V2. See more details in the responses to the common comments B). \\n\\n2.2) \\u201cWhat is missing is better motivating/explaining why these results are interesting/relevant. \\u2026. have we learned something new about the brain? Is the GDBM particularly well suited as a general model of visual cortex? Does the model make predictions? What about alternative, perhaps simpler models that could have been used instead? Etc. Also, are there related models or theoretical approaches to spontaneous activity?\\u201d\\n\\nWe mainly demonstrate that the GDBM is a meaningful model approach for basic receptive field properties and the emergence of spontaneous activity patterns in early cortical visual areas. Compared to other models for modeling spontaneous cortical activity, GDBMs (or DBMs in general) are not limited to simple, low-dimensional, non-hierarchical variables [7], but extend to generative, hierarchical-structured models with an unsupervised learning fashion.\\n\\nMoreover, by reproducing the findings in [1] with centered GDBMs, we suggest that i) the spontaneous activity in early visual cortex are the result of interactions between sensory inputs and feedbacks from higher areas, ii) thus, early visual areas are sufficient to generate the observed spontaneous activity patterns. \\n\\nTogether with the discussion of the failures to model the cortical activity with GRBMs and DBN (see details in the responses to the common comments E)), we have a new paragraph in the discussion to describe the interpretations of our results.\\n\\n2.3) \\u201cwhen collecting the spontaneous frames, presumably you are sampling from the full model distribution P(x,y,z) via Gibbs-sampling, with all layers unclamped, correct?\\u201d\\n\\nClarified - During collecting the spontaneous frames, we indeed ran sampling from the full model distribution P(X, Y, Z) via Gibbs-sampling with all layers unclamped. This sampling procedure is supposed to approximate the samples from the model\\u2019s prior distribution P(Y), which are the expected states of the Y without any knowledge of X and Z. And we use P(Y|x, z) from a single step during Gibbs sampling as an approximation of Y, which is referred as to a spontaneous frame. We added more details of the samplings procedure in section 3.2.1 and 3.2.2. See more details in the responses to common comments C).\\n\\n3) \\u201cthe connection between homeostasis and the author's model is unclear.\\u201d\\n\\nRemoved the term \\u201chomeostasis\\u201d to avoid misunderstanding. See more details in the responses to common comments D).\\n\\n4) \\u201c3.1 'sparse DBNs show worse match to the biological findings as reported in [1]. A quantitative comparison between centered GDBMs and sparse DBNs is still open for future studies.' Where was it shown to be a worse match then?\\u201d\\n\\nWe have attempted to use the (sparse) DBN to model the cortical activity. But the results show worse match to the biological results in [1] as the spontaneous frames show less correlation to the orientation maps. Both the comparison of results and the interpretation have been included in the revised discussion part. 
See more details in the responses to common comments E).\\n\\n5)\\n* \\u201cNot sure why x,y,z are sometimes capitalized, sometimes not.\\u201d\\n\\nWe use the upper case letters to denote the un-instantiated variables, i.e. the values of variables are not given. In contrast, the lower case letters represent the instantiated variables.\\n\\n* \\u201cFigure 2: shouldn't the angles (figure titles) go from 0 to 180?\\u201d\", \"figure_2\": \"Done \\u2013 Thanks for the hint, typo corrected. The angles should go from 0 to 180.\"}", "{\"review\": \"A revised version of this paper is now available at http://arxiv.org/abs/1312.6108.\"}" ] }
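To make the sampling procedure described in the replies above concrete, here is a schematic NumPy sketch: run Gibbs sampling on the full joint P(x, y, z) with all layers unclamped and record P(y | x, z) at each step as a "spontaneous frame". The parameter shapes, the centering offsets (lam, mu, nu) and the toy usage are illustrative assumptions, not the trained model from the paper.

```python
# Schematic of unclamped Gibbs sampling in a two-layer Gaussian-binary DBM,
# collecting P(y | x, z) at each step as a "spontaneous frame".  Everything
# here (shapes, offsets, random parameters) is illustrative.
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda u: 1.0 / (1.0 + np.exp(-u))

def spontaneous_frames(W, V, a, b, c, lam, mu, nu, n_steps=1000, burn_in=100):
    x = a.copy()
    z = (rng.random(c.size) < nu).astype(float)
    frames = []
    for t in range(n_steps):
        p_y = sigmoid((x - lam) @ W + (z - nu) @ V.T + b)      # P(y | x, z)
        y = (rng.random(p_y.size) < p_y).astype(float)
        x = a + (y - mu) @ W.T + rng.standard_normal(a.size)   # P(x | y)
        p_z = sigmoid((y - mu) @ V + c)                        # P(z | y)
        z = (rng.random(p_z.size) < p_z).astype(float)
        if t >= burn_in:
            frames.append(p_y)          # one spontaneous frame per Gibbs step
    return np.array(frames)

# Toy usage with random weights (a trained model would be loaded instead):
nv, nh1, nh2 = 256, 100, 50
frames = spontaneous_frames(
    W=0.01 * rng.standard_normal((nv, nh1)),
    V=0.01 * rng.standard_normal((nh1, nh2)),
    a=np.zeros(nv), b=np.zeros(nh1), c=np.zeros(nh2),
    lam=np.zeros(nv), mu=0.5 * np.ones(nh1), nu=0.5 * np.ones(nh2))
print(frames.shape)
```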
oXSw7laxwUpln
An Empirical Investigation of Catastrophic Forgetting in Gradient-Based Neural Networks
[ "Yoshua Bengio", "Mehdi Mirza", "Ian Goodfellow", "Aaron Courville", "Xia Da" ]
Catastrophic forgetting is a problem faced by many machine learning models and algorithms. When trained on one task, then trained on a second task, many machine learning models 'forget'' how to perform the first task. This is widely believed to be a serious problem for neural networks. Here, we investigate the extent to which the catastrophic forgetting problem occurs for modern neural networks, comparing both established and recent gradient-based training algorithms and activation functions. We also examine the effect of the relationship between the first task and the second task on catastrophic forgetting.
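The two-task protocol this abstract describes (train on an "old" task, then continue training the same network on a "new" task while tracking test error on both) can be sketched as below. The synthetic tasks, architecture and hyperparameters are placeholders rather than the paper's experiments; each evaluation during second-task training yields one (new-task error, old-task error) point of the kind the reviews discuss.

```python
# Hypothetical sketch of the sequential two-task protocol: train on an "old"
# task, then keep training the same network on a "new" task while tracking test
# error on both.  Tasks, architecture and hyperparameters are placeholders.
import torch
import torch.nn as nn

def make_task(seed, n=2000, d=20):
    g = torch.Generator().manual_seed(seed)
    X = torch.randn(n, d, generator=g)
    w = torch.randn(d, generator=g)
    y = (X @ w > 0).long()
    return (X[:n // 2], y[:n // 2]), (X[n // 2:], y[n // 2:])

def error(net, split):
    X, y = split
    return (net(X).argmax(1) != y).float().mean().item()

def fit(net, train, steps=500, lr=0.1, eval_fn=None):
    opt = torch.optim.SGD(net.parameters(), lr=lr)
    X, y = train
    for t in range(steps):
        loss = nn.functional.cross_entropy(net(X), y)
        opt.zero_grad(); loss.backward(); opt.step()
        if eval_fn is not None and t % 50 == 0:
            eval_fn()

old_train, old_test = make_task(0)
new_train, new_test = make_task(1)
net = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))

fit(net, old_train)                             # first task
pairs = []                                      # (new-task error, old-task error)
fit(net, new_train,
    eval_fn=lambda: pairs.append((error(net, new_test), error(net, old_test))))
print(pairs[:3])
```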
[ "empirical investigation", "catastrophic", "neural networks", "many machine", "models", "second task", "first task", "problem", "algorithms" ]
submitted, no decision
https://openreview.net/pdf?id=oXSw7laxwUpln
https://openreview.net/forum?id=oXSw7laxwUpln
ICLR.cc/2014/conference
2014
{ "note_id": [ "O1zu1tHAs_OtY", "C2_L2xJZKTCxq", "ff2Ufvs1UolkP", "COqd1FIP0RONW", "UU9xOhFI5MUib", "daI9sS8PVBdhc", "7RzcD30jSEDfe", "11PgjmvJdz2kc", "3LOact4IaBp6u", "DUdZvpNtZ_0Ox", "VIv0IzYmZ5-zX", "s2Ke2gPbSkMja", "3PpH3F-_2JyW1", "gI_cXQ6AcliQ7", "X85EX3gs7xrDS", "CCdGC5h4O_Z5L", "iBz7iIpI6PMKO", "z60J9r5MxX6tp", "rWtJWM_B7bqIc" ], "note_type": [ "comment", "review", "review", "comment", "review", "comment", "review", "comment", "comment", "comment", "comment", "comment", "comment", "comment", "comment", "review", "comment", "comment", "comment" ], "note_created": [ 1392176640000, 1391845920000, 1391938440000, 1392775620000, 1392064140000, 1392786180000, 1389060840000, 1392786240000, 1392177960000, 1392175920000, 1392780540000, 1392175920000, 1392175560000, 1392175500000, 1392175560000, 1389001920000, 1392178140000, 1392176940000, 1392786180000 ], "note_signatures": [ [ "Ian Goodfellow" ], [ "anonymous reviewer d827" ], [ "anonymous reviewer 6ba5" ], [ "anonymous reviewer 6ba5" ], [ "anonymous reviewer 5894" ], [ "Ian Goodfellow" ], [ "Ian Goodfellow" ], [ "Ian Goodfellow" ], [ "Ian Goodfellow" ], [ "Ian Goodfellow" ], [ "Ian Goodfellow" ], [ "Ian Goodfellow" ], [ "Ian Goodfellow" ], [ "Ian Goodfellow" ], [ "Ian Goodfellow" ], [ "David Krueger" ], [ "Ian Goodfellow" ], [ "Ian Goodfellow" ], [ "Ian Goodfellow" ] ], "structured_content_str": [ "{\"reply\": \"'I encourage the authors to come up with a single number than can represent well the entirety of the curves shown in Figs. 1,3,5 and then plot out the influence of dropout vs. catastrophic forgetting, one curve per activation function (and per dataset, of course).'\\n\\nI'm not sure I understand this suggestion, can you elaborate? Are you saying to re-run the experiments done here but with n different values of the dropout rate? Please keep in mind that that each of these figures required 400 runs of deep net training, so generating them for multiple values of the dropout rate would be considerably expensive. The rate at which we could do this depends on the availability of shared computational resources, but we probably could not add even one point on this curve before the rebuttal period ends.\"}", "{\"title\": \"review of An Empirical Investigation of Catastrophic Forgeting in Gradient-Based Neural Networks\", \"review\": \"Summary\\nThis paper explores the behavior of a neural network when the cost function changes from an initial task to a second subsequent task. The authors construct pairs of tasks from standard benchmarks in textual sentiment classification and MNIST digit classification. For constructed pairs of tasks the authors analyze performance of networks trained with four different standard hidden unit activation functions, and dropout regularization. The neural networks used are standard architectures, and there is not anything new from a modeling perspective. The authors conclude that dropout regularization is beneficial on the datasets they evaluate, but there is not a consistent best choice of activation function.\\n\\nDetailed Review\\nThe problem of catastrophic forgetting seems irrelevant to most of the current research in deep learning, and seems to be a somewhat constructed thought experiment. The paper does not mention a practical task that requires training the same neural network on two problems in sequence. Indeed, I can\\u2019t think of such a task, and the authors should significantly rewrite the introduction to make clear why this problem is important. 
\\n\\nThe ability of a network to \\u201cremember\\u201d a previous task is purely an artifact of poor optimization. If the neural network cost function could be better optimized, neural networks, like SVMs, would completely forget any previous task. This is actually a good thing! Neural networks are function approximators and any \\u201cremembering\\u201d of a previous task exhibits poor performance on fitting the function of interest. This observation again points to doubts as to why catastrophic forgetting is a problem worth studying. \\n\\nIn terms of experiments, the authors do a relatively good job of exploring possible network architecture choices. My concern with the experimental setup is the contrived nature of the dataset pairs. The pairs of tasks used don\\u2019t seem representative of realistic task pairs, but again I can\\u2019t think of a task pairs which actually require sequential training. The paper seems incomplete without comparing sequential task training to jointly training on the two tasks. I suspect joint training should perform at least as well.\\n\\nWhen discussion dropout, the authors should be careful to not describe it as an alternative to SGD. Dropout is a regularization technique and can be written into the cost function of network training. SGD is an optimization technique used to optimize that cost function, so for example SGD can be replaced with conjugate gradient to train a model with dropout regularization. \\n\\nIn dropout training some papers have found significant performance impact from the setting of dropout probability. The authors argue that setting the dropout probability to 0.5 is usually best, but it would be nice to have an experimental validation of this claim.\\n\\nReview Summary\\n- Forgetting seems like a contrived problem not relevant to current deep learning research\\n- Evaluation is on toy datasets which do not require sequential task training and could easily be trained jointly\\n- No comparison is given to jointly training \\n+ Reasonable evaluation of various hidden unit activation function choices\"}", "{\"title\": \"review of An Empirical Investigation of Catastrophic Forgeting in Gradient-Based Neural Networks\", \"review\": \"The paper discusses the problem of catastrophic forgetting, in the context of deep neural networks. While this is the purported goal of it, an equally interesting outcome of this analysis is the in-depth comparison between a variety of common activation functions used with or without dropout. This analysis reveals that training with dropout is always beneficial in terms of adapting to the new task, but that the various activation functions studied can perform quite differently depending on the task in question. The authors\\u2019 analysis does suggests the maxout activation function is consistently a good choice.\\n\\nI am mystified about the comment on dropout having a \\u201cknown\\u201d range of values that works well (0.5 or 0.2, depending on whether it\\u2019s a hidden or visible unit). This may very well be true in the context of obtaining a good performance on a given set of tasks that the authors consider. But since dropout can be seen as having a regularization effect: this effect can manifest itself by less (or more) catastrophic forgetting. I encourage the authors to come up with a single number than can represent well the entirety of the curves shown in Figs. 1,3,5 and then plot out the influence of dropout vs. 
catastrophic forgetting, one curve per activation function (and per dataset, of course). One good single number may be the area under the curve, perhaps discounting for the left-most and right-most parts? Such an analysis might elucidate the aspects related to the increased number of parameters that dropout seems to favor in the \\u201csimilar\\u201d tasks scenario.\\n\\nOther than that, it would be nice if the authors proposed hypotheses for why maxout seems to perform better -- it is not obvious from their analysis that this should be the case.\", \"a_general_comment\": \"it seems rather odd that catastrophic forgetting seems to be somehow tied to the choice of an activation function in this paper -- this (plus dropout) cannot possibly be the only aspect that influences the magnitude of the catastrophic forgetting problem. Arguably, things like the optimization and, generally speaking, the regularization strategies employed, are equally as important. This paper touches very little upon those issues, glossing over anything other than SGD + dropout.\", \"small_comment\": \"there\\u2019s a dangling \\u201cMcClelland, James L\\u201d reference.\"}", "{\"reply\": \"Even simply re-running the experiments for one of these datasets, with n coarsely-spaced different values of dropout, would be interesting (if not in time for the rebuttal, just in general). I feel this would be an important addition to the results of the paper, and further strengthen your claims re: dropout being somehow beneficial.\"}", "{\"title\": \"review of An Empirical Investigation of Catastrophic Forgeting in Gradient-Based Neural Networks\", \"review\": \"The paper 'An Empirical Investigation of Catastrophic Forgetting in Gradient-Based Neural Networks' investigates the robustness of 'modern' feed-forward networks trained on dropout and a variety of activation functions to task-switching during training. Biological neural networks have the property that after being trained for a task A, and then trained for a different task B, they can be re-trained for A more quickly than networks that have never mastered A before. This paper investigates the capacity of various sorts of feed-forward neural networks to exhibit this property.\", \"novelty\": \"The authors state that the issue was 'well-studied in the past' but provide no references or summary of that past work. I am not familiar with that literature.\", \"quality\": \"The paper was a joy to read. As a reviewer, I want to thank the authors for taking the time to write so clearly. \\n\\nThe large parenthetical remark before section 4.1 should probably be the caption on the page with Figures 1 and 3.\", \"pro\": \"The issue of how deep networks might transfer patterns learned from one data set to another is important and topical.\\n\\nThe experiments are presented clearly and reveal an interesting new property of dropout training.\", \"con\": \"The suggestion that the first layer of a neural network is not rewired in response to a remapping of MNIST pixels, but rather upper layers handle the new task is counter-intuitive and intriguing. The paper would be stronger if the authors tested this hypothesis.\", \"comments\": \"\", \"experiments_intro\": \"a surprising omission was that prior to dropout L2 and L1 weight regularization\", \"figures_1_and_3\": \"A better caption would help these intriguing figures to stand on their own. Is a lower-envelope on a scatter plot of randomly drawn models being drawn here? 
Is there a modelling reason that motivates connecting the dots, or is it simply a visual aide? Did you consider leaving all of the scatter-plots on the axes (though it might be too busy)?\", \"i_think_the_experimental_protocol_used_here_is_not_quite_the_same_as_the_one_used_to_label_catastrophic_forgetting_in_neural_networks\": \"there, task A is trained to mastery, then B, then A again. The question is how fast someone can go from A to B and back, without regard for how good they are at A or B in absolute terms. Here, an early stopping criterion was used during B-training so that the question of how fast A degrades may be confounded with the networks' raw ability to do tasks A and B.\"}", "{\"reply\": \"'I am mystified about the comment on dropout having a \\u201cknown\\u201d range of values that works well (0.5 or 0.2, depending on whether it\\u2019s a hidden or visible unit). This may very well be true in the context of obtaining a good performance on a given set of tasks that the authors consider. But since dropout can be seen as having a regularization effect: this effect can manifest itself by less (or more) catastrophic forgetting.'\\n\\nThe performance curve is actually pretty flat over a broad region surrounding 0.5. See fig 2.6 of http://www.cs.toronto.edu/~nitish/msc_thesis.pdf\\nThe use of 0.5 is so standard that many descriptions of dropout, even by its inventors, omit the possibility of using a value other than 0.5. See for example section 4.2 of https://www.cs.toronto.edu/~hinton/absps/imagenet.pdf\\nKeep in mind that we found that dropout performed better than no-dropout on every single task, so this was a reasonably good setting. Further searchers can only improve the performance, at least on the validation set.\\n\\n'Other than that, it would be nice if the authors proposed hypotheses for why maxout seems to perform better -- it is not obvious from their analysis that this should be the case.'\\n\\nWe didn't really have any analysis, but we do have a hypothesis we could include in the paper. Each maxout unit contains multiple filters. Only the filter that wins the max receives any gradient at all. This means if filter i learns to be useful on task 0 but is orthogonal to most of the inputs seen in task 1, then probably filter j will be more active than filter i on task 1. Filter i thus won't be modified much during training on task 1. The final network will have filter i take on maximal activations on task 0 while filter j takes on maximal activations on task 1.\\n\\nHard LWTA has the same property, but it lacks pooling. Pooling has an additional benefit. Suppose that first layer unit k represents a concept that is useful for both task 0 and task 1. If maxout unit k contains filters i and j, with filter i working well on task 0 and filter j working well on task 1, then the second layer weight coming out of unit k doesn't need to change much in order to solve task 1. For LWTA, during the training of the second layer, there is no pooling and no concept of a 'unit k'. 
Filters i and j drive two separate units, which are both in block k, but the next layer has no way of knowing about the blocks a priori and must learn that filter j on task 1 means the same thing that filter i meant on task 0.\\n\\n\\n'A general comment: it seems rather odd that catastrophic forgetting seems to be somehow tied to the choice of an activation function in this paper'\\n\\nThis idea was put forward in a paper at NIPS this year (Srivastava 2013), which argued that using the hard LWTA activation function was a means of combating catastrophic forgetting. One of our main goals for this paper was to further test the idea from that paper.\\n\\n'Arguably, things like the optimization and, generally speaking, the regularization strategies employed, are equally as important. This paper touches very little upon those issues, glossing over anything other than SGD + dropout.'\\n\\nOur main objective was to determine how prone the 'modern' approaches are to catastrophic forgetting. That's why we used the max norm regularization--this was used by Geoff Hinton's group to set several state of the art results with their paper on dropout, and we also used it ourselves to set several state of the art results in our paper on maxout units.\"}", "{\"review\": \"In response to David's comments:\\n\\n-The version of the paper you saw was just an incomplete placeholder we uploaded to be able to enter the openreview submission system. We just barely uploaded a new version to ArXiv that is complete and ready to be reviewed. The new version addresses your comments, and should be visible as soon as ArXiv approves it.\\n-In particular, we added some more explanation of how the possibilities frontiers curves are generated. To be clear, they don't show performance on the new task worsening over time, as you thought. What they show is performance on the old task task worsening as performance on the new task improves. We do in fact use early stopping, so any overfitting is intrinsic to methods themselves. The points that are most overfit are probably bad at both tasks so they lie above and to the right of the frontier and have no effect on the plot.\\n-It sounds to me like you assumed one of the axes of the plots was a time axis, but I'm not sure if that was your mistake. If you read the axis labels you can see there is no time axis, both are error. You can't tell for sure which points came from what point in time, but presumably the ones on the upper left are from later in time, and the ones on the lower right are from earlier in time.\\n-We don't have any control over the author order display bug but we've asked the ICLR organizers to fix it.\"}", "{\"reply\": \"We're working on this, but as you said we won't have it in time for the rebuttal.\"}", "{\"reply\": \"' Experiments intro: a surprising omission was that prior to dropout L2 and L1 weight regularization '\\n\\nIt looks like your sentence got cut off, would you mind re-completing it?\\n\\nI'm assuming you're asking why the beginning of section 4 doesn't talk about L2 or L1 weight regularization. The reason is that the goal of section X isn't to provide a general history of the means of regularizing neural nets, but rather to describe the methods that we used in our experiments. We chose these methods based on which methods currently obtain state of the art on challenging neural network tasks. Currently, L1/L2 penalties on neural network weights are not popular. However, we do not see dropout as the replacement for these penalties. 
Instead, these penalties have been replaced by a constraint on the maximum norm of the weights for each hidden unit. Penalties can be seen as a means of implementing constraints using Lagrange multipliers, but penalties and constraints are not fully equivalent in the case of non-convex optimization with multiple local minima. Penalties affect the search direction regardless of the current point in parameter space, while constraints only affect the search direction at the edge of constraint space. The penalty can have some negative effects such as trapping hidden units with small weights near the origin, resulting in 'dead units.' Experimentally, we've consistently found that max-norm constraints perform significantly better than L2 penalties, in our work on both maxout (ICML 2013) and the MP-DBM (NIPS 2013). We got the idea of using the max-norm constraint instead of weight decay from Geoffrey Hinton's original arxiv paper on dropout, where it was also used to great effect. For these reasons, we felt that an empirical investigation of modern state of the art methods would do better to focus on weight norm constraints rather than weight norm penalties.\\n\\nLet me know if that was not what your question was going to be. Also, if you don't mind, please, let me know how much of this clarification should be added to the next version of the paper.\"}", "{\"reply\": \"Please delete this comment\"}", "{\"reply\": \"'The problem of catastrophic forgetting seems irrelevant to most of the current research in deep learning, and seems to be a somewhat constructed thought experiment. The paper does not mention a practical task that requires training the same neural network on two problems in sequence. Indeed, I can\\u2019t think of such a task, and the authors should significantly rewrite the introduction to make clear why this problem is important.'\\n\\nThe purpose of this work isn't to solve any specific task. It's not an engineering paper, it's a basic science paper. The goal of this paper is simply to characterize a phenomenon. We don't attempt to capitalize on the understanding of that phenomenon directly all in one paper. The purpose of publishing is to spread what we have learned about this phenomenon so that other people have an opportunity to capitalize on it. \\n\\nThere are many reasons to believe that an improved understanding of catastrophic forgetting could lead to advances in engineering applications. Here are two reasons that are of immediate commercial relevance:\\n\\n1) Underfitting on large datasets: Catastrophic forgetting is a property of stochastic gradient descent, which is currently the state of the art training method for many important problems such as object recognition, speech recognition, and drug activity prediction. It is well known that for large datasets, stochastic gradient descent can suffer from under fitting on large datasets: http://arxiv.org/abs/1301.3583 This may be because different examples have very different task profiles and the net can forget how to classify an example if it is not re-presented frequently enough. Since large datasets are the main driving force behind the recent industrial interest in deep learning, this seems especially important to understand.\\n\\n2) Reinforcement learning: When neural nets are used for reinforcement learning, catastrophic forgetting is an important issue because it is not computationally feasible to store all of the agent's experiences. 
Moreover, as the agent learns, the kinds of situations it has to deal with will change, but it is good if the agent remembers how to deal with a variety of situations. For example, a neural net that is first learning how to drive a car may frequently drive too aggressively and go into a skid. A net early in training will thus spend a lot of its capacity learning to get out of a skid. Later, a fully trained net will skid only very rarely because it has a good controller, but we would like it to remember how to get out of a skid in case one happens. Some experimental evidence suggests that catastrophic forgetting is in fact a serious issue for reinforcement learning. For example, Deep Mind recently demonstrated a neural reinforcement learning system that can play Atari games: http://arxiv.org/pdf/1312.5602v1.pdf The main algorithmic advance making their system perform acceptably was an experience replay mechanism.\\n\\nIn this work we don't try to completely solve either of the above issues, but rather we construct a test-bed to understand smaller scale issues that may be involved in either of the more complex scenarios.\\n\\nCatastrophic forgetting is also interesting not from an engineering perspective but from a neuroscience perspective. Neuroscience and machine learning frequently inform each other. An improved understanding of how catastrophic forgetting behaves in artificial neural networks can lead to advances in understanding of biological neural networks.\\n\\nFinally, a large part of our motivation for this paper is to correct what we see as limitations to a recent study of catastrophic forgetting that was published in NIPS this year. The inclusion of work on catastrophic forgetting in NIPS suggests that the community views it as a relevant topic.\\n\\n'The ability of a network to \\u201cremember\\u201d a previous task is purely an artifact of poor optimization. If the neural network cost function could be better optimized, neural networks, like SVMs, would completely forget any previous task. This is actually a good thing! Neural networks are function approximators and any \\u201cremembering\\u201d of a previous task exhibits poor performance on fitting the function of interest. This observation again points to doubts as to why catastrophic forgetting is a problem worth studying.'\\n\\nWe're never going to have a perfect optimization algorithm for neural networks. Optimizing them is NP-complete. Catastrophic forgetting is a property of the training algorithms we have for specific neural networks, and it is those algorithms that we study in this paper.\\n\\n'In terms of experiments, the authors do a relatively good job of exploring possible network architecture choices. My concern with the experimental setup is the contrived nature of the dataset pairs. The pairs of tasks used don\\u2019t seem representative of realistic task pairs, but again I can\\u2019t think of a task pairs which actually require sequential training.'\\n\\nThe task pairs are chosen to elicit characteristics of the training algorithms we use, not to be commercially interesting.\\n\\n'The paper seems incomplete without comparing sequential task training to jointly training on the two tasks. I suspect joint training should perform at least as well.'\\n\\nYes, obviously joint training will perform better, but I'm not sure why we would be interested in how it performs. 
Our goal isn't to obtain the best possible performance on two tasks, but rather to determine what happens to the performance on the first task when we train on a second task.\\n\\n'When discussion dropout, the authors should be careful to not describe it as an alternative to SGD. Dropout is a regularization technique and can be written into the cost function of network training. SGD is an optimization technique used to optimize that cost function, so for example SGD can be replaced with conjugate gradient to train a model with dropout regularization.'\\n\\nWe can rewrite the paper to make this clear.\\n\\n'In dropout training some papers have found significant performance impact from the setting of dropout probability. The authors argue that setting the dropout probability to 0.5 is usually best, but it would be nice to have an experimental validation of this claim.'\\n\\nThis has been experimentally validated before, see for example fig 2.6 of http://www.cs.toronto.edu/~nitish/msc_thesis.pdf . We can add this pointer to the paper.\\nTo our knowledge, the only time that a deviation from 0.5 has been valuable is on the input to the network or on convolutional layers.\"}", "{\"reply\": \"Please delete this comment\"}", "{\"reply\": \"'In dropout training some papers have found significant performance impact from the setting of dropout probability.' Could you specify which papers? To my knowledge, the deviation from p=0.5 is only significantly beneficial on input units or on convolutional layers.\"}", "{\"reply\": \"'In dropout training some papers have found significant performance impact from the setting of dropout probability.'\\nCould you specify which papers? To my knowledge, the deviation from p=0.5 is only significantly beneficial on input units or on convolutional layers.\"}", "{\"reply\": \"In dropout training some papers have found significant performance impact from the setting of dropout probability\"}", "{\"review\": \"Main comment: I would add 'dropout' to the abstract and even the title. It seems like the main result is that dropout helps ameliorate catastrophic forgetting.\\n\\nI also don't understand why the production possibility frontier curves don't decrease monotonically. \\n\\nMy impression is that you create these curves by training the network (initialized via training for the old task) on the new task and then at various points during training, examining test performance on both tasks. So then why does test error go up on the new task when you are training the network to perform this task? Is this just overfitting? If so, why not do early stopping? Or am I incorrect about how these plots are generated?\", \"little_things\": \"The authors are listed in a different order on this site vs. \\n\\non the paper.\\nHyperparameters is misspelled a few times. \\n3. I would add citation for hard LWTA (at least), I think \\n\\nthis stands for local winner takes all, but it took me a bit \\n\\nto figure that out. \\n3. 'and being training on the 'new task'' -> 'and BEGIN \\n\\ntraining on the new task'\\n\\n3.3 '...two cases that are NOT semantically similar' \\n\\n(missing not)\\n\\nFor the graphs, I would color match the dropout vs. SGD \\n\\nmodels with the same conditions.\\n\\nAlso, did you train the dropout nets using something other \\n\\nthan SGD? 
If not, I would clarify this and rename the 'SGD' \\n\\nmodels as 'no dropout' or something like that.\"}", "{\"reply\": \"'I think the experimental protocol used here is not quite the same as the one used to label catastrophic forgetting in neural networks: there, task A is trained to mastery, then B, then A again. The question is how fast someone can go from A to B and back, without regard for how good they are at A or B in absolute terms.'\\n\\nCould you give us a reference for one of these papers?\"}", "{\"reply\": \"'The paper was a joy to read. As a reviewer, I want to thank the authors for taking the time to write so clearly.'\\nThanks, we're glad you appreciated our efforts.\"}", "{\"reply\": \"'The authors state that the issue was 'well-studied in the past' but provide no references or summary of that past work. I am not familiar with that literature.'\", \"a_reasonably_good_review_is_available_here\": \"http://ox.no/files/catastrophic_forgetting_in_neural_networks.pdf\\nWe can cite this paper and / or write more of an overview in our own introduction for the final copy if desired.\\n\\n'The suggestion that the first layer of a neural network is not rewired in response to a remapping of MNIST pixels, but rather upper layers handle the new task is counter-intuitive and intriguing. The paper would be stronger if the authors tested this hypothesis.'\\n\\nThanks for asking us to check this. It looks like our visual inspection of the weights was incorrect:\\nTask 1 test error, using all parameters trained on task 1: 1.44%\\nTask 1 test error, restoring the layer 0 parameters to those trained on task 0: 78.97%\\nTask 1 test error, restoring the layer 1 parameters to those trained on task 0: 1.65%\\nThe network does appear to be doing the desired behavior after all. We'll change the claim and include these new experiments in the final copy.\\n\\n\\n'Figures 1 and 3: A better caption would help these intriguing figures to stand on their own. Is a lower-envelope on a scatter plot of randomly drawn models being drawn here? Is there a modelling reason that motivates connecting the dots, or is it simply a visual aide? Did you consider leaving all of the scatter-plots on the axes (though it might be too busy)?'\\n\\nWe will attempt to improve the caption for the final copy. Yes, it is a lower envelope. The idea of drawing the envelope is just to indicate the best performance we were able to demonstrate is obtainable by varying the hyper parameters. The existence of a 'bad' point in the scatterplot is not necessarily meaningful, since we can easily reach bad points for any method just by setting the hyper parameters badly. The existence of 'good points' is meaningful though, since they demonstrate that each method can perform at least as well as that point indicates. The scatterplot could be somewhat useful for understanding how robust the various methods are to hyper parameter choices, though the full scatterplot is very busy. We can add the full scatterplots as an appendix.\"}" ] }
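The maxout hypothesis discussed in the replies of the record above (only the filter that wins the max receives any gradient, so filters tuned to an old task are left largely untouched while a new task is trained) can be illustrated with a few lines of NumPy. This is a hedged sketch only: the single-unit setup, the sizes, and the names are assumptions made for illustration, not the experimental code referred to in the reviews.

```python
# Illustrative sketch of one maxout unit: the gradient w.r.t. the filter bank W
# is nonzero only for the filter that wins the max, which is the mechanism the
# reply above proposes as a reason maxout resists catastrophic forgetting.
import numpy as np

rng = np.random.RandomState(0)
n_in, n_filters = 784, 5                 # one maxout unit pooling 5 linear filters (assumed sizes)
W = 0.01 * rng.randn(n_filters, n_in)
b = np.zeros(n_filters)

def maxout_forward(x):
    """Return the unit's activation and the index of the winning filter."""
    z = W.dot(x) + b                     # per-filter pre-activations
    k = int(np.argmax(z))
    return z[k], k

def maxout_weight_grad(x, upstream_grad):
    """Gradient of a scalar loss w.r.t. W: only the winning filter's row is updated."""
    _, k = maxout_forward(x)
    dW = np.zeros_like(W)
    dW[k] = upstream_grad * x            # d z_k / d W_k = x; losing filters receive no signal
    return dW

x = rng.rand(n_in)                       # dummy input standing in for one example
dW = maxout_weight_grad(x, upstream_grad=1.0)
print("filters receiving gradient:", np.flatnonzero(np.abs(dW).sum(axis=1)))
```

Hard LWTA routes gradients the same way; the distinction raised in the reply is that maxout additionally pools its filters into a single output, so the weights leaving the unit can be reused when different filters win on different tasks.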
II-mIcAshLID0
Stopping Criteria in Contrastive Divergence: Alternatives to the Reconstruction Error
[ "David Buchaca", "Enrique Romero", "Ferran Mazzanti", "Jordi Delgado" ]
Restricted Boltzmann Machines (RBMs) are general unsupervised learning devices to ascertain generative models of data distributions. RBMs are often trained using the Contrastive Divergence learning algorithm (CD), an approximation to the gradient of the data log-likelihood. A simple reconstruction error is often used to decide whether the approximation provided by the CD algorithm is good enough, though several authors (Schulz et al., 2010; Fischer & Igel, 2010) have raised doubts concerning the feasibility of this procedure. However, not many alternatives to the reconstruction error have been used in the literature. In this manuscript we investigate simple alternatives to the reconstruction error in order to detect as soon as possible the decrease in the log-likelihood during learning.
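For reference, the reconstruction error mentioned in the abstract above is usually obtained from a single Gibbs step v -> h -> v'. The sketch below is only an illustration for a binary RBM; the shapes, the names, and the use of mean-field reconstruction probabilities are assumptions rather than the authors' exact procedure.

```python
# Minimal sketch of the usual CD reconstruction-error monitor for a binary RBM.
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def reconstruction_error(V, W, b_vis, b_hid, rng):
    """Mean squared reconstruction error on a batch V of shape (N, n_vis)."""
    H = rng.binomial(1, sigmoid(V.dot(W) + b_hid))   # sample h ~ p(h | v)
    V_recon = sigmoid(H.dot(W.T) + b_vis)            # mean-field p(v = 1 | h)
    return float(np.mean((V - V_recon) ** 2))

rng = np.random.RandomState(0)
n_vis, n_hid = 16, 8                                 # toy sizes, chosen only for the example
W = 0.01 * rng.randn(n_vis, n_hid)
b_vis, b_hid = np.zeros(n_vis), np.zeros(n_hid)
V = rng.binomial(1, 0.5, size=(10, n_vis)).astype(float)
print(reconstruction_error(V, W, b_vis, b_hid, rng))
```

As the reviews below discuss, a low value of this quantity does not guarantee that the log-likelihood is still increasing, which is what motivates the alternatives investigated in the paper.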
[ "reconstruction error", "contrastive divergence", "criteria", "alternatives", "rbms", "approximation", "boltzmann machines", "generative models", "data distributions" ]
submitted, no decision
https://openreview.net/pdf?id=II-mIcAshLID0
https://openreview.net/forum?id=II-mIcAshLID0
ICLR.cc/2014/conference
2014
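The reviews in the record below describe the proposed alternative as a ratio of probabilities between the training points and an equal number of points generated from perturbed hidden states (h drawn uniformly at random, or flipped to 1 - h), so that the intractable partition function cancels. The following is a hedged reconstruction of that idea for a binary RBM; the names, the mean-field choice for y, and the sign convention are assumptions, not the authors' implementation.

```python
# Sketch of the free-energy-based stopping criterion discussed in the reviews:
# compare log p of the training batch with log p of points generated from
# flipped hidden states; log Z cancels because both sums have the same length.
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def free_energy(V, W, b_vis, b_hid):
    """F(v) of a binary RBM, so that log p(v) = -F(v) - log Z."""
    return -V.dot(b_vis) - np.sum(np.logaddexp(0.0, V.dot(W) + b_hid), axis=1)

def stopping_criterion(X, W, b_vis, b_hid, rng):
    """log [ prod_i p(x_i) / prod_i p(y_i) ] = sum_i [ F(y_i) - F(x_i) ]."""
    H = rng.binomial(1, sigmoid(X.dot(W) + b_hid))     # h ~ p(h | x)
    Y = sigmoid((1 - H).dot(W.T) + b_vis)              # y from the flipped hidden states
    return float(np.sum(free_energy(Y, W, b_vis, b_hid)
                        - free_energy(X, W, b_vis, b_hid)))

rng = np.random.RandomState(0)
n_vis, n_hid = 16, 8                                   # toy sizes for the example
W = 0.01 * rng.randn(n_vis, n_hid)
b_vis, b_hid = np.zeros(n_vis), np.zeros(n_hid)
X = rng.binomial(1, 0.5, size=(10, n_vis)).astype(float)
print(stopping_criterion(X, W, b_vis, b_hid, rng))
```

According to the author responses in the record below, this quantity rises and peaks close to where the training log-likelihood peaks, which is what makes it usable as a stopping signal in cases where the reconstruction error is not.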
{ "note_id": [ "9ZGs9cekBp9w0", "byzJbRlNaEb8x", "wxTahwWyXKwlI", "xxEOxtQ0ZytgK", "mmVw9BfnguBDe", "OmXwbbinULbGy", "1qNo1qzRow7I0", "ETi6EqZjQ1tFe", "YjyZY1Uuwzj8p", "Y56-5VXmnRYoH" ], "note_type": [ "review", "comment", "review", "comment", "review", "review", "review", "review", "review", "comment" ], "note_created": [ 1391877840000, 1392825420000, 1390497300000, 1392825660000, 1390497240000, 1391496480000, 1390497240000, 1389665160000, 1390885920000, 1392825540000 ], "note_signatures": [ [ "anonymous reviewer 7bb8" ], [ "Jordi Delgado" ], [ "David Buchaca Prats" ], [ "Jordi Delgado" ], [ "David Buchaca Prats" ], [ "anonymous reviewer 0ef5" ], [ "David Buchaca Prats" ], [ "KyungHyun Cho" ], [ "anonymous reviewer 4ea5" ], [ "Jordi Delgado" ] ], "structured_content_str": [ "{\"title\": \"review of Stopping Criteria in Contrastive Divergence: Alternatives to the Reconstruction Error\", \"review\": \"This paper introduces an early-stopping criteria, to address the potential degeneracy issues in CD-training of Restricted Boltzmann Machine. The method is closely related to the method of Free-Energy differences (FED), which computes the difference of log probabilities between the training set X and a held-out validation set Y. In lieu of a validation set however, the authors propose to sample Y to points far from the training data (for which the log-prob should monotonically decrease during training). To do this, the authors propose sampling Y from the conditional distribution p(x | h), where h is either drawn uniformly over binary states, or clamped to the 'complement' of the infered latent states when conditioned on training data, i.e. h = 1-h^(i) with $h^(i) ~ p(h | x^(i))$.\\n\\nGiven the similarities to FED, the method is not entirely novel. The idea of biasing the Monte Carlo approximation of log Z to states $v$ sampled from the conditional p(v|h) is interesting and reminiscent of [1,2]: a uniform prior over h could be a quick and dirty way to explore the space of visible configurations having some probability mass under the model. The second option seems less justified in my opinion: why use the complement of h^(i) instead of h^(i) itself ? The former would allow to quickly sample (local) perturbations of x^(i), which seems more inline with the CD training criteria (which only raises the energy of nearby configurations).\\n\\nAll in all the paper is interesting, but much better suited to the workshop track of the conference. The method is not entirely novel and the experiments are still very preliminary.\\n(1) The datasets are very small and would need to be scaled to more realistic datasets, using AIS as a proxy to the true likelihood.\\n(2) The evidence in favor of the proposed criteria is rather weak. The behavior of the first criterion changes completely based on the dataset, while the second criterion fails when using CD-10 (with no clear explanation why). Also, the early-stopping point obtained by (ii) on the LSE dataset remains rather approximate. Using this criterion would have yielded a much lower likelihood than possible.\\n(3) The authors seem unaware of the FED method and do not compare against it.\\n\\nAs the proposed criteria are based on heuristics, they require very solid empirical evidence to justify their use. 
As it stands, the paper simply does not deliver.\", \"other\": [\"paper glosses over another widely used early-stopping heuristic: classification error.\", \"superfluous citation to (Bengio, 2009) for efficient block Gibbs sampling of RBMs.\", \"latex bug for visible and hidden unit biases in Eq 5\", \"'In doing so, drastic approximations [...] are performed'. Do not understand this sentence.\", \"'making zeta when learning is achieved' ?? Learning is an optimization process not an discrete time event.\", \"'A second possibility is to suitably compute [...] during data reconstruction'. Very clumsy way to say use 1-h^(i). The reconstruction phase conditions on h, therefore 'the value they should take during [...] reconstruction' is nonsensical.\", \"'Anyway, the behavior of the proposed criteria [...] should be further studied.'. Very uninformative statement whose writing style seems *very* inappropriate for a paper submission.\", \"'This westimator works well for CD1 but [not] for CD10, which is' - Missing [not]\", \"[1] Gr\\u00e9goire Mesnil, Salah Rifai, Yann Dauphin, Yoshua Bengio and Pascal Vincent, Surfing on the Manifold, Learning Workshop, Snowbird, 2012.\", \"[2] Yoshua Bengio, Gr\\u00e9goire Mesnil, Yann Dauphin and Salah Rifai, Better Mixing via Deep Representations, in: Proceedings of the 30th International Conference on Machine Learning (ICML'13), ACM, 2013\"]}", "{\"reply\": \"> Dear authors,\\n >\\n > Let me reveal my identity before I continue. I'm Kyunghyun Cho who\\n > wrote the earlier review (by an unexpected coincidence). The previous\\n > comments reflect what I wanted/want to say as an official reviewer,\\n > and I will not write another separate review.\\n >\\n\\nThanks for revealing your identity. It's nice to response to the\\ncomments of a reviewer previous to become a reviewer.\\n\\n > Instead, let me answer briefly to the response to my review from the\\n > authors:\\n >\\n > (Authors) 'If you change the value of the hidden units in any way,\\n > this delicate equilibrium is broken and you expect to get much lower\\n > probabilities for the visible variables.'\\n >\\n > => I cannot agree with this. Intuitively, if we believe that training\\n > makes latent variables learn potentially lower-dimensional manifold on\\n > which training samples lie, any small (or even large) change in the\\n > latent representation shouldn't correspond to a change in the input\\n > space that moves the point away from the manifold. I'd agree with your\\n > argument much more, if you were flipping all the bits of the input\\n > variable.\\n >\\n\\nIn our understanding, the situation you point out happens after the\\nRBM has successfully learned the data distribution (that is, when the\\ntraining has finished). During the early stages of the training\\nprocess, in contrast, we think that this property may not hold, since\\nthe weights are not good enough yet. We considered the possibility of\\nflipping the bits of the input data, but it would prevent working with\\nsymmetric data (like the 'Bars and Stripes' data set, for example).\\n\\n > (Authors) 'CD_1 as it is the cheapest way we know'\\n >\\n > => Computationally, I don't see why CD-1 should be cheaper than\\n > PCD. Memory-wise, if you follow the usual practice of maintaining only\\n > a few persistent samples, it shouldn't matter too much as\\n > well. Though, I agree that it's easier to train an RBM with CD-1 using\\n > a much higher learning rate, which may make learning progress faster.\\n >\\n\\nWe agree with you. 
Comparing CD_1 and PCD, only memory requirements\\nare different. As previously stated, it would be worth to see if the\\nproposed criterion works for PCD in a similar way than for CD_1.\"}", "{\"review\": \"Dear Kyunghyun Cho,\\n\\nwe appreciate very much the effort you've made to carefully read our\\npreprint. After discussing the topics you've raised we have tried to\\nwork out a new version of the paper that will (hopefully) clarify some\\naspects of our work. In the meantime we want to answer (at least some)\\nof your comments and questions.\\n\\n- The proposed stopping criterion itself is not so novel, in my\\n opinion. Essentially, it is a general framework in which, for\\n instance, CD minimizes, where $y^{(i)}$ is obtained by a single-step\\n Gibbs sampling from $x^{(i)}$. In fact, if $y^{(i)}$ is one step of\\n persistent MCMC chain, the stopping criterion corresponds to the\\n approximate maximum likelihood criterion used by PCD. In short, the\\n proposed criterion is, in fact, what a proper MLE should minimize.\\n\\n--- We are not sure to understand your comment regarding PCD. To our\\n understanding, PCD may also lead to a decreasing log-likelihood,\\n and the reconstruction error is not a good stopping criterion for\\n PCD either (see for instance Fischer & Igel, 2010). In PCD, the\\n value of $y^{(i)}$ is different from the ones defined in our\\n paper, and it is used for other purposes. Still, we agree with you\\n that the idea of computing $y^{(i)}$ in a single Gibbs step (in\\n whatever the recipe you use) and get a statistical estimator from\\n them, is not so novel (actually CD1 does that already). As you\\n mention below, the novelty lies on the way we choose $y^{(i)}$.\\n But in any case we agree we can make more emphasis on that aspect\\n in a newer version of the paper.\\n\\n- The novelty is, however, in the proposed ways to choose\\n $y^{(i)}$. The authors proposed two alternatives. Unfortunately (as\\n somewhat expected) the first method of selecting random points does\\n not work, which is only natural as if it did, why would anyone use\\n MCMC to estimate the model statistics of RBM? The second\\n alternative of using the visible samples conditioned on the\\n *flipped* hidden states seems to work better, but the authors do not\\n provide any good justification for this choice. \\n\\n--- Our justifications are more based on intuitions. We are currently\\n working on the mathematical grounds behind the ideas, but this is\\n still work in progress. But in any case the experimental results\\n are promising and seem to support our conclusions, which is the\\n main reason why we submitted the paper. In our mind, however, the\\n idea behind the work is slightly different from what you point\\n out. We believe the new stopping criteria will work better in\\n large spaces where the subset of states having a non-negligible\\n probability is small. In these systems, only very precise\\n combinations of weights and unit values leads to large\\n probabilities. If you change the value of the hidden units in any\\n way, this delicate equilibrium is broken and you expect to get\\n much lower probabilities for the visible variables. And you can do\\n that by either getting the proper value of the hidden units and take\\n their complementary (the criterion that works better, apparently),\\n or simply by changing them at random. In the learning process, one uses\\n MCMC because, in general, one wants to do the opposite. 
That is, to\\n identify the (small) regions of large probability, which is a\\n difficult task if the space is large and the set of relevant\\n states is small. But that implies that identifying states with low\\n probability should be much easier, and our hope is to find any of\\n these by generating values of the hidden variables that are very\\n different from the right ones. We believe that this is not clearly\\n seen on the experiments reported because the size of the spaces\\n analyzed is not that large. We plan to extend our numerical\\n analysis to other, larger systems where this should be more\\n evident.\\n\\n- The experiments show that the second choice is superior, but since\\n the experiments are rather small-scale, it is difficult to make a\\n solid conclusion from them. Also, the proposed method seems to only\\n work with CD_1 (see Fig. 4), but the authors' explanation on\\n possible reasons for this is not clear. \\n\\n--- Actually we are mostly interested in CD_1 as it is the cheapest\\n way we know (with respect to time and memory storage) to evaluate\\n the correlations required at the learning stage. We must admit,\\n however, that the reason why it seems to perform worse in CD_k is\\n somewhat unclear to us. As we said above, we are working on the\\n mathematical grounds of the method and we hope to have a more\\n clear understanding of the fine details of the method in the near\\n future. \\n\\n- One thing which is clearly missing from the experiments is the\\n comparison of the proposed criterion against the actual function CD\\n minimizes ($E_D left[ log p (x^{(i)}) - log p(y^{(i)}) \\night]$,\\n where $y^{(i)}$ is the one-step Gibbs sample starting from\\n $x^{(i)}$.) as well as the approximate likelihood (the same thing,\\n but use samples from persistent chains as $y^{(i)}$'s). Computing\\n these things should be no more expensive than computing the proposed\\n criterion, and may give better indication of how log-likelihood\\n evolves over time. This will be interesting to see. \\n\\n--- We have some of these results, but we decided not to include them\\n in the paper to limit its extensions. In any case we have seen\\n that the behavior is similar to the reconstruction error, and thus\\n show the same problems once compared to the likelihood itself. In\\n the paper we only included 'somewhat positive' results, but we\\n agree with you that these results may be interesting to be\\n discussed. \\n\\n- Of course, one last thing is, why anyone would try to train an RBM\\n to be a good generative model (maximizing the log-likelihood) using\\n CD in the first place, while PCD (with possibly other MCMC\\n algorithms than Gibbs) is available.\\n\\n\\u00a0--- The paper is not trying to answer this question. We just want to\\n\\u00a0 \\u00a0 \\u00a0explore new ways to improve the stopping criteria in a learning\\n\\u00a0 \\u00a0 \\u00a0algorithm where the reconstruction error does not work properly\\n\\u00a0 \\u00a0 \\u00a0(in general), and we have tested it in CD_1. Since the\\n\\u00a0 \\u00a0 \\u00a0reconstruction error is not a good stopping criterion for PCD\\n\\u00a0 \\u00a0 \\u00a0either, PCD may also benefit from other options. It would be\\n\\u00a0 \\u00a0 \\u00a0interesting to see if the same stopping criteria work properly\\n\\u00a0 \\u00a0 \\u00a0with PCD, but that we leave for a future work.\\n\\n-Despite these seemingly negative comments, I enjoyed reading the\\n paper, since it is clearly written and well motivated. 
A few more\\n experiments showing that the proposed methods of choosing $y^{(i)}$\\n are better than other possible choices (e.g., using Gibbs sampling) \\n would make the paper much stronger. \\n\\n--- Thanks a lot for the comments and suggestions. We agree with you\\n that there is more room to improve, and we are already working on\\n that. But at this point we didn't want to miss the opportunity to\\n submit or recent results to this conference in order to discuss\\n our ideas with other people with lots of expertise in the\\n field. Still we hope to follow this line and to get more\\n exhaustive results in the near future.\\n\\n- P. S. I wouldn't mind, if you cited my paper (IJCNN 2013) next to\\n (Desjardins et al., 2010) where I also proposed PT. \\n\\n--- Consider it done\"}", "{\"reply\": \"> This paper introduces an early-stopping criteria, to address the\\npotential degeneracy\\n > issues in CD-training of Restricted Boltzmann Machine. The method is \\nclosely related to\\n > the method of Free-Energy differences (FED), which computes the \\ndifference of log\\n > probabilities between the training set X and a held-out validation \\nset Y. In lieu of a\\n > validation set however, the authors propose to sample Y to points far \\nfrom the training\\n > data (for which the log-prob should monotonically decrease during \\ntraining). To do this,\\n > the authors propose sampling Y from the conditional distribution p(x \\n| h), where h is\\n > either drawn uniformly over binary states, or clamped to the \\n'complement' of the infered\\n > latent states when conditioned on training data, i.e. h = 1-h^(i) \\nwith $h^(i) ~ p(h | x^(i))$.\\n >\\n > Given the similarities to FED, the method is not entirely novel. The \\nidea of biasing the\\n > Monte Carlo approximation of log Z to states $v$ sampled from the \\nconditional p(v|h) is\\n > interesting and reminiscent of [1,2]: a uniform prior over h could be \\na quick and dirty way\\n > to explore the space of visible configurations having some \\nprobability mass under the\\n > model. The second option seems less justified in my opinion: why use \\nthe complement of\\n > h^(i) instead of h^(i) itself ? The former would allow to quickly \\nsample (local) perturbations\\n > of x^(i), which seems more inline with the CD training criteria \\n(which only raises the\\n > energy of nearby configurations).\\nIn general we agree that using part of the examples as a training set \\nand the rest as a\\nvalidation set is a good way to proceed. However in the studied data \\nsets that separation\\nwould lead to significant information loss that may lead to wrong results\\n(for instance in the bars and stripes problem one'd better show the \\nnetwork all\\npossible instances).\\n\\nIn addition, there is a fundamental difference between FED and the \\nproposed criterion,\\nwhich is the fact that in our case we want to compare the probabilities \\nof data from\\nthe training set (that should be high) with the probabilities of data in \\nthe complementary\\nsubspace (that should be low). In FED, the probabilities of the compared \\ndata must be\\nhigh in both cases.\\n\\n > All in all the paper is interesting, but much better suited to the \\nworkshop track of the\\n > conference. 
The method is not entirely novel and the experiments are \\nstill very\\n > preliminary.\\n > (1) The datasets are very small and would need to be scaled to more \\nrealistic datasets,\\n > using AIS as a proxy to the true likelihood.\\n\\nWe agree that it is interesting to explore the scalability with the \\nsystem size. However\\nin order to check against exact results (computation of the likelihood) \\none is restricted\\nto medium or small spaces. Furthermore we are also aware that AIS may \\nalso fail\\nin some cases (we already mention that in our paper and refer to \\nreference Schult et al. 2010).\\n\\n > (2) The evidence in favor of the proposed criteria is rather weak. \\nThe behavior of the first\\n > criterion changes completely based on the dataset, while the second \\ncriterion fails when\\n > using CD-10 (with no clear explanation why).\\n\\nOur justifications are more based on intuitions. We are currently \\nworking on the mathematical\\ngrounds behind the ideas, but this is still work in progress. But in any \\ncase the experimental\\nresults are promising and seem to support our conclusions, which is the \\nmain reason why we\\nsubmitted the paper.\\n\\n > Also, the early-stopping point obtained by (ii)\\n > on the LSE dataset remains rather approximate. Using this criterion \\nwould have yielded a\\n > much lower likelihood than possible.\\n\\nOur interpretation of these results is quite different. The estimator \\ngrows very quickly\\n(as the log-likelihood does) and reaches a maximum very close to the \\nregion where the\\nlog-likelihood is maximum also. In addition, the shapes of the curves \\nare very similar,\\nshowing that the criterion is gathering the essence of the changes in \\nthe log-likelihood.\\n\\n > (3) The authors seem unaware of the FED method and do not compare \\nagainst it.\\n\\nWe will include references to the FED method in the new revision.\"}", "{\"review\": \"Dear Kyunghyun Cho,\\n\\nwe appreciate very much the effort you've made to carefully read our\\npreprint. After discussing the topics you've raised we have tried to\\nwork out a new version of the paper that will (hopefully) clarify some\\naspects of our work. In the meantime we want to answer (at least some)\\nof your comments and questions.\\n\\n- The proposed stopping criterion itself is not so novel, in my\\n opinion. Essentially, it is a general framework in which, for\\n instance, CD minimizes, where $y^{(i)}$ is obtained by a single-step\\n Gibbs sampling from $x^{(i)}$. In fact, if $y^{(i)}$ is one step of\\n persistent MCMC chain, the stopping criterion corresponds to the\\n approximate maximum likelihood criterion used by PCD. In short, the\\n proposed criterion is, in fact, what a proper MLE should minimize.\\n\\n--- We are not sure to understand your comment regarding PCD. To our\\n understanding, PCD may also lead to a decreasing log-likelihood,\\n and the reconstruction error is not a good stopping criterion for\\n PCD either (see for instance Fischer & Igel, 2010). In PCD, the\\n value of $y^{(i)}$ is different from the ones defined in our\\n paper, and it is used for other purposes. Still, we agree with you\\n that the idea of computing $y^{(i)}$ in a single Gibbs step (in\\n whatever the recipe you use) and get a statistical estimator from\\n them, is not so novel (actually CD1 does that already). 
As you\\n mention below, the novelty lies on the way we choose $y^{(i)}$.\\n But in any case we agree we can make more emphasis on that aspect\\n in a newer version of the paper.\\n\\n- The novelty is, however, in the proposed ways to choose\\n $y^{(i)}$. The authors proposed two alternatives. Unfortunately (as\\n somewhat expected) the first method of selecting random points does\\n not work, which is only natural as if it did, why would anyone use\\n MCMC to estimate the model statistics of RBM? The second\\n alternative of using the visible samples conditioned on the\\n *flipped* hidden states seems to work better, but the authors do not\\n provide any good justification for this choice. \\n\\n--- Our justifications are more based on intuitions. We are currently\\n working on the mathematical grounds behind the ideas, but this is\\n still work in progress. But in any case the experimental results\\n are promising and seem to support our conclusions, which is the\\n main reason why we submitted the paper. In our mind, however, the\\n idea behind the work is slightly different from what you point\\n out. We believe the new stopping criteria will work better in\\n large spaces where the subset of states having a non-negligible\\n probability is small. In these systems, only very precise\\n combinations of weights and unit values leads to large\\n probabilities. If you change the value of the hidden units in any\\n way, this delicate equilibrium is broken and you expect to get\\n much lower probabilities for the visible variables. And you can do\\n that by either getting the proper value of the hidden units and take\\n their complementary (the criterion that works better, apparently),\\n or simply by changing them at random. In the learning process, one uses\\n MCMC because, in general, one wants to do the opposite. That is, to\\n identify the (small) regions of large probability, which is a\\n difficult task if the space is large and the set of relevant\\n states is small. But that implies that identifying states with low\\n probability should be much easier, and our hope is to find any of\\n these by generating values of the hidden variables that are very\\n different from the right ones. We believe that this is not clearly\\n seen on the experiments reported because the size of the spaces\\n analyzed is not that large. We plan to extend our numerical\\n analysis to other, larger systems where this should be more\\n evident.\\n\\n- The experiments show that the second choice is superior, but since\\n the experiments are rather small-scale, it is difficult to make a\\n solid conclusion from them. Also, the proposed method seems to only\\n work with CD_1 (see Fig. 4), but the authors' explanation on\\n possible reasons for this is not clear. \\n\\n--- Actually we are mostly interested in CD_1 as it is the cheapest\\n way we know (with respect to time and memory storage) to evaluate\\n the correlations required at the learning stage. We must admit,\\n however, that the reason why it seems to perform worse in CD_k is\\n somewhat unclear to us. As we said above, we are working on the\\n mathematical grounds of the method and we hope to have a more\\n clear understanding of the fine details of the method in the near\\n future. 
\\n\\n- One thing which is clearly missing from the experiments is the\\n comparison of the proposed criterion against the actual function CD\\n minimizes ($E_D left[ log p (x^{(i)}) - log p(y^{(i)}) \\night]$,\\n where $y^{(i)}$ is the one-step Gibbs sample starting from\\n $x^{(i)}$.) as well as the approximate likelihood (the same thing,\\n but use samples from persistent chains as $y^{(i)}$'s). Computing\\n these things should be no more expensive than computing the proposed\\n criterion, and may give better indication of how log-likelihood\\n evolves over time. This will be interesting to see. \\n\\n--- We have some of these results, but we decided not to include them\\n in the paper to limit its extensions. In any case we have seen\\n that the behavior is similar to the reconstruction error, and thus\\n show the same problems once compared to the likelihood itself. In\\n the paper we only included 'somewhat positive' results, but we\\n agree with you that these results may be interesting to be\\n discussed. \\n\\n- Of course, one last thing is, why anyone would try to train an RBM\\n to be a good generative model (maximizing the log-likelihood) using\\n CD in the first place, while PCD (with possibly other MCMC\\n algorithms than Gibbs) is available.\\n\\n\\u00a0--- The paper is not trying to answer this question. We just want to\\n\\u00a0 \\u00a0 \\u00a0explore new ways to improve the stopping criteria in a learning\\n\\u00a0 \\u00a0 \\u00a0algorithm where the reconstruction error does not work properly\\n\\u00a0 \\u00a0 \\u00a0(in general), and we have tested it in CD_1. Since the\\n\\u00a0 \\u00a0 \\u00a0reconstruction error is not a good stopping criterion for PCD\\n\\u00a0 \\u00a0 \\u00a0either, PCD may also benefit from other options. It would be\\n\\u00a0 \\u00a0 \\u00a0interesting to see if the same stopping criteria work properly\\n\\u00a0 \\u00a0 \\u00a0with PCD, but that we leave for a future work.\\n\\n-Despite these seemingly negative comments, I enjoyed reading the\\n paper, since it is clearly written and well motivated. A few more\\n experiments showing that the proposed methods of choosing $y^{(i)}$\\n are better than other possible choices (e.g., using Gibbs sampling) \\n would make the paper much stronger. \\n\\n--- Thanks a lot for the comments and suggestions. We agree with you\\n that there is more room to improve, and we are already working on\\n that. But at this point we didn't want to miss the opportunity to\\n submit or recent results to this conference in order to discuss\\n our ideas with other people with lots of expertise in the\\n field. Still we hope to follow this line and to get more\\n exhaustive results in the near future.\\n\\n- P. S. I wouldn't mind, if you cited my paper (IJCNN 2013) next to\\n (Desjardins et al., 2010) where I also proposed PT. \\n\\n--- Consider it done\"}", "{\"title\": \"review of Stopping Criteria in Contrastive Divergence: Alternatives to the Reconstruction Error\", \"review\": \"This paper investigates stopping criteria for training of RBMs by contrastive\\ndivergence (CD). Traditional reconstruction error is compared to a ratio of\\nprobabilities, namely the probabilities of the training data divided by the\\nprobabilities of an equal number of sampled points (with two variants of the\\nsampling strategy). By dividing probabilities, the partition function cancels\\nout, which makes computations tractable. 
Experiments on two toy datasets show\\nthat the two proposed variants are overall more useful than reconstruction\\nerror to identify the point of maximum likelihood on training data, even though\\nthey do not work on all cases investigated here.\\n\\nThe problem of early stopping of RBM training is definitely a relevant one,\\nsince the intractability of the partition function prevents properly monitoring\\nthe likelihood. The approach suggested here, however, lacks in both theoretical\\nmotivation and empirical evaluation, thus in my opinion is not quite ready for\\npublication.\\n\\nOn the theoretical side, the proposed criterion (eq. 8) is only heuristicallymotivated. Note in particular that a model might be able to reach an arbitraryhigh value of this criterion if P(y_i) -> 0 for some y_i, regardless or howbadly it may perform on the training x_i's. I'm also not sure why one would\\nnecessarily take N y_i's: why not generalize it to M y_i's, using Pi_i\\nP(y_i)^(N/M), so as to be able to choose the sample size based on available\\ncomputational resources? Finally, I am not convinced by the choice of y_i's:\\nfirst, the sampling rules (random h or '1 - h_i') are not well motivated (there\\nis no guarantee that we will not sample training points), second, since they\\nare defined from the model parameters they lead to a set of y_i's that evolves\\nduring training (thus the criterion may be unstable), third the y_i's are\\nexpectations and there is no explanation on whether it makes sense for binary\\nRBMs.\\n\\nOn the empirical side, my first concern is that experiments are performed\\non low-dimensional toy datasets, and there is nothing to tell us that \\nbehavior observed on such datasets will actually translate into higher\\ndimensional tasks. In particular, sampling-based methods tend to behave rather\\nnicely in low dimension, but may break horribly as the dimension increases...\\nThus it would have been good to add experiments in high dimension, for instance\\nusing AIS to estimate the partition function. My second concern is that only\", \"training_errors_are_reported\": \"although they are definitely interesting to\\nmonitor, someone using reconstruction error as a stopping criterion will always\\nuse a validation set for this, and will hope to stop at a point where validation\\nlog-likelihood is maximized. The comparisons in the paper are thus, for the\\nmost part, uninformative, since they only use the training data.\", \"a_few_more_minor_points\": \"- Eq. 5 is missing some characters\\n- The number of hidden units is not mentioned in the experiments\\n- Plots show 'reconstruction error' as something that is better when it increases,\\n which is counter-intuitive for an error\\n- Something potentially worth discussing is that RBMs are often used for\\n pre-training purpose in deep networks, and it is not clear that better\\n likelihood => better pre-training (if there is work on this topic, it should\\n be cited, as it is important to motivate this direction of research)\\n- Another application worth mentioning to this kind of technique is model\\n selection (which RBM is best?) => the proposed criterion may require a bit\\n of tweaking to answer this kind of question (common y_i's are needed)\"}", "{\"review\": \"Dear Kyunghyun Cho,\\n\\nwe appreciate very much the effort you've made to carefully read our\\npreprint. After discussing the topics you've raised we have tried to\\nwork out a new version of the paper that will (hopefully) clarify some\\naspects of our work. 
In the meantime we want to answer (at least some)\\nof your comments and questions.\\n\\n- The proposed stopping criterion itself is not so novel, in my\\n opinion. Essentially, it is a general framework in which, for\\n instance, CD minimizes, where $y^{(i)}$ is obtained by a single-step\\n Gibbs sampling from $x^{(i)}$. In fact, if $y^{(i)}$ is one step of\\n persistent MCMC chain, the stopping criterion corresponds to the\\n approximate maximum likelihood criterion used by PCD. In short, the\\n proposed criterion is, in fact, what a proper MLE should minimize.\\n\\n--- We are not sure to understand your comment regarding PCD. To our\\n understanding, PCD may also lead to a decreasing log-likelihood,\\n and the reconstruction error is not a good stopping criterion for\\n PCD either (see for instance Fischer & Igel, 2010). In PCD, the\\n value of $y^{(i)}$ is different from the ones defined in our\\n paper, and it is used for other purposes. Still, we agree with you\\n that the idea of computing $y^{(i)}$ in a single Gibbs step (in\\n whatever the recipe you use) and get a statistical estimator from\\n them, is not so novel (actually CD1 does that already). As you\\n mention below, the novelty lies on the way we choose $y^{(i)}$.\\n But in any case we agree we can make more emphasis on that aspect\\n in a newer version of the paper.\\n\\n- The novelty is, however, in the proposed ways to choose\\n $y^{(i)}$. The authors proposed two alternatives. Unfortunately (as\\n somewhat expected) the first method of selecting random points does\\n not work, which is only natural as if it did, why would anyone use\\n MCMC to estimate the model statistics of RBM? The second\\n alternative of using the visible samples conditioned on the\\n *flipped* hidden states seems to work better, but the authors do not\\n provide any good justification for this choice. \\n\\n--- Our justifications are more based on intuitions. We are currently\\n working on the mathematical grounds behind the ideas, but this is\\n still work in progress. But in any case the experimental results\\n are promising and seem to support our conclusions, which is the\\n main reason why we submitted the paper. In our mind, however, the\\n idea behind the work is slightly different from what you point\\n out. We believe the new stopping criteria will work better in\\n large spaces where the subset of states having a non-negligible\\n probability is small. In these systems, only very precise\\n combinations of weights and unit values leads to large\\n probabilities. If you change the value of the hidden units in any\\n way, this delicate equilibrium is broken and you expect to get\\n much lower probabilities for the visible variables. And you can do\\n that by either getting the proper value of the hidden units and take\\n their complementary (the criterion that works better, apparently),\\n or simply by changing them at random. In the learning process, one uses\\n MCMC because, in general, one wants to do the opposite. That is, to\\n identify the (small) regions of large probability, which is a\\n difficult task if the space is large and the set of relevant\\n states is small. But that implies that identifying states with low\\n probability should be much easier, and our hope is to find any of\\n these by generating values of the hidden variables that are very\\n different from the right ones. We believe that this is not clearly\\n seen on the experiments reported because the size of the spaces\\n analyzed is not that large. 
We plan to extend our numerical\\n analysis to other, larger systems where this should be more\\n evident.\\n\\n- The experiments show that the second choice is superior, but since\\n the experiments are rather small-scale, it is difficult to make a\\n solid conclusion from them. Also, the proposed method seems to only\\n work with CD_1 (see Fig. 4), but the authors' explanation on\\n possible reasons for this is not clear. \\n\\n--- Actually we are mostly interested in CD_1 as it is the cheapest\\n way we know (with respect to time and memory storage) to evaluate\\n the correlations required at the learning stage. We must admit,\\n however, that the reason why it seems to perform worse in CD_k is\\n somewhat unclear to us. As we said above, we are working on the\\n mathematical grounds of the method and we hope to have a more\\n clear understanding of the fine details of the method in the near\\n future. \\n\\n- One thing which is clearly missing from the experiments is the\\n comparison of the proposed criterion against the actual function CD\\n minimizes ($E_D left[ log p (x^{(i)}) - log p(y^{(i)}) \\night]$,\\n where $y^{(i)}$ is the one-step Gibbs sample starting from\\n $x^{(i)}$.) as well as the approximate likelihood (the same thing,\\n but use samples from persistent chains as $y^{(i)}$'s). Computing\\n these things should be no more expensive than computing the proposed\\n criterion, and may give better indication of how log-likelihood\\n evolves over time. This will be interesting to see. \\n\\n--- We have some of these results, but we decided not to include them\\n in the paper to limit its extensions. In any case we have seen\\n that the behavior is similar to the reconstruction error, and thus\\n show the same problems once compared to the likelihood itself. In\\n the paper we only included 'somewhat positive' results, but we\\n agree with you that these results may be interesting to be\\n discussed. \\n\\n- Of course, one last thing is, why anyone would try to train an RBM\\n to be a good generative model (maximizing the log-likelihood) using\\n CD in the first place, while PCD (with possibly other MCMC\\n algorithms than Gibbs) is available.\\n\\n\\u00a0--- The paper is not trying to answer this question. We just want to\\n\\u00a0 \\u00a0 \\u00a0explore new ways to improve the stopping criteria in a learning\\n\\u00a0 \\u00a0 \\u00a0algorithm where the reconstruction error does not work properly\\n\\u00a0 \\u00a0 \\u00a0(in general), and we have tested it in CD_1. Since the\\n\\u00a0 \\u00a0 \\u00a0reconstruction error is not a good stopping criterion for PCD\\n\\u00a0 \\u00a0 \\u00a0either, PCD may also benefit from other options. It would be\\n\\u00a0 \\u00a0 \\u00a0interesting to see if the same stopping criteria work properly\\n\\u00a0 \\u00a0 \\u00a0with PCD, but that we leave for a future work.\\n\\n-Despite these seemingly negative comments, I enjoyed reading the\\n paper, since it is clearly written and well motivated. A few more\\n experiments showing that the proposed methods of choosing $y^{(i)}$\\n are better than other possible choices (e.g., using Gibbs sampling) \\n would make the paper much stronger. \\n\\n--- Thanks a lot for the comments and suggestions. We agree with you\\n that there is more room to improve, and we are already working on\\n that. But at this point we didn't want to miss the opportunity to\\n submit or recent results to this conference in order to discuss\\n our ideas with other people with lots of expertise in the\\n field. 
Still we hope to follow this line and to get more\\n exhaustive results in the near future.\\n\\n- P. S. I wouldn't mind, if you cited my paper (IJCNN 2013) next to\\n (Desjardins et al., 2010) where I also proposed PT. \\n\\n--- Consider it done\"}", "{\"review\": [\"Hi David,\", \"After briefly going through your paper, I have a number of comments on it.\", \"The proposed stopping criterion itself is not so novel, in my opinion. Essentially, it is a general framework in which, for instance, CD minimizes, where $y^{(i)}$ is obtained by a single-step Gibbs sampling from $x^{(i)}$. In fact, if $y^{(i)}$ is one step of persistent MCMC chain, the stopping criterion corresponds to the approximate maximum likelihood criterion used by PCD. In short, the proposed criterion is, in fact, what a proper MLE should minimize.\", \"The novelty is, however, in the proposed ways to choose $y^{(i)}$. The authors proposed two alternatives. Unfortunately (as somewhat expected) the first method of selecting random points does not work, which is only natural as if it did, why would anyone use MCMC to estimate the model statistics of RBM? :) The second alternative of using the visible samples conditioned on the *flipped* hidden states seems to work better, but the authors do not provide any good justification for this choice.\", \"The experiments show that the second choice is superior, but since the experiments are rather small-scale, it is difficult to make a solid conclusion from them. Also, the proposed method seems to only work with CD_1 (see Fig. 4), but the authors' explanation on possible reasons for this is not clear.\", \"One thing which is clearly missing from the experiments is the comparison of the proposed criterion against the actual function CD minimizes ($E_D left[ log p (x^{(i)}) - log p(y^{(i)})\", \"ight]$, where $y^{(i)}$ is the one-step Gibbs sample starting from $x^{(i)}$.) as well as the approximate likelihood (the same thing, but use samples from persistent chains as $y^{(i)}$'s). Computing these things should be no more expensive than computing the proposed criterion, and may give better indication of how log-likelihood evolves over time. This will be interesting to see.\", \"Of course, one last thing is, why anyone would try to train an RBM to be a good generative model (maximizing the log-likelihood) using CD in the first place, while PCD (with possibly other MCMC algorithms than Gibbs) is available.\", \"Despite these seemingly negative comments, I enjoyed reading the paper, since it is clearly written and well motivated. A few more experiments showing that the proposed methods of choosing $y^{(i)}$ are better than other possible choices (e.g., using Gibbs sampling) would make the paper much stronger.\", \"Cho\", \"P. S. I wouldn't mind, if you cited my paper (IJCNN 2013) next to (Desjardins et al., 2010) where I also proposed PT. :)\"]}", "{\"title\": \"review of Stopping Criteria in Contrastive Divergence: Alternatives to the Reconstruction Error\", \"review\": \"Dear authors,\\n\\nLet me reveal my identity before I continue. I'm Kyunghyun Cho who wrote the earlier review (by an unexpected coincidence). The previous comments reflect what I wanted/want to say as an official reviewer, and I will not write another separate review. 
\\n\\nInstead, let me answer briefly to the response to my review from the authors:\\n\\n(Authors) 'If you change the value of the hidden units in any way, this delicate equilibrium is broken and you expect to get much lower probabilities for the visible variables.' \\n\\n=> I cannot agree with this. Intuitively, if we believe that training makes latent variables learn potentially lower-dimensional manifold on which training samples lie, any small (or even large) change in the latent representation shouldn't correspond to a change in the input space that moves the point away from the manifold. I'd agree with your argument much more, if you were flipping all the bits of the input variable.\\n\\n(Authors) 'CD_1 as it is the cheapest way we know'\\n\\n=> Computationally, I don't see why CD-1 should be cheaper than PCD. Memory-wise, if you follow the usual practice of maintaining only a few persistent samples, it shouldn't matter too much as well. Though, I agree that it's easier to train an RBM with CD-1 using a much higher learning rate, which may make learning progress faster.\\n\\n- Cho\"}", "{\"reply\": \"> On the theoretical side, the proposed criterion (eq. 8) is only\\n > heuristically motivated. Note\\n > in particular that a model might be able to reach an arbitrary high \\nvalue of this criterion if\\n > P(y_i) -> 0 for some y_i, regardless or how badly it may perform on \\nthe training x_i's.\\n\\nIn our mind we implicitely assume CD_1 is leading to the right place \\nstarting from\\nweights initialized at random. if that does not hold, it would lead to \\ntrouble no matter\\nwhat the stopping criteria you use.\\nunder our assumptions, the situation you point out should not happen, \\nhopefully.\\nOur experiments seem to back up our line of reasoning, although we agree \\nthe criteria should\", \"be_contrasted_against_other_problems\": \")\\n\\n > I'm also not sure why one would\\n > necessarily take N y_i's: why not generalize it to M y_i's, using Pi_i\\n > P(y_i)^(N/M), so as to be able to choose the sample size based on \\navailable\\n > computational resources?\\n\\nWe don't say that N_i should be equal to M_i: we just use the proposed \\nestimator,\\nbut of course it could happen that enhanced versions of it leads to \\nbetter results.\\nStill, we are confident the proposed criteria works well.\\n\\n > Finally, I am not convinced by the choice of y_i's:\\n > first, the sampling rules (random h or '1 - h_i') are not well \\nmotivated (there\\n > is no guarantee that we will not sample training points)\\n\\nOur belief is that at the very beginning of the training process the \\nproposed\\nsampling does lead to states that are far away from the training set, \\njust because\\nthe RBM has been initialized to random weights and bias, which are \\n*very* different\\nfrom the ones one will get at the optimal point of the learning stage.\\nof course the criteria gets worse when you surpass that point. 
But that\", \"is_precisely_our_main_message\": \"this does not happen while the system is learning and weights are still \\nnon-optimal.\\n\\n > second, since they are defined from the model parameters they lead to \\na set of\\n > y_i's that evolves during training (thus the criterion may be unstable)\\n\\nWe agree that on general grounds, any procedure that changes with model \\nparameters could be\\ndynamically unstable, even standard CD_k could suffer from this problem.\\nHowever we have not seen anything like that in the problems analyzed.\\nIn the present case, the size of the spaces studied, despite large, are \\nsmall enough\\nto compute the likelihood and to exhaustively explore all possible \\nstates. While\\ndoing so, we have not encountered any of the problems mentioned here.\\n\\n > third the y_i's are\\n > expectations and there is no explanation on whether it makes sense \\nfor binary\\n > RBMs.\\n\\nWe have seen that using average values reduces noise. That makes the \\nalgorithm more stable,\\nalthough not using expectations leads to the same statistical solutions.\\n\\n > On the empirical side, my first concern is that experiments are performed\\n > on low-dimensional toy datasets, and there is nothing to tell us that\\n > behavior observed on such datasets will actually translate into higher\\n > dimensional tasks. In particular, sampling-based methods tend to \\nbehave rather\\n > nicely in low dimension, but may break horribly as the dimension \\nincreases...\\n\\nWe are aware that we have to do a more exhaustive study on the scaling \\nproperties\\nof the method, and we are currently working on that. However, we know \\nstochastic sampling techniques\\nare best suited for large scale problems. In fcat, when the \\ndimensionality of the space increases,\\nstochastic methods are essentially the only ones that provide reliable \\nresults on a general\\nground. For instance, in numerical simulations of quantum many-body \\nsystems of strongly\\ninteracting partcicles, Monte Carlo methods are known to be the only \\nones that are able to provide\\nexact statistical solutions to the Schrodinger equation. We are \\nconfident the same applies in the\\npresent case.\\n\\n > Thus it would have been good to add experiments in high dimension,\\n > for instanceusing AIS to estimate the partition function.\\n\\nWe agree that it is interesting to explore the scalability with the \\nsystem size. However\\nin order to check against exact results (computation of the likelihood) \\none is restricted\\nto medium or small spaces. Furthermore we are also aware that AIS may \\nalso fail\\nin some cases (we already mention that in our paper and refer to \\nreference Schult et al. 2010).\\n\\n > My second concern is that only\\n > training errors are reported: although they are definitely interesting to\\n > monitor, someone using reconstruction error as a stopping criterion \\nwill always\\n > use a validation set for this, and will hope to stop at a point where \\nvalidation\\n > log-likelihood is maximized. The comparisons in the paper are thus, \\nfor the\\n > most part, uninformative, since they only use the training data.\\n\\nIn general we agree that using part of the examples as a training set \\nand the rest as a\\nvalidation set is a good way to proceed. 
However, in the studied data sets that separation would lead to significant information loss that may lead to wrong results (for instance in the bars and stripes problem one had better show the network all possible instances).\n\n > A few more minor points:\n > - Eq. 5 is missing some characters\n > - The number of hidden units is not mentioned in the experiments\n > - Plots show 'reconstruction error' as something that is better when it increases, which is counter-intuitive for an error\n > - Something potentially worth discussing is that RBMs are often used for pre-training purpose in deep networks, and it is not clear that better likelihood => better pre-training (if there is work on this topic, it should be cited, as it is important to motivate this direction of research)\n > - Another application worth mentioning to this kind of technique is model selection (which RBM is best?) => the proposed criterion may require a bit of tweaking to answer this kind of question (common y_i's are needed)\n\nWe thank you for these suggestions, and will try to accommodate them in the next revision.\"}" ] }
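The criterion debated in this thread, E_D[log p(x) - log p(y)] with y generated from the complement of the inferred hidden states, can be evaluated for a binary RBM without the partition function, because the log Z terms cancel and only free energies remain. The sketch below (numpy, Bernoulli-Bernoulli RBM with weights W and biases b, c) only illustrates that computation; it is not code from the paper, and taking the complement of a sampled hidden vector is just one reading of the 'flipped hidden states' recipe.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def free_energy(V, W, b, c):
    # F(v) = -v.b - sum_j softplus(c_j + v.W_{:,j});  log p(v) = -F(v) - log Z
    return -V @ b - np.logaddexp(0.0, c + V @ W).sum(axis=1)

def flipped_hidden_criterion(V, W, b, c, rng):
    # y^(i): sample h given x, flip every hidden unit, then sample v given the flipped h
    h = (rng.uniform(size=(V.shape[0], W.shape[1])) < sigmoid(c + V @ W)).astype(float)
    v_prob = sigmoid(b + (1.0 - h) @ W.T)
    Y = (rng.uniform(size=V.shape) < v_prob).astype(float)
    # E_D[log p(x) - log p(y)] = E_D[F(y) - F(x)], since log Z cancels in the difference
    return np.mean(free_energy(Y, W, b, c) - free_energy(V, W, b, c))

Tracking this quantity across epochs and stopping when it stops increasing is the kind of monitor the discussion is about; how well it tracks the true likelihood is exactly what the paper evaluates.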
zzM42D6twOztS
Stochastic Gradient Estimate Variance in Contrastive Divergence and Persistent Contrastive Divergence
[ "Mathias Berglund", "Tapani Raiko" ]
Contrastive Divergence (CD) and Persistent Contrastive Divergence (PCD) are popular methods for training the weights of Restricted Boltzmann Machines. However, both methods use an approximate method for sampling from the model distribution. As a side effect, these approximations yield significantly different variances for stochastic gradient estimates of individual samples. In this paper we show empirically that CD has a lower stochastic gradient estimate variance than exact sampling, while the sum of subsequent PCD estimates has a higher variance than exact sampling. The results give one explanation to the finding that CD can be used with smaller minibatches or higher learning rates than PCD.
[ "contrastive divergence", "pcd", "persistent contrastive divergence", "popular methods", "weights", "restricted boltzmann machines" ]
submitted, no decision
https://openreview.net/pdf?id=zzM42D6twOztS
https://openreview.net/forum?id=zzM42D6twOztS
ICLR.cc/2014/workshop
2014
{ "note_id": [ "FFW7YqOZd2FC0", "adiPdjpKvR56T", "zzq5dAvF5ndg4", "FsLVFk86XIY5D" ], "note_type": [ "review", "review", "review", "review" ], "note_created": [ 1392658680000, 1391848860000, 1392068280000, 1391972400000 ], "note_signatures": [ [ "Mathias Berglund" ], [ "anonymous reviewer 9c34" ], [ "Mathias Berglund" ], [ "anonymous reviewer 11c9" ] ], "structured_content_str": [ "{\"review\": \"The revised version of the paper has now been published. Thank you for all the helpful comments.\\n\\nAs an additional comment, please note that we are not measuring the variance of the average of the estimates obtained with M independent chains (i.e. we use a minibatch size of 1), since the variance of estimates obtained with averaging (i.e. using a minibatch size of M>1) is easy to compute from the case of a minibatch size of 1, given that the different estimates are independently sampled.\"}", "{\"title\": \"review of Stochastic Gradient Estimate Variance in Contrastive Divergence and Persistent Contrastive Divergence\", \"review\": \"This paper presents an empirical study of the variance in gradient estimates between contrastive divergence (CD) and persistent contrastive divergence (PCD). It is well known that PCD tends to be less stable than CD, requiring a larger learning rate and larger mini-batch sizes. The paper does a fairly good job of empirically verifying this phenomenon on several image datasets, and most of the results are consistent with expectations. The observation that the variance increases toward the end of learning is an interesting and not entirely obvious finding.\", \"one_issue_though_is_that_the_paper_seems_to_miss_a_crucial_part_of_the_story\": \"CD learning enjoys a low variance at the cost of an increase in bias. It is easy to construct a gradient estimate that exhibits zero variance, however practically speaking this would not be very useful. What is more interesting is the trade-off between bias and variance. For example, PCD exhibits significant variance on the silhouettes dataset. Does this mean that it requires an impractically small learning rate?\\n\\nIt has been shown in the past that the technique of iterate averaging can be used to remove much of the variance in PCD learning, but that it does not work nearly as well when applied to CD [1]. The fact that PCD is asymptotically unbiased, but exhibits high variance compared to CD supports these results.\\n \\n[2] should be cited for PCD as well.\", \"references\": \"[1] Kevin Swersky, Bo Chen, Benjamin Marlin, and Nando de Freitas, \\u201cA Tutorial on Stochastic Approximation Algorithms for Training Restricted Boltzmann Machines and Deep Belief Nets,\\u201d Information Theory and Applications Workshop, 2010.\\n\\n[2] Laurent Younes, \\u201cParametric inference for imperfectly observed Gibbsian \\ufb01elds,\\u201d Probability Theory and Related Fields, vol. 82, no. 4, pp. 625\\u2013645, 1989.\"}", "{\"review\": \"Reply to both reviewers:\\n\\nThank you for the extensive and helpful comments. As the bias of CD is quite well documented while the variance of PCD vs. CD is less so, the paper intentionally does not focus on the bias. We should however make this more clear in the introduction.\\n\\nAlthough it would be interesting to study the bias / variance trade-off in all the training settings in the paper, we still saw that there was value in documenting the variance also stand-alone in order to give sense of the magnitude of the differences in variance between PCD and CD. 
We would therefore still argue that documenting only the variance has value, although we agree that it would be meaningful to explore the bias/variance trade-off in a more extended discussion on the topic.\\n\\nWe will submit a revised version of the paper based on the comments as soon as possible.\", \"reply_to_anonymous_9c34\": \"Thank you for the references, we should mention iterate averaging as a method to alleviate the high variance for PCD. Thank you also for the second PCD reference.\", \"reply_to_anonymous_11c9\": \"Thank you for the comments. We fear that the reviewer 'Anonymous 11c9' is dubious about the experimental setting in Figure 2 due to a misunderstanding, but we hope to clear things up in our response (see below), and we hope to make that part much clearer in the next revision.\\n\\nRegarding the request for clarification, the Figures 1 and 2 show quite different results, which we realize can be misleading and gives an impression of a large asymmetry between CD and PCD. In Figure 1, the x-axis depicts the number of CD steps for the *same* data point, where only the number of steps in the CD sampling increases. Therefore, following one of the lines in Figure 1 along the x-axis, we are comparing the same update starting from the same data point, but with a longer and longer chain for the negative phase. The figure is hence what would be expected from a typical figure comparing different values of k for CD-k.\\n\\nHowever, in Figure 2, we are looking at the variance of the *mean* (or sum, see below) of subsequent estimates. Differing from Figure 1, in Figure 2 we are summing up subsequent estimates along the x-axis, which means that the further we go along the x-axis, the more estimates we have included. The reason for doing so is that the high variance of PCD is hypothesized to stem from subsequent negative phase estimates to be dependent.\\n\\nPlease also note that in Figure 2, the PCD variance is divided by the variance of the sum of k estimates using \\u201cexact\\u201d sampling \\u2013 which means that the figure is identical to taking the mean of subsequent gradient estimates and compare them to the mean of estimates with \\u201cexact\\u201d sampling. Therefore, for a chain that mixes well, this relative variance should not increase with summing more steps during training. This we can also see in Figure 2, where the variance for MNIST and CIFAR in the beginning of the training are very close to \\u201cexact\\u201d sampling when summing 1-20 subsequent steps (the horizontal lines in Figure 2). However, we realize that the text would be clearer if we used the word mean instead of sum, and we will revise the text accordingly. The pseudocode for Figure 2 is presented below.\\n\\nRegarding the baseline, we ran M >> 1 independent chains for 1000 steps, i.e. exactly aimed at running a large number of independent chains until convergence. Although we have not tried to validate whether 1000 steps is enough, we have simply assumed that 1000 steps is enough for approximate convergence.\\n\\nRegarding evaluating PCD variance on a model trained via CD we agree that the most reliable results would be obtained if we trained the model with e.g. enhanced gradient and parallel tempering instead of CD.\\n\\nRegarding the I-CD experiments, the 10 gradient estimates were run by initializing the Markov chain from a random training sample (which was different in all the 10 runs), but also different from the training sample used for the positive phase. 
Although we agree that the result of higher I-CD variance compared to CD is trivial, we still found the magnitude of variance of I-CD relevant to display. If for instance the variance of I-CD was very similar to that of CD (but much less than the \\u201cexact\\u201d estimate), the low variance of CD could be explained by the fact that we run the chain very few steps from *any* data point. However, we agree that the text should be changed to state this more clearly.\\n\\nThank you also for the clarity comments, as you assumed, they were both indeed errors in the text.\", \"pseudocode_for_figure_2\": \"do for each data point in data set {\\n use the data point for positive phase\\n run negative particle sampling for 1000 steps from random data point\\n initialize gradient_sum to zero\\n \\n do 20 times {\\n calculate gradient estimate using current positive and negative particle\\n add gradient estimate to gradient_sum\\n store sufficient statistics of the gradient_sum\\n pick new random data point for positive phase\\n run the negative particle chain one step forward (independent of positive phase)\\n }\\n}\\n\\ndo for each data point in data set {\\n use the data point for positive phase\\n run negative particle sampling for 1000 steps from random data point\\n initialize gradient_sum_exact to zero\\n \\n do 20 times {\\n calculate gradient estimate using current positive and negative particle\\n add gradient estimate to gradient_sum_exact\\n store sufficient statistics of the gradient_sum_exact\\n pick new random data point for positive phase\\n run negative particle sampling for 1000 steps from random data point\\n }\\n}\\n\\ncompute the sum of componentwise variances from the statistics of the gradient_sum for each of the 20 steps separately\\ncompute the sum of componentwise variances from the statistics of the gradient_sum_exact for each of the 20 steps separately\\ndivide the first sum above with the second sum above for each of the 20 steps separately\"}", "{\"title\": \"review of Stochastic Gradient Estimate Variance in Contrastive Divergence and Persistent Contrastive Divergence\", \"review\": \"This paper provides an empirical evaluation of the variance of maximum likelihood gradient estimators for RBMs, comparing Contrastive Divergence to Persistent CD. The results confirm a well known belief that PCD suffers from higher-variance, than the biased CD gradient. While the result may not be surprising, I believe the authors are correct in stating that the issue had not properly been investigated in the literature. It is unfortunate however that the authors avoid the much more important question of trade-off between bias and variance. Before making a final judgement on the paper however, I would ask that the authors clarify the following potential major issue. Other more general feedback for improving the paper follows.\", \"request_for_clarification\": \"Why the asymmetry between the estimation of CD-k vs PCD-k gradients ? The use of PCD-k is highly unusual. 
If the goal was to study variance as a function of the ergodicity of a single Markov chain, then PCD-k gradients should have been computed with a single training example (for the positive phase) and computing the negative phase gradient by *averaging* over the k-steps of the negative phase chain.\\n\\nCould the authors clarify (through pseudocode) how the gradients and their variance are computed for the experiments of Figure 2?\\n\\nDue to the loss of ergodicity of the Markov chain, the effective number of samples used to estimate the model expectation should indeed be larger at 10 epochs, than at 500 epochs. It is thus predictable that variance of the gradient estimates would increase during training. However, this is for a fixed value of k. I find very strange that the variance would *increase* with k (at a fixed point of training). I am left wondering if this is an artefact of the experimental protocol: the authors seem to be computing the variance of the *sum* of k-gradient estimates. This quantity will indeed grow with k, and will do so linearly if the estimates at each k are assumed to be independent. The linearity of the curves in Fig.2 gives some weight to this hypothesis.\", \"other_general_feedback\": [\"One area of concern is that the paper evaluates PCD in a regime which is not commonly used in practice: i.e. estimating the negative phase expectation via the correlated samples of a single Markov chain. I worry that some readers may conclude that PCD is not viable, due to its excessively large variance. For this reason, I think the paper would benefit from repeating the experiments but averaging over M independent chains.\", \"A perhaps more appropriate baseline, would be to run M >> 1 independent Markov chains to convergence and average the resulting gradient estimates. This might not change much, but the above would yield a better estimate of the ML gradient than CD-1000.\", \"Evaluating the variance of PCD gradients on a model trained via CD may be problematic. The mixing issues of PCD can be exacerbated when run on a CD-trained model, where the energy has only been fit locally around training data (Desjardins, 2010). While I do not expect the conclusions to change, I would be interested in seeing the same results on a PCD-k trained model.\", \"RE: I-CD experiments. 'This supports the hypothesis that the low variance of CD [stems from] the negative particle [being] sampled from the positive particle, and not from that the negative particle is sampled only a limited number of steps from an arbitrary data point'.\", \"I am not sure that the experiment allows you to draw this conclusion. When computing the 10 gradient estimates (for each training example) did you initialize the Markov chain from a random (but fixed throughout the 10 gradient evaluations) training example ? Otherwise, I believe the conclusion is rather uninteresting and doesn't shed light on the 'importance' of initializing the negative chain from the positive phase training data. In CD-training, the only variance stems from the trajectory taken by the (short) Markov chain from a fixed starting point. In I-CD, there are two sources of variance: (1) the trajectory of the chain, and (2) the starting point of the chain. If the chain is initialized randomly for the 10 gradient evaluations, then this will undoubtedly increase the variance of the estimator (but with lower bias).\"], \"clarity\": [\"In I-CD, the 'negative particle is sampled from a random positive particle' ? 
I would make explicit that you initialize the chain of I-CD from a random training example. In Section 4, 'arbitrary data point' left me wondering if you were instead initializing the chain from an independent pseudo-sample of the model (using i.e. a uniform distribution or a factorial approximation to p(v)).\", \"'Conversely, the variance of the mean of subsequent variance estimates using PCD is significantly higher' ? Did the authors mean 'the variance of the mean of subsequent gradient estimates' ? Otherwise, please consider rephrasing.\"]}" ] }
eOP7egJ1wveRW
Learning Factored Representations in a Deep Mixture of Experts
[ "David Eigen", "Marc'Aurelio Ranzato", "Ilya Sutskever" ]
Mixtures of Experts combine the outputs of several 'expert' networks, each of which specializes in a different part of the input space. This is achieved by training a 'gating' network that maps each input to a distribution over the experts. Such models show promise for building larger networks that are still cheap to compute at test time, and more parallelizable at training time. In this work, we extend the Mixture of Experts to a stacked model, the Deep Mixture of Experts, with multiple sets of gating and experts. This exponentially increases the number of effective experts by associating each input with a combination of experts at each layer, yet maintains a modest model size. On a randomly translated version of the MNIST dataset, we find that the Deep Mixture of Experts automatically learns to develop location-dependent ('where') experts at the first layer, and class-specific ('what') experts at the second layer. In addition, we see that the different combinations are in use when the model is applied to a dataset of speech monophones. These results demonstrate effective use of all expert combinations.
[ "experts", "deep mixture", "factored representations", "input", "experts mixtures", "outputs", "several", "networks", "different part", "input space" ]
submitted, no decision
https://openreview.net/pdf?id=eOP7egJ1wveRW
https://openreview.net/forum?id=eOP7egJ1wveRW
ICLR.cc/2014/workshop
2014
{ "note_id": [ "--5uYip1KdY1B", "OOxLKAd6LBO_C", "ccU6RwPFLaROG", "3fVm9U8jmI9ZW", "__vZdXgmZXdMz", "T29y23Xay3UVQ", "xxuVPAmBVc4BE", "ccXQi_g3QhiMR" ], "note_type": [ "review", "review", "comment", "review", "comment", "review", "comment", "comment" ], "note_created": [ 1391843280000, 1391636100000, 1392878160000, 1389855180000, 1392877860000, 1391016120000, 1392877980000, 1392879180000 ], "note_signatures": [ [ "anonymous reviewer 3af9" ], [ "Liangliang Cao" ], [ "David Eigen" ], [ "anonymous reviewer 4f75" ], [ "David Eigen" ], [ "anonymous reviewer c87d" ], [ "David Eigen" ], [ "David Eigen" ] ], "structured_content_str": [ "{\"title\": \"review of Learning Factored Representations in a Deep Mixture of Experts\", \"review\": \"The paper introduce a deep mixture of experts model which contains multiple layers each of them contains multiple experts and a gating network. The idea is nice and the presentation is clear but the experiments lack proper, needed, comparisons with baseline systems for the Jittered MNIST and the monophone speech datasets.\\n\\nAs the authors mentioned in conclusion, the experiments use all experts for all data points which doesn\\u2019t achieve the main purpose of the papers, i.e. faster training and testing. It is important to show how does this system perform against a deep NN baseline with the same number of parameters in terms of accuracy and training time per epoch.\\nRegarding the speech task. What is the error you are presenting in Table 2, is it the Phone or Frame error rate?\"}", "{\"review\": \"I am interested in the topic of this paper but my impression after reading is still that deep MOE is hard to train and we need to know a number of tricks including constrained training and fine tuning. I would expect it would be harder to train deeper models (say, 4 or 5 layers)\", \"several_suggestions\": [\"About experimental comparison. If I understand correctly, Table 1 and 2 only compare performances from several configurations of 2-layer MOE. I would be interesting to see how much better compared with basic MOE (1-layer).\", \"About Jordan and Jacob's HMOE. Section 2 reviews the differences between DMOE and HMOE. Which model is more scalable? I am curious about the comparison with HMOE on both accuracy and speed.\", \"Training + testing accuracy. I like that the current submission conveys more information by reporting the performance of training and testing. However, it will be even more interesting to report the curves of two errors during SGD training. Also I am a little confused: are you using a validation set with SGD? How is the performance on validation set during training?\"]}", "{\"reply\": \"Thank you for your comments and suggestions. We now include DNN baselines for Jittered MNIST. In response to your question re: which error rate, it is the phone error. This has been updated in the new version as well.\"}", "{\"title\": \"review of Learning Factored Representations in a Deep Mixture of Experts\", \"review\": \"This paper extends the mixture-of-experts (MoE) model by stacking several blocks of the MoEs to form a deep MoE. In this model, each mixture weight is implemented with a gating network. The mixtures at each block is different. The whole deep MoE is trained jointly using the stochastic gradient descent algorithm. The motivation of the work is to reduce the decoding time by exploiting the structure imposed in the MoE model. 
The model was evaluated on the MNIST and speech monophone classification tasks.\\n\\nThe idea of deep MoE is interesting and, although not difficult to come out, is novel. I found the fact that the first and second blocks focus on distinguishing different patterns is particularly interesting. \\n\\nHowever, I feel that the effectiveness and the benefit of the model is not supported by the evidence presented in the paper. \\n\\n1.\\tIt\\u2019s not clear how or whether the proposed deep MoE can beat the fully connected normal DNNs if the same number of the model parameters are used (or even when deep MoEs use more parameters). A comparison against the fully connected DNN on the two tasks is needed. In many cases we don\\u2019t want to sacrifice accuracy for small speed improvement. \\n2.\\tIt\\u2019s not clear whether the claimed computation reduction is true. It would be desirable if a comparison on the computation cost between the deep MoE and the fully connected conventional DNN is provided when both the number of classes is small (say 10) and large (say 1K-10K). The comparison should also consider the fact that the sparseness pattern in the deep MoE is random and unknown beforehand may not save computation at all when SIMD instructions are used.\\n3.\\tIt is also unclear whether deep MoE performs better than the single-block MoE. It appears to me, according to the results presented, the deep MoE actually performs worse. The concatenation trick improved the result on the MNIST. However, from my experience, the gain is more likely from the concatenation of the hidden features instead of the deep architecture used.\\n\\nThere is also a minor presentation issue. The models on row 2 and 3 are identical in Table 2 but the results are different. What is the difference between these two models?\"}", "{\"reply\": \"Thank you for your review. Responding to your various points:\\n\\n'A comparison against the fully connected DNN on the two tasks is needed'\\n\\nFor Jittered MNIST, we ran these baselines and are including the results. For Monophone Speech, there are unfortunately some IP issues that prevent us from running this now -- however, there are still the second layer single-expert and concatenated-experts baselines.\\n\\n\\n'It\\u2019s not clear whether the claimed computation reduction is true'\\n\\nIn this work, we use the all-experts mixture and have no computational reductions yet. We feel the fact that the model factorizes is a promising result in this direction, however. This was explained in the discussion, but we will be more explicit about this in the introduction as well.\\n\\n\\n'The concatenation trick improved the result on the MNIST.'\\n\\nThis concatenation was actually intended as a baseline target that the mixture should not be able to beat, since it concatenates the experts' outputs instead of superimposing them (this also increases the number of parameters in the final softmax layer). We demonstrate that the DMoE falls in between this and the single-expert baseline -- it is best to be as close as possible to the concatenated experts bound. This is explained at the bottom of page 3.\\n\\n\\n'The models on row 2 and 3 are identical in Table 2 but the results are different'\\n\\nThanks for pointing this out; these used two different sized gating networks at the second mixture layer (50 and 20 hiddens). 
We now include all the gating network sizes in these tables.\"}", "{\"title\": \"review of Learning Factored Representations in a Deep Mixture of Experts\", \"review\": \"The paper extends the concept of mixtures of experts to multiple layers of experts. Well, at least in theory - in practise authors stopped their experiments at only two such layers - which somehow invalidates the use of buzzy 'deep' word in the title - does more (than two) layers of mixtures still help?\\n\\nThe clue idea is to collaboratively optimise different sub-networks representing either experts or gating networks. Authors propose also 'the trick' to effectively learn mixing networks by preserving too rapid selection of dominant experts at the beginning of training stage.\\n\\nIt's hard to deduce whether presented idea gives a real advantage over, for example, usual -- one or two hidden layers feed-forward networks with the same total number of parameters. Perhaps, I am also missing something important here -- but was there any good reason for using the Jittered MNIST instead of the MNIST itself? In the end both are just toy benchmarks while the latter gives you the ability to cite and compare your work to other many other reported results. If you did that, not doing some basic baselines by yourself would be OK.\\n\\nI've got similar comments to the monophone voice classification. On the top I've already written for MNIST I do not see the need to use simplified proprietary database. It would be better to do the experiments in TIMIT benchmark and then cite other works that reports frame accuracy (where a single frame is a monophone) so the reader could get a bit wider picture of how your work fits into broader perspective.\\n\\nAnyway, idea is sufficiently novel and interesting and I am in favour of accept. Perhaps the authors could at least improve MNIST experimental aspect.\"}", "{\"reply\": \"Thank you for your comments. In response to your questions:\\n\\n'does more (than two) layers of mixtures still help'\\n\\nWe did not try more than two layers yet.\\n\\n\\n'reason for using the Jittered MNIST instead of the MNIST itself'\\n\\nJittering places digits at different spatial locations, which the first layer learns to factor out. By jittering the dataset ourselves, we can explicitly measure this effect, as shown in Fig 2.\"}", "{\"reply\": \"Thanks for your comments.\\n\\nTables 1 and 2 have comparisons for 1-layer MoE on the last lines of each table.\", \"re\": \"scalability: We are currently using an all-experts mixture in this work, so haven't realized any computational gains yet. However, the fact that the DMoE factorizes is an interesting result that we think is a promising step towards partitioning these networks efficiently.\\n\\nFor training curves, the paper is somewhat packed as it is, and already twice the recommended length for workshop submissions, so it seems infeasible to include these. The reported results are on the final train/test split using fixed numbers of epochs validated beforehand.\"}" ] }
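A minimal forward pass for the stacked model discussed above, in which each layer mixes its experts' outputs with a gating softmax and the second mixture layer operates on the first layer's mixed output, might look as follows (numpy). Layer sizes, the ReLU experts, the single-matrix gating networks, and the final softmax are assumptions for illustration; the paper's training details (jointly trained gating and experts, with the assignment constraint used early in training) are not reproduced here.

import numpy as np

def softmax(a):
    a = a - a.max(axis=-1, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=-1, keepdims=True)

class Expert:
    # a one-hidden-layer ReLU expert (sizes are placeholders)
    def __init__(self, d_in, d_hid, d_out, rng):
        self.W1 = 0.01 * rng.standard_normal((d_in, d_hid))
        self.W2 = 0.01 * rng.standard_normal((d_hid, d_out))
    def __call__(self, x):
        return np.maximum(0.0, x @ self.W1) @ self.W2

class MoELayer:
    # z = sum_i g_i(x) * e_i(x), where g is the softmax output of a linear gating network
    def __init__(self, d_in, d_hid, d_out, n_experts, rng):
        self.experts = [Expert(d_in, d_hid, d_out, rng) for _ in range(n_experts)]
        self.Wg = 0.01 * rng.standard_normal((d_in, n_experts))
    def __call__(self, x):
        g = softmax(x @ self.Wg)                               # (batch, n_experts)
        e = np.stack([ex(x) for ex in self.experts], axis=-1)  # (batch, d_out, n_experts)
        return np.einsum('bdn,bn->bd', e, g)

rng = np.random.default_rng(0)
layer1 = MoELayer(784, 128, 64, 4, rng)   # e.g. "where" experts on jittered MNIST
layer2 = MoELayer(64, 128, 10, 4, rng)    # e.g. "what" experts
x = rng.standard_normal((8, 784))
class_probs = softmax(layer2(layer1(x)))  # a final softmax on top of the second mixture

With two layers of 4 experts each, an input is effectively routed through one of 16 expert combinations while only 8 experts' worth of parameters are stored, which is the exponential-combinations argument made in the abstract.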
ssDPnHvkedao6
Learning Semantic Script Knowledge with Event Embeddings
[ "Ashutosh Modi", "ivan titov" ]
Induction of common sense knowledge about prototypical sequences of events has recently received much attention. Instead of inducing this knowledge in the form of graphs, as in much of the previous work, in our method, distributed representations of event realizations are computed based on distributed representations of predicates and their arguments, and then these representations are used to predict prototypical event orderings. The parameters of the compositional process for computing the event representations and the ranking component of the model are jointly estimated from unlabeled texts. We show that this approach results in a substantial boost in ordering performance with respect to previous methods.
[ "semantic script knowledge", "representations", "event embeddings", "event embeddings induction", "common sense knowledge", "prototypical sequences", "events", "much attention", "knowledge", "form" ]
submitted, no decision
https://openreview.net/pdf?id=ssDPnHvkedao6
https://openreview.net/forum?id=ssDPnHvkedao6
ICLR.cc/2014/workshop
2014
{ "note_id": [ "CdfIWqqXIkdT-", "vvxMv-uZQWDNr", "IhlvUSBbaVUbQ", "OV0_ZIkXpHOXu", "UiBqvVVsPEv5t", "27T0Aaudf37di", "_9Q-_PbJ6d95N" ], "note_type": [ "comment", "review", "review", "review", "comment", "review", "review" ], "note_created": [ 1392063900000, 1391516760000, 1391787960000, 1392734280000, 1392063900000, 1392069840000, 1390323900000 ], "note_signatures": [ [ "Ashutosh Modi" ], [ "anonymous reviewer b099" ], [ "anonymous reviewer 60ec" ], [ "Ashutosh Modi" ], [ "Ashutosh Modi" ], [ "Ashutosh Modi" ], [ "Ashutosh Modi" ] ], "structured_content_str": [ "{\"reply\": \"-- Why call it 'unlabeled'?: We used unlabeled in the sense that we used texts without any kind of semantic annotation on top of them. But we agree that this is confusing, especially given that the texts were written by Amazon turkers specifically for this task. We will edit the paper accordingly.\\n-- Representing more complex sentences?: We will clarify this point in the paper. The example 'fill water in coffee maker' contains 2 phrases as arguments ( 'water' and 'in coffee maker'), we use their 'lexical' heads (i.e. 'water' and 'maker') only. The embeddings of these two words and of the predicate ('fill') are then used as the input to the hidden layer (see Figure 2). The same procedure is used for predicates with more than 2 arguments (just more arguments are used as inputs to the hidden layer). In other words, we use a bag-of-arguments model. \\n-- Too much related work: Given that there have been much work on this or related task in NLP, we believe that we should explain how our approach (and the general representation learning framework) is different. We would prefer not to shorten this section.\"}", "{\"title\": \"review of Learning Semantic Script Knowledge with Event Embeddings\", \"review\": \"The authors propose a model that takes a set of events (written as English text) as input, and outputs the temporal ordering of those events. As opposed to a previous DAG based method (also used as a baseline here), in this work words are represented as vectors (initialized with Collobert's SENNA embeddings) and are input into a two layer neural net whose output is also a vector embedding. The output is then taken as input to an online ranking model (PRank) and the whole thing (including the word vectors) are trained using backprop. A dataset containing short sequences of events (e.g. the process of making coffee) gathered for previous work using MTurk is used for train and test. The proposed embedding method shows a substantial improvement over the DAG baseline.\\n\\nThis is interesting work, I thought the execution was good, and the results are impressive. I only have a few suggestions/questions: first, why, in the abstract and elsewhere, do you claim to be using unlabeled data? The data is labeled by order of events (by MTurkers), is it not? I suspect that you mean that no further labeling was done, but this is confusing. Second, your model (Fig. 1) shows one predicate (i.e. verb) and two arguments (i.e. nouns), but some of the examples from the ESD data are more complex (e.g. 'fill water in coffee maker'). How are these more complex phrases mapped to your model? 
Finally, you use a lot of space on previous work; I think that the paper would be improved by adding more details on your method, and shortening the previous work sections (1 and 2.1) by better focusing them.\", \"a_minor_issue\": \"at the end of Section 1, some unnecessary extra space has been inserted.\"}", "{\"title\": \"review of Learning Semantic Script Knowledge with Event Embeddings\", \"review\": \"This paper investigates a model which aims at predicting the order of\\nevents; each event is an english sentence. While previous methods\\nrelied on a graph representation to infer the right order, the\\nproposed model is made of two stages. The first stage use a continuous\\nrepresentation of a verb frame, where the predicate and its arguments\\nare represented by their word embeddings. A neural network is used to\\nderive this continuous representation in order to capture the\\ncompositionality within the verb frame. The second stage uses a large\\nmargin extension of PRank. The learning scheme is very interesting:\\nthe error made by the ranker is used to update the ranker parameters,\\nbut is also back-propagated to update the NN parameters. \\n\\nThis paper is well written and describes a nice idea to solve a\\ndifficult problem. The experimental setup is convincing (including the\\ndescription of the task and how the learning resources were built). I\\nonly have a few suggestions/questions.\\n\\nFor a conference that is focused on representation learning, it could\\nbe interesting to discuss whether the word embeddings provided by\\nSENNA need to be updated. For instance, the authors could compare\\ntheir performances to a system where the initial word embeddings are\\nfixed. Moreover, the evaluation metric is F1, but how the objective\\nfunction is related to this metric. Maybe a footnote could say a few\\nwords about that and I'm curious to see how the objective function\\nevolves during training. The ranking error function is quite similar\\nto metrics used in MT for reordering evaluation (see for instance the\\nwork of Alexandra Birch in 2009).\"}", "{\"review\": \"We have submitted a new version of the paper. We made the changes we promised above.\"}", "{\"reply\": \"-- Keeping embeddings fixed to the ones produced by SENNA: We have just tried doing this, and obtained about the same results (slightly better: 84.3 average F1 vs. 84.1 F1 in the paper). In theory, keeping them fixed may not be a good idea, as learning in the (mostly) language modeling context (as in SENNA) tends to assign similar representations to antonyms/opposites (e.g., open and close). And the opposites tend to appear at different positions in event sequences. However, the fact the results are similar may suggest that our dataset is not large enough to learn meaningful refinements. Perhaps, using SENNA embeddings to define an informative prior on the representation would be a better idea but we will leave this for future work. We will add a footnote mentioning the above experiment.\\n\\n-- F1 vs. accuracy: The binary classification problem is fairly balanced, so we would not expect much of a difference between accuracy (which we essentially optimize) and F1; we chose to use the same metric in evaluation as considered in the previous work. \\n\\n-- We are not familiar with the reordering metric used in Birch et al. (2009), thanks for the pointer.\"}", "{\"review\": \"We thank both reviewers for the comments. See our feedback above. 
We will upload the revised version by the end of the week.\"}", "{\"review\": \"Corrected a typo in the paper and updated version is available at : http://arxiv.org/abs/1312.5198v2\"}" ] }
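The composition described in the authors' replies above, where the embeddings of the predicate and of the lexical heads of its arguments are fed jointly into a hidden layer whose output is scored by a PRank-style ranker, can be sketched roughly as below (numpy). The vocabulary, dimensionalities, tanh nonlinearity, zero-padding of missing arguments, and the way scores are compared are all illustrative assumptions; the actual model learns the embeddings, the composition weights, and the PRank thresholds jointly.

import numpy as np

rng = np.random.default_rng(0)
d_word, d_event, max_args = 50, 30, 2               # SENNA embeddings are 50-dimensional
vocab = {'fill': 0, 'water': 1, 'maker': 2, 'pour': 3, 'coffee': 4}
E = 0.1 * rng.standard_normal((len(vocab), d_word))                 # word embeddings (initialized, then trained)
C = 0.1 * rng.standard_normal(((1 + max_args) * d_word, d_event))   # composition weights of the hidden layer
w = 0.1 * rng.standard_normal(d_event)                              # ranking direction of the PRank-style scorer

def event_embedding(predicate, arg_heads):
    # concatenate predicate and argument-head embeddings, padding missing argument slots with zeros
    slots = [E[vocab[predicate]]] + [E[vocab[a]] for a in arg_heads]
    slots += [np.zeros(d_word)] * (1 + max_args - len(slots))
    return np.tanh(np.concatenate(slots) @ C)

def event_score(predicate, arg_heads):
    # a scalar "position" score; PRank thresholds on this axis give the predicted rank
    return float(event_embedding(predicate, arg_heads) @ w)

# after training, one would expect "fill water in (coffee) maker" to score earlier than "pour coffee"
print(event_score('fill', ['water', 'maker']), event_score('pour', ['coffee']))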
6dukdvBcxn6cR
Learning States Representations in POMDP
[ "Gabriella Contardo", "Ludovic Denoyer", "Thierry Artieres", "patrick gallinari" ]
We propose to deal with sequential processes where only partial observations are available by learning a latent representation space on which policies may be accurately learned.
[ "states representations", "pomdp", "sequential processes", "partial observations", "available", "latent representation space", "policies" ]
submitted, no decision
https://openreview.net/pdf?id=6dukdvBcxn6cR
https://openreview.net/forum?id=6dukdvBcxn6cR
ICLR.cc/2014/workshop
2014
{ "note_id": [ "GGBm_ztp7nyT5", "xheJhouLQlYLp" ], "note_type": [ "review", "review" ], "note_created": [ 1391443260000, 1391638440000 ], "note_signatures": [ [ "anonymous reviewer 2349" ], [ "Ludovic Denoyer" ] ], "structured_content_str": [ "{\"title\": \"review of Learning States Representations in POMDP\", \"review\": \"Learning States Representations in POMDP\\nGabriella Contardo, Ludovic Denoyer, Thierry Artieres, Patrick Gallinari\", \"summary\": \"The authors present a model that learns representations of sequential inputs on random trajectories through the state space, then feed those into a reinforcement learner, to deal with partially observable environments. They apply this to a POMDP mountain car problem, where the velocity of the car is not visible but has to be inferred from successive observations.\", \"comments\": \"Previous work has solved more difficult versions of the POMDP mountain car problem, where the input was raw vision as opposed to the very low-dimensional state space of the authors. Please discuss in the context of the present approach:\\n\\nG. Cuccu, M. Luciw, J. Schmidhuber, F. Gomez. Intrinsically Motivated Evolutionary Search for Vision-Based Reinforcement Learning. In Proc. Joint IEEE International Conference on Development and Learning (ICDL) and on Epigenetic Robotics (ICDL-EpiRob 2011), Frankfurt, 2011.\", \"from_the_abstract\": \"'The method is successfully demonstrated on a vision-based version of the well-known mountain car benchmark, where controllers receive only single high-dimensional visual images of the environment, from a third-person perspective, instead of the standard two-dimensional state vector which includes information about velocity.'\", \"sec_4\": \"'For example (Gissl\\u0013en et al., 2011) proposed to learn reprentations with an auto-associative model with a fi\\nxed-size history'\\n\\nThis is not accurate - the representation of (Gissl\\u0013en et al., 2011) had fixed size, but in principle the history could have arbitrary depth, because they used a RAAM like Pollack's (NIPS 1989) as unsupervised sequence compressor:\\n\\nJ. B. Pollack. Implications of Recursive Distributed Representations. Advances in Neural Information Processing Systems I, NIPS, 527-536, 1989.\\n\\nOf course, RNN for POMPD RL have been around since 1990 - please discuss differences to the approach of the authors:\\n\\nJ. Schmidhuber. An on-line algorithm for dynamic reinforcement learning and planning in reactive environments. In Proc. IEEE/INNS International Joint Conference on Neural Networks, San Diego, volume 2, pages 253-258, 1990.\", \"one_should_probably_also_discuss_recent_results_with_huge_rnn_for_vision_based_pomdp_rl\": \"J. Koutnik, G. Cuccu, J. Schmidhuber, F. Gomez. Evolving Large-Scale Neural Networks for Vision-Based Reinforcement Learning. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), Amsterdam, 2013.\", \"general_recommendation\": \"It is not quite clear to this reviewer how this work goes beyond the previous work mentioned above. At the very least, the authors should make the differences very clear.\"}", "{\"review\": \"We first thank the reviewer for the comment.\\n\\nConcerning the fact that most difficult versions of mountain car have been already solved, we perfectly agree. In our paper, we are using a very simple version of mountain car in order to demonstrate the ability of our approach to extract hidden information from observations and from the dynamicity of the system. 
More difficult versions of mountain car and of other complex reinforcement learning tasks are under investigation, and we plan to present these experiments in a full paper in the next months.\\n\\nSince the paper size was restricted to 3 pages in the call for papers, we have focused on the relation of our model with the closest/lastest models of the literature and thus we agree that this submission lacks some important references. We will submit quickly a longer version of the paper discussing the differences between our approach and other existing methods that are not described in the current version.\", \"the_three_papers_cited_by_the_reviewer_propose_to_tackle_a_control_problem_where_the_reward_function_is_known\": \"the recurrent neural networks are used as controllers for the task to solve, and thus are able to extract an hidden representation that depends on the task. Our approach is unsupervised and learn representations using randomly chosen trajectories without using the reward function. On this regard, our work is closer to the approach of (Gisslen et al., 2011) and (Duell et al., 2012) that are also based on unsupervised learning. In comparison to these last approaches, the originality is to propose a transductive model that directly learns the model of the world in the representation space, allowing us to compute simulations of the future of the system even if no information is observed (we note that (Schmidhuber,1990) could be adaptated to do so in RL applications). This also allows different ways to infer representations on new observations.\\nWe'd like to stress out that the representations learned with our model could also be used for different tasks than RL.\\n\\n==\\n'This is not accurate - the representation of (Gissl en et al., 2011) had fixed size, but in principle the history could have arbitrary depth, because they used a RAAM like Pollack's (NIPS 1989) as unsupervised sequence compressor: J. B. Pollack. Implications of Recursive Distributed Representations. Advances in Neural Information Processing Systems I, NIPS, 527-536, 1989.'\\n\\nYes, thank you for pointing out the lack of clarity of this sentence, it will be modified in the next version.\"}" ] }
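The core idea the authors describe above, learning transductively from random trajectories one representation per time step together with a model of the dynamics expressed directly in that representation space so that the latent state can be rolled forward even when nothing is observed, could be sketched as below (numpy, plain SGD). The losses, the linear decoder and dynamics, and every size and constant are assumptions made for illustration; this is not the model of the paper.

import numpy as np

rng = np.random.default_rng(0)
T, d_obs, d_z, lr, lam = 500, 1, 4, 1e-2, 1.0
obs = rng.standard_normal((T, d_obs))        # stand-in for partial observations from random trajectories

Z = 0.01 * rng.standard_normal((T, d_z))     # transductive: one free latent representation per time step
G = 0.01 * rng.standard_normal((d_z, d_obs)) # decoder: representation -> observation
M = 0.01 * rng.standard_normal((d_z, d_z))   # dynamics learned directly in representation space

for epoch in range(100):
    for t in range(T - 1):
        r = Z[t] @ G - obs[t]        # reconstruction residual: z_t should explain the current observation
        d = Z[t] @ M - Z[t + 1]      # dynamics residual: z_{t+1} should be predictable from z_t
        gZt, gZt1 = G @ r + lam * (M @ d), -lam * d
        gG, gM = np.outer(Z[t], r), lam * np.outer(Z[t], d)
        Z[t] -= lr * gZt
        Z[t + 1] -= lr * gZt1
        G -= lr * gG
        M -= lr * gM

z_future = Z[-1] @ M @ M    # two simulated steps in latent space, with no observations required

A reinforcement learner can then be trained on these representations instead of the raw partial observations, which is the use case discussed in the thread.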
Fav_FXoOhRFOQ
Do Deep Nets Really Need to be Deep?
[ "Jimmy Lei Ba", "Rich Caurana" ]
Currently, deep neural networks are the state of the art on problems such as speech recognition and computer vision. In this extended abstract, we show that shallow feed-forward networks can learn the complex functions previously learned by deep nets and achieve accuracies previously only achievable with deep models. Moreover, the shallow neural nets can learn these deep functions using a total number of parameters similar to the original deep model. We evaluate our method on TIMIT phoneme recognition task and are able to train shallow fully-connected nets that perform similarly to complex, well-engineered, deep convolutional architectures. Our success in training shallow neural nets to mimic deeper models suggests that there probably exist better algorithms for training shallow feed-forward nets than those currently available.
[ "deep nets", "shallow", "deep", "shallow neural nets", "nets", "deep neural networks", "state", "art", "problems", "speech recognition" ]
submitted, no decision
https://openreview.net/pdf?id=Fav_FXoOhRFOQ
https://openreview.net/forum?id=Fav_FXoOhRFOQ
ICLR.cc/2014/workshop
2014
{ "note_id": [ "zf2o4C95ov4cF", "o39CmeYuHU3uB", "__fQe8rzQg_zM", "siDqiQjb6wigv", "88XABsMFS5BZk", "XXxADkylAkDl2", "AH9vZzWqgrHV-", "v7XEhIcAFPvAa", "XX5Iws7jGn6gb", "yxJGyrO9Y1LFo" ], "note_type": [ "review", "comment", "review", "review", "comment", "comment", "review", "comment", "review", "review" ], "note_created": [ 1392772020000, 1392772140000, 1391654520000, 1393335480000, 1389354660000, 1392771960000, 1388926500000, 1389354780000, 1389118620000, 1391460360000 ], "note_signatures": [ [ "Jimmy Ba" ], [ "Jimmy Ba" ], [ "anonymous reviewer d691" ], [ "Jost Tobias Springenberg" ], [ "Jimmy Ba" ], [ "Jimmy Ba" ], [ "David Krueger" ], [ "Jimmy Ba" ], [ "Yoshua Bengio" ], [ "anonymous reviewer a881" ] ], "structured_content_str": [ "{\"review\": \"The reviewer says: \\u201cThey conclude that current learning algorithms are a better fit for deeper architectures and that shallow models can benefit from improved optimization techniques.\\u201d We are not really sure of this, but it is a possibility and we are trying to do the experiments necessary to answer this question.\\n\\nThanks for pointing us to related work on re-parameterizing the weight matrices. We added these to the extended abstract. What we propose is somewhat different from this prior work. Specifically, we apply weight factorization during training (as opposed to after training) to speed convergence of the mimic model --- the weights of the linear layer and the weights in the non-linear hidden layer are trained at the same time with backprop.\\n\\nThe SNN-MIMIC models in Table 1 use 250 linear units in the first layer. We updated the paper to include this information.\\n\\nOn page 2, the features are logmel: fourier-based filter banks with 40 coefficients distributed on a mel-scale. We have modified the paper to clarify this.\\n\\nThe ECNN on page 2 is an ensemble of multiple CNNs. Both SNN-MIMIC models (8k and 400k) are trained to mimic the ECNN. We mimic an ensemble of CNNs because we don\\u2019t have any unlabeled data for TIMIT and thus must use the modest-sized train set for compression. With only 1.1M points available for compression, we observe that the student MIMIC model is usually 2-3% less accurate than the teacher model. We also observe, however, that whenever we make the teacher model more accurate, the student MIMIC model gains a similar amount of accuracy as well (suggesting that the fixed gap between the deep teacher and shallow MIMIC models is due to a lack of unlabeled data, not a limited representational power in the shallow models). Because our goal is to train a shallow model of high accuracy, we needed to use a teacher model of maximum accuracy to help overcome this gap between the teacher and mimic not. If we had a large unlabeled data set for TIMIT this would not be necessary. The ensemble of CNNs is significantly more accurate than a single CNN, but we have not yet published that result. We modified the paper to make all of this clearer.\"}", "{\"reply\": \"The reviewer says: \\u201cThey conclude that current learning algorithms are a better fit for deeper architectures and that shallow models can benefit from improved optimization techniques.\\u201d We are not really sure of this, but it is a possibility and we are trying to do the experiments necessary to answer this question.\\n\\nThanks for pointing us to related work on re-parameterizing the weight matrices. We added these to the extended abstract. What we propose is somewhat different from this prior work. 
Specifically, we apply weight factorization during training (as opposed to after training) to speed convergence of the mimic model --- the weights of the linear layer and the weights in the non-linear hidden layer are trained at the same time with backprop.\\n\\nThe SNN-MIMIC models in Table 1 use 250 linear units in the first layer. We updated the paper to include this information.\\n\\nOn page 2, the features are logmel: fourier-based filter banks with 40 coefficients distributed on a mel-scale. We have modified the paper to clarify this.\\n\\nThe ECNN on page 2 is an ensemble of multiple CNNs. Both SNN-MIMIC models (8k and 400k) are trained to mimic the ECNN. We mimic an ensemble of CNNs because we don\\u2019t have any unlabeled data for TIMIT and thus must use the modest-sized train set for compression. With only 1.1M points available for compression, we observe that the student MIMIC model is usually 2-3% less accurate than the teacher model. We also observe, however, that whenever we make the teacher model more accurate, the student MIMIC model gains a similar amount of accuracy as well (suggesting that the fixed gap between the deep teacher and shallow MIMIC models is due to a lack of unlabeled data, not a limited representational power in the shallow models). Because our goal is to train a shallow model of high accuracy, we needed to use a teacher model of maximum accuracy to help overcome this gap between the teacher and mimic not. If we had a large unlabeled data set for TIMIT this would not be necessary. The ensemble of CNNs is significantly more accurate than a single CNN, but we have not yet published that result. We modified the paper to make all of this clearer.\"}", "{\"title\": \"review of Do Deep Nets Really Need to be Deep?\", \"review\": [\"The authors show that a shallow neural net trained to mimic a deep net (regular or convolutional) can achieve the same performance as the deeper, more complex models on the TIMIT speech recognition task. They conclude that current learning algorithms are a better fit for deeper architectures and that shallow models can benefit from improved optimization techniques. The experimental results also show that shallow models are able to represent the same function as DNNs/CNNs. To my knowledge, training an SNN to mimic a DNN/CNN through model compression has not been explored before and the authors seem to be getting good results at least on the simple TIMIT task. It remains to be seen if their technique scales up to large vocabulary tasks such as Switchboard and Broadcast News transcription. This being said, a few critiques come to mind:\", \"The authors discuss factoring the weight matrix between input and hidden units and present it as being a novel idea. They should be aware of the following papers:\", \"T. N. Sainath, B. Kingsbury, V. Sindhwani, E. Arisoy and B. Ramabhadran, 'Low-Rank Matrix Factorization for Deep Neural Network Training with High-Dimensional Output Targets,' in Proc. ICASSP, May 2013.\", \"Jian Xue, Jinyu Li, Yifan Gong, 'Restructuring of Deep Neural Network Acoustic Models with Singular Value Decomposition', in Proc. Interspeech 2013.\", \"It is unclear whether the SNN-MIMIC models from Table 1 use any factoring of the weight matrix. If yes, what is k?\", \"It is unclear what targets were used to train the SNN-MIMIC models: DNN or CNN? I assume CNN but it would be good to specify.\", \"On page 2 the feature extraction for speech appears to be incomplete. Are the features logmel or MFCCs? 
In either case, the log operation appears to be missing.\", \"On page 2 you claim that Table 1 shows results for 'ECNN' which is undefined.\"]}", "{\"review\": \"Hey, cool Paper!\\nAfter reading through it carefully I however have one issue with it.\\nThe way you present your results in Table 1 seems a bit misleading to me. \\nOn first sight I presumed that the mimic network containing 12M parameters was trained to mimic the DNN of the same size while the large network with 140M connections was trained to mimic the CNN with 13M parameters (as is somewhat suggested in your comparison, i.e. by them achieving similar performance). \\n\\nHowever, as you state in your paper both networks are actually trained to mimic an ensemble of networks with size and performance unknown to the reader. In your response to the reviewers you mention that the mimic network always performs 2-3 % worse than the ensemble. This, to me, suggests that the ensemble performs considerably better than the best CNN you trained. Given that my interpretation is correct the performance of the ensemble should be mentioned in the text and it should be clarified in the table that the mimic networks are trained to mimic this ensemble. Furthermore, assuming a 2 percent gap between the ensemble and the mimic network it is possible that trianing i.e. a three layer network, containing the same number of parameters, could shorten this gap. That is, one could imagine a deeper mimic network to actually perform better than the shallow mimic network (as it is in not nearly perfectly mimicing the ensemble). I think this should be tested and reported alongside your results (if I read your comments to the reviewers correctly you have tried, and succeeded, to train deep networks with fewer parameters to mimic larger ones, strongly hinting that this might be a viable strategy for mimicing the ensemble).\"}", "{\"reply\": \"David, thank you for your comments. We submitted a revised draft on Jan 3 that addressed some of your concerns. We\\u2019re sorry you read the earlier, rougher draft.\\n\\nYou are correct that we are not able to train a shallow net to mimic the CNN model using a similar number of parameters as the CNN model, and the text has been edited to reflect this. We believe that if we had a large (> 100M) unlabelled data set drawn from the same distribution as TIMIT that we would be able to train a shallow model with less than ~15X as many parameters to mimic the CNN with high fidelity, but are unable to test that hypothesis on TIMIT and are now starting experiments on another problem where we will have access to virtually unlimited unlabelled data. But we agree that the number of parameters in the shallow model will not be as small as the number of parameters in the CNN because the weight sharing of the local receptive fields in the CNN allows it to accomplish more with a small number of weights than can be accomplished with one fully-connected hidden layer. Note that the primary argument in the paper, that it is possible to train a shallow neural net (SNN) to be as accurate as a deeper, fully-connected feedforward net (DNN), does not depend on being able to train an SNN to mimic a CNN with the same number of parameters as the CNN. We view the fact that a large SNN can mimic the CNN without the benefit of the convolutional architecture as an interesting, but secondary issue.\\n\\nThank you again for your comments. We agree with everything you said.\"}", "{\"reply\": \"Thank you for the comments. 
We completely agree that more results are needed to support the conclusions, and this is why we submitted an extended abstract instead of full paper. More experiments are underway, but we don't yet have final results to add to the abstract. Preliminary results suggest that on TIMIT the MIMIC models are not as accurate as the teacher models mainly because we do not have enough unlabeled TIMIT data to capture the function of the teacher models, as opposed to because the MIMIC models have too little capacity or cannot learn a complex function in one layer. Preliminary results also suggest that: 1) the key to making the shallow MIMIC model more accurate is to train it to be more similar to the deep teacher net, and 2) the MIMIC model is better able to learn to mimic the teacher model when trained on logit (the unnormalized log probabilities) than on the softmax outputs from the teacher net. The only reason for including the linear layer between the input and non-linear hidden layer is to make training of the shallow model faster, not to increase accuracy. Experiments suggest that for TIMIT there is little benefit from using more than 250 linear units.\\n\\nWe agree with papers such as Seide, Li, and Yu, that shallow nets perform worse than deep nets given the same # of parameters when trained with the current training algorithms. It is possible that, as Yoshua Bengio suggests, deep models provide a better prior than shallow models for complex learning problems. It is also possible that other training algorithms and regularization methods would allow shallow models to work as well. Or it may be a mix of the two. We believe the question of whether models must be deep to achieve extra accuracy is as yet open, and our experiments on TIMIT provide one data point that suggests it *might* be possible to train shallow models that are as accurate as deeper models on these problems.\\n\\nWe have tried using some of the MIMIC techniques to improve the accuracy of deep models. With the MIMIC techniques we have been able to train deep models with fewer parameters that are as accurate as deep models with more parameters (i.e., reduce the number of weights and number of layers needed in the deep models), but we have not been able to achieve significant increases in accuracy for the deep models. If compression is done well, the mimic model will be as accurate as the teacher model, but usually not more accurate, because the MIMIC process tries to duplicate the function (I/O behavior) learned by the teacher model in the smaller student model.\"}", "{\"review\": \"Interesting paper. My comments:\", \"abstract\": \"'Moreover, the shallow neural nets can learn these deep functions using a total number of parameters similar to the original deep model.' - this does not appear to be true for the CNN model.\\n\\n4. last sentence first paragraph is missing a 'to' at the end of the line 'models TO prevent overfitting'\\n\\n6. 'It is challenging to...' sentence needs work\\n\\n7. 'insertion penalty' and 'language model weighting' could use definitions or references. \\n\\nfigure 1 -> table 1\\n\\n7.1 The first claim (also made in the abstract) is not supported by the table for the SNN mimicking the CNN. It appears that ~15x as many parameters were needed to achieve the same level of performance. The last sentence of the first paragraph seems to acknowledge this...\\n\\nThe second paragraph should, I think, be clarified. How are you increasing performance of the deep networks? 
What experiments did you perform that lead to this conclusion?\\n\\n8. The last sentence does not seem supported to me. Your results as presented only achieve the same level of performance as previous results, and in order to achieve this level of performance, it would be necessary to use their training methods first so that your SNNs have something to mimic, correct?\"}", "{\"reply\": \"Yoshua, thank you for your comments. We believe you may have read an older draft and hope that most or all of the misleading statements were corrected in the Jan 3 draft. Nonetheless, many of your comments still apply to the current paper.\\n\\nWe completely agree that generality would be improved with results on additional datasets. We submitted a workshop abstract instead of full paper because we only had results for one data set, and are about to run experiments on two other datasets.\\n\\nWith TIMIT we did not use more training data to train the shallow models than was used to train the deep models. We used exactly the same 1.1M training cases used to train the DNN and CNN models to train the SNN mimic model. The only difference is that the mimic SNN does not see the original labels. Instead, it sees the real-valued probabilities predicted by the DNN or CNN it is trying to mimic. In general, model compression works best when a large unlabelled data set is available to be labeled by the \\u201csmart\\u201d model so that the smaller mimic model can be trained \\u201chard\\u201d with less chance of overfitting. But for TIMIT unlabelled data was not available so we used the same data used to train the deep models for compression (mimic) training. We believe that the fact that no extra data --- labeled or unlabelled --- was used to train the SNN models helps drive home the point that it may be possible to train shallow models to be as accurate as deep models.\\n\\nWe agree with your comment that \\u201cThe paper makes it sound as if we could find a better way to train shallow nets in order to get results as good as deep nets, as if it was just an optimization issue.\\u201d, except that we view it more perhaps as an issue of regularization than of just optimization. In particular, we agree that depth, when combined with current learning and regularization methods such as dropout, is providing a prior that aids generalization, but are not sure that a similar effect could not be achieved using a different learning algorithm and regularization scheme to train a shallow net on the original data. In some sense we\\u2019re making a black-box argument: we already have a procedure that given a training set, yields a shallow net that has accuracy comparable to a deep fully-connected feedforward net trained on the same data. If we hadn\\u2019t shown you what the learning algorithm was in our black box would you have been 100% sure that the wizard behind the curtain must have been deep learning? The real question is whether the black box *must* go through the intermediate step of training a deep model to mimic, or whether there exist other learning and regularization procedures that could achieve the same result without going through the deep intermediary. We do not (yet) know the answer to this question, but it is interesting that a shallow model can be trained that is as accurate as a deep model without access to any additional data. 
We certainly agree that it is difficult to train large, shallow nets on the original targets with the learning procedures currently available.\\n\\nWe agree that looking at training errors can be informative, but they might not resolve the issue in this case. If model compression has access to a very large unlabelled data set, if the mimic model has sufficient capacity to represent the deep model, the shallow model will learn to be a high-fidelity mimic of the deep model and will make the same predictions, and the error of the shallow mimic model and deep model on train and test data will be identical as the error of the mimic predictions compared to the deep model is driven to zero. This is for the ideal case where we have access to a very large unlabelled data set, which unfortunately we did not have for TIMIT. Exactly what training errors do you want to see: the error of the DNN on the original training data vs. the error of the SNN trained to mimic the DNN on the real-valued targets, but measured on the original labels of the training points, or vs. the error of an SNN trained on the original data and labels? Early stopping was used when training the deep models, but was not used when training the mimic SNN models. In fact we find it very difficult to make the SNN mimic model overfit when trained with L2 loss on continuous targets.\\n\\nThanks for the pointers to other papers we should have cited. We\\u2019re happy to add them to the abstract. And thanks again for the careful read of our abstract. Sorry you had to struggle through the 1st draft.\"}", "{\"review\": \"This paper asks interesting questions and has interesting experimental results. The generality of the results could be improved by considering more than one dataset, though.\\n\\nYou might want to first fix a typo in Rich's name...\\n\\nI concur with David Krueger regarding the somewhat misleading statements in the abstract and introduction etc regarding the matching of depth with width (and a LOT more training examples), which does not apply in the case of a convolutional net. This really needs to be fixed.\\n\\nMy take on the results is however quite different from the conclusions given in the paper. The paper makes it sound as if we could find a better way to train shallow nets in order to get results as good as deep nets, as if it was just an optimization issue. My interpretation is quite different. The results seem more consistent with the interpretation that the depth (and convolutions) provide a PRIOR that helps GENERALIZING better. This is consistent with the fact that a much wider network is necessary in the convolutional case, and that in both cases you need to complement the shallow net's training set with the fake/mimic examples (derived from observing the outputs of the deep net on unlabeled examples) in order to match the performance of a deep net. I believe that my hypothesis could be disentangled from the one stated in the paper (which seems to say that it is a training or optimization issue) by looking at training error. According to my hypothesis, the shallow net's training error (without the added fake / mimic examples) should not be significantly worse than that of the deep net (at comparable number of parameters). According to the 'training' hypothesis that the authors seem to state, one would expect training error to be measurably lower for deep nets. 
In fact, for other reasons I would expect the deep net's training error to be worse (this would be consistent with previous results, starting with my paper with Dumitru Erhan et al in JMLR in 2010). \\n\\nIt would be great to report those training errors. Note that to be fair, you have to report training error with no early stopping, continuing training for a fixed and large number of epochs (the same in both cases) with the best learning rate you could find (separately for each type of network).\\n\\nFinally, the fact that even shallow nets (especially wide ones) can be hard to train (see Yann Dauphin's ICLR 2013 workshop-track paper) also weakens the hope that we could get around the difficulty of training deep nets by better training shallow nets.\\n\\nSeveral more papers need to be cited and discussed. Besides my JMLR 2010 paper with Dumitru Erhan et al (Why Does Unsupervised Pre-training Help Deep Learning), another good datapoint regarding the questions raised here is the paper on Understanding Deep Architectures using a Recursive Convolutional Network, by Eigen, Rolfe & LeCun, submitted to this ICLR 2014 conference. Whereas my JMLR paper is about understanding the advantages of depth as a regularizer, this more recent paper tries to tease apart various architectural factors (including depth) influencing performance, especially for convolutional nets.\"}", "{\"title\": \"review of Do Deep Nets Really Need to be Deep?\", \"review\": \"An interesting workshop paper. For such a provocative title, more results are needed to support the conclusions. Part of the resurgent success of neural networks for acoustic modeling is due to making the networks \\u201cdeeper\\u201d with many hidden layers (see F. Seide, G. Li, and D. Yu, 'Conversational Speech Transcription Using Context-Dependent Deep Neural Networks', ICASSP 2011 which shows that shallow networks perform worse than deep for the same # of parameters). This paper provides a different data point where a shallow network is trained using the author\\u2019s \\u201cMIMIC\\u201d technique performs as well as a deep network baseline on the TIMIT phone recognition task. The MIMIC technique involves using unsupervised soft labels from an ensemble of deep nets of unknown size and quality, including a linear layer of unknown size, and training on the un-normalized log prob rather than softmax output. The impact of each of these aspects on their own is not investigated; perhaps a deep neural network would gain from some or all of these MIMIC training steps as well.\"}" ] }
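As a reading aid for the model-compression discussion in the thread above, here is a minimal sketch of mimic training: a shallow student with a linear bottleneck layer is regressed onto a teacher's unnormalized logits with an L2 loss (no hard labels, no softmax). The array shapes, the learning rate, and the random stand-in for the teacher logits are assumptions for illustration only; this is not the authors' implementation.

```python
import numpy as np

# Mimic-training sketch: fit a shallow student (linear bottleneck -> ReLU hidden
# -> output) to a teacher's unnormalized logits with an L2 loss.
# All hyperparameters and the random "teacher_logits" are illustrative.
rng = np.random.default_rng(0)
n, d, k_lin, h, k = 512, 40, 25, 200, 183      # examples, input dim, linear units, hidden, classes
X = rng.standard_normal((n, d))
teacher_logits = rng.standard_normal((n, k))   # would come from the deep / ensemble teacher

U = rng.standard_normal((d, k_lin)) * 0.1      # linear bottleneck (factorized input weights)
V = rng.standard_normal((k_lin, h)) * 0.1      # non-linear hidden layer, trained jointly with U
W = rng.standard_normal((h, k)) * 0.1
lr = 1e-2

for step in range(100):
    Lin = X @ U                                # linear layer
    A = Lin @ V
    Hid = np.maximum(A, 0.0)                   # ReLU stands in for the actual non-linearity
    Y = Hid @ W                                # student logits
    diff = (Y - teacher_logits) / n            # gradient of 0.5 * mean squared error
    gW = Hid.T @ diff                          # backprop through the three matrices
    gA = (diff @ W.T) * (A > 0)
    gV = Lin.T @ gA
    gU = X.T @ (gA @ V.T)
    U -= lr * gU
    V -= lr * gV
    W -= lr * gW
```

The only role of the linear bottleneck, as stated in the replies, is to speed up training of the wide shallow layer; the regression target is the teacher's logits rather than its softmax outputs.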
-j1Hj5YWwrj_f
Generic Deep Networks with Wavelet Scattering
[ "Edouard Oyallon", "Stéphane Mallat", "Laurent Sifre" ]
We introduce a two-layer wavelet scattering network, which involves no learning, for object classification. This scattering transform computes a spatial wavelet transform on the first layer and a joint wavelet transform along spatial, angular and scale variables in the second layer. Image classification results are given on Caltech databases.
[ "wavelet", "generic deep networks", "network", "learning", "object classification", "transform", "spatial wavelet transform", "first layer", "joint wavelet transform", "spatial" ]
submitted, no decision
https://openreview.net/pdf?id=-j1Hj5YWwrj_f
https://openreview.net/forum?id=-j1Hj5YWwrj_f
ICLR.cc/2014/workshop
2014
{ "note_id": [ "IIb_NA8kBPNo2", "44UMwEGyamZ3Q", "JzqmzHoBmuCj_", "3fuqf0Yvez3Ry" ], "note_type": [ "review", "review", "comment", "comment" ], "note_created": [ 1391404140000, 1391907000000, 1392860400000, 1392824580000 ], "note_signatures": [ [ "anonymous reviewer 6006" ], [ "anonymous reviewer 06bb" ], [ "Edouard Oyallon" ], [ "Edouard Oyallon" ] ], "structured_content_str": [ "{\"title\": \"review of Generic Deep Networks with Wavelet Scattering\", \"review\": \"* A brief summary of the paper's contributions, in the context of prior work.\\nPaper describes experiment of feature extraction with wavelets applied to CALTACH dataset.\\n\\n* An assessment of novelty and quality. \\nThis is just a single numerical result. It gives some insight, but there is no novelty.\", \"pros\": \"It is good that someone have done experiment on how powerful are wavelets on bigger dataset.\", \"cons\": \"Very little of added value, small amount of content.\"}", "{\"title\": \"review of Generic Deep Networks with Wavelet Scattering\", \"review\": \"I am not very familiar with scattering transform work so I cannot judge of the novelty of using 2 layers of wavelet transforms for classification. However the results are impressive in that they do not use any learning and still beat the best ImageNet-pretrained convolutional network on Caltech 101 when using 1 or 2 layers. It does not however on Caltech 256 and some insight into why that might be would have been nice.\", \"pros\": [\"good results with small number of layers\"], \"cons\": [\"no experiment with more layers, does it degrade drastically beyond 2 layers?\"]}", "{\"reply\": \"Dear reviewer,\\n\\nWe would like to thank you for your helpful comments. \\n\\nTwo layers wavelets transforms had been previously used for MNIST digit and texture classification but never over complex data bases such as CalTech. Concerning the compared performance on CalTech 101 and CalTech 256, we are currently redoing the experiments on Caltech 256 to understand the loss of performance. It looks like it is due to a choice of wavelets which were Haar wavelets along rotations, and seems to have impaired performances. \\n\\nUntil now, we observed that using a third layer with wavelet convolutions does not improve results relatively to two layers (but it does not degrade it either). We believe that beyound the second layer, we will need to learn the deep network filters, but this still needs to be fully checked. These comments are added in the abstract.\"}", "{\"reply\": \"Dear reviewer,\\n\\nWe would like to thank you for your review and will try to clarify some points.\\n\\nCurrently deep networks are very powerful but there is a lack of understanding of the type of processing it implements. The novelty of the paper is to show that to reach 67% there is no need to learn the deep network weights which can be taken to be wavelets along spatial, rotation, and scaling variables. It is the first time that such results hold for a relatively complex image data basis. All filters are separable and thus have a fast implementation. Learning can then be applied to improve\\nthese predefined filters or to add more layers to the network.\\n\\nUnderstanding the processing of deep networks and how to simplify it is currently a very important challenge. It was a surprise for\\nus to see that even complex image data bases such as CalTech can be tackled with predefined wavelet filters, and matching the\\nperformance of the double layer ImageNet filters on CalTech is not an easy task. 
We believe that the results open the possibility to further important simplifications of these networks and their filters.\"}" ] }
AKIW-22FWrKkR
End-to-End Text Recognition with Hybrid HMM Maxout Models
[ "Ouais Alsharif", "Joelle Pineau" ]
The problem of detecting and recognizing text in natural scenes has proved to be more challenging than its counterpart in documents, with most of the previous work focusing on a single part of the problem. In this work, we propose new solutions to the character and word recognition problems and then show how to combine these solutions in an end-to-end text-recognition system. We do so by leveraging the recently introduced Maxout networks along with hybrid HMM models that have proven useful for voice recognition. Using these elements, we build a tunable and highly accurate recognition system that beats state-of-the-art results on all the sub-problems for both the ICDAR 2003 and SVT benchmark datasets.
[ "text recognition", "hybrid hmm", "models", "problem", "detecting", "text", "natural scenes", "challenging", "counterpart", "documents" ]
submitted, no decision
https://openreview.net/pdf?id=AKIW-22FWrKkR
https://openreview.net/forum?id=AKIW-22FWrKkR
ICLR.cc/2014/workshop
2014
{ "note_id": [ "zfs12lBVTLwEF", "nn0rTlhW6_n3U", "9Y8Z9zerZ-9rV", "mi6H4jocJNDxd", "oR0WwODNrPSba" ], "note_type": [ "review", "comment", "review", "review", "comment" ], "note_created": [ 1391539560000, 1391807820000, 1391808600000, 1391663100000, 1393982100000 ], "note_signatures": [ [ "anonymous reviewer f488" ], [ "Ouais Alsharif" ], [ "Ouais Alsharif" ], [ "anonymous reviewer de8c" ], [ "anonymous reviewer de8c" ] ], "structured_content_str": [ "{\"title\": \"review of End-to-End Text Recognition with Hybrid HMM Maxout Models\", \"review\": \"The authors present a complete hybrid system for recognition characters and words from a real world natural scenes. The clue idea is to cede word-to character + character classification and segmentations correction into three convolutional neural networks with maxout non-linearity. Word-to character model makes and additional use of HMM to better deal with sequential aspect of character segmentations across the words.\\n\\nI think the community can benefit from this work as it shows some interesting experiments, and what is even more important, does it in the context of the complete system. Experimental aspect is sufficient and explore various sub-problems the potential Reader could be interested in, for example, the use and impact of lexicon and different language models on the final system accuracy. It's also important - the presented solution gives a state of the art accuracy\\n\\nTo summarize, I think the authors put a lot of effort into this work and present some nice experimental results.\", \"suggestions_for_improvements\": [\"I would appreciate an implicit distinction between likelihoods and probabilities, for example, by p and P respectively. That applies to all paper content starting from the equation 1 and including occasional in-text references to certain probabilistic quantities.\", \"(thing to consider) Section 3.4 argmax_Qp(Q|O) - isn't HMM producing argmax_Q p(O|Q) - i.e. the likelihood of state sequence Q producing the observation sequence O? If you agree with this comment, you need also to fix it in the last paragraph of sec 5.1.\"]}", "{\"reply\": \"Thank you for these comments :)\\n\\n- I think the first point is a valid point. We should factor this in.\\n- As for the second point, the HMM actually produces argmax_Q(p(Q|O). through the Viterbi algorithm. This is more elucidated in equation 30 in Rabiner's tutorial: http://www.cs.ubc.ca/~murphyk/Bayes/rabiner.pdf\\n\\nThank you\"}", "{\"review\": \"Thank you for the comments. :)\\n\\nYour comment regarding beam search is correct, this is (almost) a standard beam search. However, we wanted to make it clear that the search is on the cascade. \\n\\nRegarding the Q_i. You are correct, we do not define it clearly, and like you inferred, it is just a priority queue. s_i and v_i are the same thing. We will change this such that they are a single variable.\\n\\nThe 55.6% figure is on the ICDAR dataset. Using a language model and a lexicon does not improve the results beyond a language model. The reason for that is because the language model biases the results of the beam search heavily. When the minimal edit distance is reached for more than one word, we broke ties lexicographically. I think a small interesting area would be to investigate how other kinds of edit distance would help in correcting misspelled words, as this particular point seems to be able to lift test accuracy 2-3% points.\\n\\nCould you please clarify the last comment? 
I'm not sure how using a hash table would improve the accuracy in such a scenario.\", \"thank_you_for_these_comments\": \") You've been most helpful.\"}", "{\"title\": \"review of End-to-End Text Recognition with Hybrid HMM Maxout Models\", \"review\": \"This paper presents a system for text recognition from natural images that leverages recent advances in deep learning. Contrarily to previous methods that often focus on a single aspect, this work addresses all simpler sub-problems and incorporates classifiers into different sub-modules of the whole system. The resulting method achieves impressive results and seems computationally efficient. The paper is very well written, comprehensive and does a good job at condensing a lot of information into 9 pages.\", \"a_minor_weakness_concerns_the_novelty_aspect\": \"the paper mostly reuses existing algorithms, such as convolutional maxout networks, hybrid HMMs, beam search, MSER. What the authors call 'Cascade Beam Search' is in fact ordinary beam search. However, the pipeline is novel and produces good results and insightful discussion, especially about the trade-offs involved.\\n\\n\\n- Section 5.3 (especially Algorithm 1): I found the notation confusing. Can you define Q_i in words? are those implemented with priority queues?\\nWhat is the difference between intervals s_i and v_i?\\n\\n- Section 5.4: The 55.6% figure is obtained for which dataset? Why not use both a lexicon and a language model?\\nWhat happens when the minimal edit distance is reached for more than one word?\\n\\n- The authors' main movitation for a language model is to achieve 'constant time in lexicon size per query', while the edit distance technique is slower but apparently has a better accuracy. Would it be possible to use hash tables to improve accuracy whenever the minimal edit distance is zero?\"}", "{\"reply\": \"It would make more sense to me to select the most likely word that reaches the minimal edit distance, instead of breaking ties lexicographically.\\n\\nConcerning the last comment, you could use a combination of the two modes by first searching for the most likely word present in the lexicon, and reverting to the language model method in case none of the words were matched. This would allow you to stay within the constant time in lexicon size constraint because merely checking if a word is present in the lexicon is very fast using a hash table.\\n\\nOverall, those are minor points and will not necessarily improve accuracy, but I was curious as to why those strategies were not used.\"}" ] }
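On the Viterbi point raised in the review exchange above: for a fixed observation sequence O, the Viterbi algorithm maximizes the joint p(Q, O | λ), and since p(O | λ) does not depend on Q this yields the same state sequence as argmax_Q p(Q | O, λ). Below is a minimal sketch of the recursion, with the log-emission scores standing in for per-frame character-classifier outputs; the toy numbers are placeholders, not the paper's model.

```python
import numpy as np

def viterbi(log_emit, log_trans, log_prior):
    """Most likely state path: argmax_Q p(Q, O) (= argmax_Q p(Q | O) for fixed O).

    log_emit : (T, S) per-frame log scores, e.g. from a character classifier
    log_trans: (S, S) log transition probabilities of the HMM
    log_prior: (S,)   log initial state probabilities
    """
    T, S = log_emit.shape
    delta = log_prior + log_emit[0]            # best log score ending in each state
    back = np.zeros((T, S), dtype=int)         # backpointers
    for t in range(1, T):
        scores = delta[:, None] + log_trans    # (previous state, next state)
        back[t] = np.argmax(scores, axis=0)
        delta = scores[back[t], np.arange(S)] + log_emit[t]
    path = [int(np.argmax(delta))]
    for t in range(T - 1, 0, -1):              # trace the backpointers
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy usage with made-up numbers (3 frames, 2 states).
log_emit = np.log(np.array([[0.7, 0.3], [0.4, 0.6], [0.2, 0.8]]))
log_trans = np.log(np.array([[0.9, 0.1], [0.2, 0.8]]))
log_prior = np.log(np.array([0.5, 0.5]))
print(viterbi(log_emit, log_trans, log_prior))   # -> [1, 1, 1]
```

The beam search over the cascade described in the paper replaces this exact maximization with a pruned search, which is where the lexicon or language-model scores enter.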
Bg3GB1suG0qx6
Learning Information Spread in Content Networks
[ "Cédric Lagnier", "Ludovic Denoyer", "Sylvain Lamprier", "Simon Bourigault", "patrick gallinari" ]
We introduce a model for predicting the diffusion of content information on social media. While propagation is usually modeled on discrete graph structures, we introduce here a continuous diffusion model, where nodes in a diffusion cascade are projected onto a latent space with the property that their proximity in this space reflects the temporal diffusion process. We focus on the task of predicting contaminated users for an initial information source and provide preliminary results on different datasets.
[ "information spread", "content networks", "model", "diffusion", "content information", "social media", "propagation", "discrete graph structures", "continuous diffusion model", "nodes" ]
submitted, no decision
https://openreview.net/pdf?id=Bg3GB1suG0qx6
https://openreview.net/forum?id=Bg3GB1suG0qx6
ICLR.cc/2014/workshop
2014
{ "note_id": [ "bG6SG57HSkEHC", "F0CYlZ-5YhF54", "PkdaYJEeb49CF", "FQHhO9DehTO2w" ], "note_type": [ "review", "review", "review", "comment" ], "note_created": [ 1391090100000, 1392193800000, 1392249960000, 1391526360000 ], "note_signatures": [ [ "anonymous reviewer 9e53" ], [ "anonymous reviewer 5bc5" ], [ "Ludovic Denoyer" ], [ "Ludovic Denoyer" ] ], "structured_content_str": [ "{\"title\": \"review of Learning Information Spread in Content Networks\", \"review\": \"Learning Information Spread\\n\\nThe ms considers the interesting question of diffusion and information spreading in content networks. The modeling is performed by a diffusion kernel with ranking and classification constraints; having both is an innovation. \\n\\nWhile the paper is generally clear, I miss details on how this is exactly done and how this is optimized. The optimization problem seems nontrivial, so it would be nice to know the computational effort.\\nWhile first experiments show encouraging results, it is unclear whether in a simple toy problem the proposed estimation algorithm will find the ground truth consistently. \\nFurthermore no comparison to other models that consider information spread are given, neither in speed/scaling nor accuracy/severity of errors.\\nConcluding, the paper is interesting but somewhat preliminary; it consists in a small increment over Bourigault et al. 2014 and lacks many details.\"}", "{\"title\": \"review of Learning Information Spread in Content Networks\", \"review\": \"This work proposed an extension of the content diffusion kernel model by adding an additional classification constraint.\\nMy main concern of the current version of this work is whether the classification constraint is proper for this task (modeling the spread of information in social network): \\nThe information spread could not only depended on the proximity of users but also on the content of information. In other words, one specific user could not be contaminated by the source in one cascade but rather in other cascades. If that's the case, the classification constraints will not be satisfied and the learning cannot converge;\\nThis classification constraint could also undermine the performance of ranking. It is indeed shown in Table 1. \\n\\nPros\\n-- well-written and organized \\nCons\\n--the proposed classification constraint might not be suitable for modeling the spread of information in social networks.\"}", "{\"review\": \"Dear reviewer,\\n\\n Thank you for the time spent for this review.\\n\\nForgetting the content when modeling information propagation is a classical assumption made in the literrature. Moreover some datasets do not contain content information, or this information is so noisy that it cannot be used for prediction. But you are right, content is important.\\n\\nActually, the paper is a short paper proposing an extension of a previoulsy published model which is able to consider the content information. Due to the limited size of the short papers, we have decided to focus on the 'without content' version of our previous model which obtains reasonnably good performance in comparison to the 'with content' version. 
But using content information in the proposed approach is not so complicated.\\n\\nTo follow your remark, we will add a small paragraph in the current paper explaining how content information can be taken into account.\\n\\n Thank you\"}", "{\"reply\": \"Dear reviewer,\\n\\n Thank you for your comments.\\n\\nThe lack of details was mainly due to the official paper size limitation (3 pages), but submitting longer papers seem to be possible.\", \"we_have_posted_a_new_version_of_the_paper_on_arixv_that_contains\": [\"the pseudo-code of the SGD algorithm\", \"A paragraph concerning the learning and inference complexity of the model, showing its efficiency w.r.t existing discrete approaches\", \"A comparison with state of the art baselines in the experimental section\", \"Ludovic\"]}" ] }
wJuDwQ-d3XJMj
Unsupervised feature learning by augmenting single images
[ "Alexey Dosovitskiy", "Jost Tobias Springenberg", "Thomas Brox" ]
When deep learning is applied to visual object recognition, data augmentation is often used to generate additional training data without extra labeling cost. It helps to reduce overfitting and increase the performance of the algorithm. In this paper we investigate if it is possible to use data augmentation as the main component of an unsupervised feature learning architecture. To that end we sample a set of random image patches and declare each of them to be a separate single-image surrogate class. We then extend these trivial one-element classes by applying a variety of transformations to the initial 'seed' patches. Finally we train a convolutional neural network to discriminate between these surrogate classes. The feature representation learned by the network can then be used in various vision tasks. We find that this simple feature learning algorithm is surprisingly successful, achieving competitive classification results on several popular vision datasets (STL-10, CIFAR-10, Caltech-101).
[ "data augmentation", "algorithm", "unsupervised feature learning", "single images", "deep learning", "visual object recognition", "additional training data", "extra labeling cost", "performance" ]
submitted, no decision
https://openreview.net/pdf?id=wJuDwQ-d3XJMj
https://openreview.net/forum?id=wJuDwQ-d3XJMj
ICLR.cc/2014/workshop
2014
{ "note_id": [ "fiPAUv7VOTUR5", "TN0nh4UfnshfF", "JJNs1ddmaWfyT", "stMdtDjJlgWU8" ], "note_type": [ "review", "review", "review", "review" ], "note_created": [ 1392727800000, 1392237300000, 1391729340000, 1391695860000 ], "note_signatures": [ [ "Alexey Dosovitskiy" ], [ "Alexey Dosovitskiy" ], [ "anonymous reviewer 536d" ], [ "anonymous reviewer 672b" ] ], "structured_content_str": [ "{\"review\": [\"An updated version of the paper is now available on arXiv. Main changes are:\", \"extended related work, including brief discussion of connection to metric learning\", \"an experiment on classification with random filters: plot in fig. 4, description in the beginning of section 3.2\"]}", "{\"review\": \"We thank the reviewers for the positive feedback and useful comments.\\n\\nIt is certainly true that the paper could include more experiments and comparisons (as both reviewers point out), that is why we submitted it as a short workshop paper. We will include more experiments in follow-up versions of the paper.\\n\\nReviewer 1 (Anonymous 672b) points out that we do not discuss complexity issues and details of the algorithm such as used transformations and the effect of dropout. The complexity is the same as for training convolutional neural networks: in our experiments training usually takes 0.5 to 3 days, depending on the size of the network. We discuss details of the applied transformations in section 2.1. Using dropout is nowadays a standard practice for training deep convolutional neural networks and studies have been published by others, hence we do not analyze its effect in detail (however, preliminary experiments show that the benefits are quite large).\\n\\nReviewer 2 (Anonymous 536d) points out that we do not discuss the connection to metric learning approaches and that our approach is very much similar to those. We thank the reviewer for pointing out this connection and will include a corresponding remark in the paper. \\n\\nHowever, we do not agree that our approach is very similar to the one proposed in [1]. First of all, their algorithm uses label information, while ours does not. Secondly, even if we applied the algorithm from [1] to our surrogate clusters, our discriminative objective is different from the 'spring system' objective in [1]. Our objective yields features which perform well in classification without need to specify parameters such as functions that represent 'attractive' and 'repulsive' forces, as in [1]. Finally, the quality of the features learned by our algorithm is demonstrated by good classification results. On the other hand, the paper [1] does not show any. \\n\\nWe will upload a newer version of the paper, modified according to some of the remarks, later this week.\\n\\nBest regards,\\nAlexey Dosovitskiy, Jost Tobias Springenberg, Thomas Brox\\n\\n-----------\", \"references\": \"[1] Raia Hadsell, Sumit Chopra and Yann LeCun: Dimensionality Reduction by Learning an Invariant Mapping, Proc. 
Computer Vision and Pattern Recognition Conference (CVPR'06), IEEE Press, 2006\"}", "{\"title\": \"review of Unsupervised feature learning by augmenting single images\", \"review\": \"This paper proposes to reduce the unsupervised feature learning problem to a classification problem by: a) sampling patches at random from (unlabeled) images (in the order of several thousands) and b) creating surrogate classification tasks by considering each patch as a class and by generating several other samples by applying transformations (e.g., translation, rotation, scaling, etc.).\\nThe features trained in this manner are used as patch descriptors for to classify images in Caltech 101, CIFAR and STL-10 datasets. The method compares well with other feature learning methods.\\n\\nThe paper reads well and has a clear narrative. The reduction from unsupervised to supervised learning is presented in an intriguing way. On the other hand, this method seems closely related to work in metric learning and it would be nice to have an explicit discussion about this.\\n\\nPros\\n+ clearly written\\n+ simple idea\\n+ empirical analysis demonstrates good results\\n\\nCons\\n- some baseline experiments are missing, namely\\n - compare to random filters (i.e., what\\u2019s the role played by the architecture used)\\n - it would be nice to see a comparison of the accuracy after fine-tuning the whole system\\n- prior reference to work in metric learning (neighborhood component analysis, DrLIM style) is not mentioned. One can cast a similar learning problem: making the features of patches belonging to the same \\u201cclass\\u201d be similar, and making the features of patches belonging to different \\u201cclasses\\u201d be as far as possible. I believe that by using a ranking loss on such triplets would yield similar results. Under this view, the paper would become very much similar to:\\nRaia Hadsell, Sumit Chopra and Yann LeCun: Dimensionality Reduction by Learning an Invariant Mapping, Proc. Computer Vision and Pattern Recognition Conference (CVPR'06), IEEE Press, 2006\\nexcept that the generation of similar and different patches is produced by transformation known in advance. One advantage of these metric learning approaches is that they naturally scale to an \\u201cinfinite\\u201d number of \\u201cclasses\\u201d.\", \"minor_details\": [\"the schedule on the number of classes seems rather hacky\", \"the overfitting hypothesis in sec. 3.2 could be easily be tested.\"]}", "{\"title\": \"review of Unsupervised feature learning by augmenting single images\", \"review\": \"The paper presents an approach for learning the filters of a convolutional NN, for an image classification task, without making use of target labels. The algorithm proceeds in two steps: learning a transformation of the original image and then learning a classifier using this new representation. For the first step, patches are sampled from an image collection, each patch will then correspond to a surrogate class and a classifier will be trained to associate transformed versions of the patches to the corresponding class labels using a convolutional net. In a second step, this net is replicated on whole images leading to a transformed representation of the original image. A linear classifier is then trained using this representation as input and the target labels relative to the image collection. 
Experiments are performed on different image collections and a comparison with several baselines is provided.\\nThis paper introduces a simple idea for feature learning which seems to work relatively well. The paper could be easily improved or extended in several ways. A natural extension would be to tune the learned filters using the target labels, which would allow a comparison with state of the art supervised techniques. This method might be less expensive for training than some of the alternatives, but the complexity issues are not discussed at all. The choices made for the convolutional net produce very dense codes. This could be discussed and a comparison with alternatives, e.g. larger filter size could be provided.\\nAlso there could be more practical details like what are the combinations of transformations used for the patches, what is the increase provided by the dropout etc.\"}" ] }
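For readers skimming the record above, here is a minimal sketch of the surrogate-class construction the paper and reviews describe: each randomly sampled "seed" patch becomes its own class, populated with transformed copies of itself. Only translations and horizontal flips are used here for brevity (the paper's augmentation set also includes scaling, rotation and color changes), and the patch size, jitter range and random input image are placeholder assumptions.

```python
import numpy as np

# Surrogate-class sketch: one seed patch = one class, filled with jittered copies.
rng = np.random.default_rng(0)

def surrogate_class(image, patch=32, copies=64):
    """Sample one seed patch and return `copies` translated / flipped versions of it."""
    H, W = image.shape[:2]
    margin = 4                                   # room for +/- 4 px translations
    y = rng.integers(margin, H - patch - margin)
    x = rng.integers(margin, W - patch - margin)
    out = []
    for _ in range(copies):
        dy, dx = rng.integers(-margin, margin + 1, size=2)
        crop = image[y + dy:y + dy + patch, x + dx:x + dx + patch]
        if rng.random() < 0.5:
            crop = crop[:, ::-1]                 # horizontal flip
        out.append(crop)
    return np.stack(out)

# One unlabeled image yields one surrogate class; repeating this for thousands of
# seed patches gives the training set on which the convolutional network is trained.
unlabeled = rng.random((96, 96, 3))              # placeholder image
examples = surrogate_class(unlabeled)            # shape (64, 32, 32, 3)
```

A convolutional network trained to discriminate these surrogate classes then supplies the feature representation that the paper evaluates on STL-10, CIFAR-10 and Caltech-101.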
cO4ycnpqxKcS9
Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps
[ "Karen Simonyan", "Andrea Vedaldi", "Andrew Zisserman" ]
This paper addresses the visualisation of image classification models, learnt using deep Convolutional Networks (ConvNets). We consider two visualisation techniques, based on computing the gradient of the class score with respect to the input image. The first one generates an image, which maximises the class score [Erhan et al., 2009], thus visualising the notion of the class, captured by a ConvNet. The second technique computes a class saliency map, specific to a given image and class. We show that such maps can be employed for weakly supervised object segmentation using classification ConvNets. Finally, we establish the connection between the gradient-based ConvNet visualisation methods and deconvolutional networks [Zeiler et al., 2013].
[ "image classification models", "convolutional networks", "class score", "image", "class", "deep", "saliency maps deep", "saliency maps", "visualisation", "learnt" ]
submitted, no decision
https://openreview.net/pdf?id=cO4ycnpqxKcS9
https://openreview.net/forum?id=cO4ycnpqxKcS9
ICLR.cc/2014/workshop
2014
{ "note_id": [ "jrhLjNJlLRr45", "jKcYnfEAY_jL-", "XDxVTYb9VtT9N" ], "note_type": [ "review", "review", "review" ], "note_created": [ 1392699660000, 1391887080000, 1390788960000 ], "note_signatures": [ [ "Karen Simonyan" ], [ "anonymous reviewer 9e94" ], [ "anonymous reviewer e565" ] ], "structured_content_str": [ "{\"review\": \"We thank the reviewers for their positive feedback.\", \"r1\": \"I think the saliency map method is quite interesting. In particular, the fact that it can be leveraged to obtain a decent object localizer, which is only partially supervised, seems impressive.\", \"r2\": \"I found the weakly supervised object localization application the most impressive part of the paper.\\n\\nWe also feel that this is one of the most interesting aspects of our contribution. In particular, we were also impressed by the ability of the network to learn object segmentation in a *weakly supervised* setting (and produce object localisations, competitive with strongly supervised conventional object detectors). We believe that our image specific class saliency maps can be used in applications beyond GraphCut segmentation initialisation, and we plan to address them in future work.\"}", "{\"title\": \"review of Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps\", \"review\": \"Deep convolutional neural networks (convnets) have achieved tremendous success lately in large-scale visual recognition. Their popularity has exploded after winning recent high-profile competitions. As more research groups begin to experiment with convnets, there is increasing interest into what is happening *inside* the convnet. A few papers have been published within the last year providing ways of visualizing what units inside the convnet represent, and also visualizing the spatial support of a particular class. This paper presents two methods for visualization of convnets: one, based on an approach by Erhan, simply backpropagates the gradient of the class score with respect to the image pixels to generate class appearance models. The other method, class-saliency maps, visualize spatial support for a class and are also used to perform weakly supervised object localization.\\n\\nThe authors are very clear about their contributions, mainly in producing understandable visualizations. In terms of novelty, the method by which one obtains the class appearance models have been used in the unsupervised learning context by Erhan et al., but this paper is the first to apply the technique to convnets. The method by which to obtain the class saliency maps is intuitive and produces reasonable visualizations. I found the weakly supervised object localization application the most impressive part of the paper. Although it does not perform nearly as well as methods that consider localization part of training, it's promising to see how localization can be learned without bounding boxes.\", \"pros\": \"* Clear, simple\\n* Provides a useful, practical tool for convnet practitioners\\n* The discussion re: Zeiler and Fergus' Deconvolutional net method clears up any misunderstanding among the two methods which do seem pretty similar\\n* Evaluated on large-scale data (ILSVRC-2013)\\n\\nCons\\n* Though the similarity to the Deconvolutional net method is acknowledged, the technical contribution of this work is not a massive departure from the other work (though suitable for workshop track)\\n\\nOverall, I think this is a good workshop paper. 
The methodology does not depart far from previous work but the weakly supervised localization is interesting and will generate interest at the conference.\\n\\nComments\\n========\\n\\nFurther insight/discussion on the implication of the change in treatment between the Deconvolutional net method and the proposed method is suggested.\"}", "{\"title\": \"review of Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps\", \"review\": \"This paper presents methods for visualizing the behaviour of an object recognition convolutional neural network. The first method generates a 'canonical image' for a given class that the network can recognize. The second generates a saliency map for a given input image and specified class, that illustrates the part of the image (pixels) that influence the most the given class's output probability. This can be used to seed a graphcut segmentation and localize objects of that class in the input image. Finally, a connection between the saliency map method and the work of Zeiler and Fergus on using deconvolutions to visualize deep networks is established.\\n\\nWhile I'm not impressed by the per-class canonical image generation (which isn't very original anyways, given the work of Erhan et al.), I think the saliency map method is quite interesting. In particular, the fact that it can be leveraged to obtain a decent object localizer, which is only partially supervised, seems impressive. This is probably the most interesting part of the paper. As for the connection with deconvolution, I think it's also a nice observation.\\n\\nAs for the cons of this paper, they are those you expect from a workshop paper, i.e. the experimental work could be stronger. Specifically, I feel like there is a lack of quantitative comparisons. I wonder whether other alternatives to the graphcut initialization could have served as baselines with which to compare quantitatively (but this isn't my expertise, so perhaps there aren't...). The fact that one of their previous systems (which was fully supervised for localization) actually performs worse than this partially supervised system is certainly impressive however!\"}" ] }
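To make the saliency-map idea in the record above concrete, here is a minimal sketch: with the trained network fixed, backpropagate the unnormalized score of a chosen class to the input pixels and take the magnitude of that gradient as per-pixel saliency. A tiny fully connected ReLU model with random weights stands in for the ConvNet, and all sizes are illustrative; for RGB inputs one would additionally take the maximum over color channels, as the paper does.

```python
import numpy as np

# Gradient-based class saliency sketch: d(score_c)/d(input), taken in magnitude.
rng = np.random.default_rng(0)
H = W = 8
d, h, k = H * W, 32, 10                  # pixels, hidden units, classes (illustrative)
W1 = rng.standard_normal((d, h)) * 0.1   # random weights stand in for a trained ConvNet
W2 = rng.standard_normal((h, k)) * 0.1

x = rng.random(d)                        # flattened grayscale input image
a = x @ W1
hid = np.maximum(a, 0.0)                 # ReLU hidden layer
scores = hid @ W2                        # unnormalized class scores
c = int(np.argmax(scores))               # class whose support we visualize

# Backpropagate the class-c score to the input (manual chain rule).
g_hid = W2[:, c]
g_a = g_hid * (a > 0)
g_x = W1 @ g_a                           # d(score_c)/dx

saliency = np.abs(g_x).reshape(H, W)     # per-pixel importance map
```

Thresholding such a map is what seeds the GraphCut segmentation used for the weakly supervised localization experiments discussed above.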
0yNguO_G2aycf
Competitive Learning with Feedforward Supervisory Signal for Pre-trained Multilayered Networks
[ "Takashi Shinozaki", "Yasushi Naruse" ]
We propose a novel learning method for multilayered neural networks which uses a feedforward supervisory signal and associates the classification of a new input with that of a pre-trained input. The proposed method effectively uses rich input information in the earlier layer and enables robust and simultaneous learning on multilayer neural networks.
[ "feedforward supervisory signal", "competitive learning", "multilayered networks competitive", "multilayered networks", "novel", "multilayered neural networks", "supervisory signal", "classification", "new input", "input" ]
submitted, no decision
https://openreview.net/pdf?id=0yNguO_G2aycf
https://openreview.net/forum?id=0yNguO_G2aycf
ICLR.cc/2014/workshop
2014
{ "note_id": [ "Zufc7LsNO-Z3U", "VN79AqkrcIprJ", "lxPke1W3RDxQ6", "mmC3yAGS3zZrd", "Q-z4DUn8LpDam" ], "note_type": [ "review", "comment", "review", "review", "review" ], "note_created": [ 1391915280000, 1392677700000, 1392677820000, 1392969780000, 1391842740000 ], "note_signatures": [ [ "anonymous reviewer fd8d" ], [ "Shinozaki Takashi" ], [ "Shinozaki Takashi" ], [ "Shinozaki Takashi" ], [ "anonymous reviewer 9d3c" ] ], "structured_content_str": [ "{\"title\": \"review of Competitive Learning with Feedforward Supervisory Signal for Pre-trained Multilayered Networks\", \"review\": \"The paper proposes a modified learning rule for competitive learning in multilayer neural networks. The proposed learning algorithm is not clearly explained, in particular the meaning of the variables x_adv and x_target. The variable x_adv is defined to be the 'advance input', but I do not understand what advance input exactly is. Apart from these clarity issues, the method seems relatively elaborated with a pretraining stage and an architecture consisting of a hierarchy of self-organizing maps.\\n\\nIn the conclusion, the authors claim that the method improves the accuracy of the classification task. However, it is unclear which improvement, as the reported accuracy of approximately 90% on MNIST is lower than previously published results. No further details are given on the exact setting of the experiment.\"}", "{\"reply\": \"Dear reviewer,\\nThank you for your comments. \\n\\nWe really apologize to the insufficient description of the learning method. The proposed learning method does not use the label data directly, uses an input which generates the required label as the supervisor signal instead. So, both x_target (as input signal) and x_adv (as supervisory signal) are input vectors (for example, those are 28x28 grayscale image data in the first layer). \\n\\nThe implication of the proposed learning rule described by Eqs.1 & 2 is mainly based on SOM & LVQ algorithm, and extended with the advance supervisory signal x_adv. If x_adv is considered as the input which represents the idealized answer, (x_adv \\u2013 x_target) represents the correction of the weight update direction. Thus, the overall weight update direction with a learning coefficient \\beta is described as follows: (x_target + \\beta (x_adv \\u2013 x_target) = \\beta x_adv + (1 - \\beta) x_target). It is located at the center part of Eq.2 (Eq.4 in the revised version), and corresponds to a gradient of the weight in backpropagation learning rule. The proposed learning rule extracts a gradient-like learning information from the feedforward signal. We extensively rewrote the learning method section in the revised version of the manuscript. \\n\\nThe motivation of the proposed learning method is to develop a new supervised learning method which uses more feedforward oriented mechanism. The feedforward network condenses the input information through the feedforward process, meaning less amount of information in later layers. However, the back propagation learning algorithm uses the error information in the last layer for the learning of the whole network. The proposed learning method focuses to use rich input information in the early layer for the supervisory signal at the layer. We speculate that the proposed method could be extended to multiple for multiple advance inputs (for example, use 'red' and 'round shape' to learn 'apple'). 
We added a description about the motivation in the 'Conclusion' section.\\n\\nWe also added the comparison with some of previous reports in the revised version for the clarity. Unfortunately, the proposed method is still in a primitive level, and does not have enough performance to compare with many previous reports with great results. We are currently trying to improve the performance of the proposed learning method. We added a result with more training set iterations in Fig.1(c) with 6.9 % error rate after 20 training set iterations. Since it is a rough result, we will update it later. \\n\\nMoreover, we reordered the structure of the manuscript as you suggested. The sections of 'Network structure' and 'Pre-training' are now located at just after introduction, following 'Learning method' section. \\n\\nWe have uploaded the revised version of the manuscript on arXiv.\"}", "{\"review\": \"Dear reviewer,\\nThank you for your comments. \\n\\nThe \\u201cadvance input\\u201d x_adv is the feedforward supervisory input, and is processed as same way but just before the target input. The \\u201cadvance input\\u201d produces required label output, and leave processed values as an aftereffect in the network. The \\u201ctarget input\\u201d x_target is processed with the decayed after effect. The key part of Eq.2 (Eq.4 in the revised version) is (\\beta x_adv + (1 - \\beta) x_target), which exhibits traditional competitive learning algorithm for the sum of two inputs (x_adv and x_target) with a proportion coefficient \\beta. Therefore, the proposed method does not use the label directly, but uses a typical input for the label as the supervisory signal. We extensively rewrote the learning method section.\\n\\nAs you mentioned, our results has no clear improvement from many previous reports. We use the word \\u201cimprovement\\u201d for the reduction of the error rate from the pre-training result. We removed the misleading sentence, and rewrote the first paragraph in the 'Conclusion' section. \\n\\nMoreover, we added a little bit improved result (up to 6.9% error rate) of the proposed method although the data has just one sample. We have also got a better result (3.8 % error rate) with different parameters of the network structure. We will update those results later. \\n\\nWe have uploaded a revised version of the manuscript on arXiv.\"}", "{\"review\": \"We have uploaded revised version (unfortunately, the replacement requires a little more time). The network parameters were changed, and the error rate now slightly improved from the previous version.\\nWe are really apologize for the delayed update.\"}", "{\"title\": \"review of Competitive Learning with Feedforward Supervisory Signal for Pre-trained Multilayered Networks\", \"review\": \"This paper proposed a new learning algorithm for feedforward neural networks that is motivated by Self Organizing Maps (SOM) and Learning Vector Quantization (LVQ). The paper proposes a way for unsupervised pre-training of network weights followed by a supervised fine-tuning.\\n\\nThe paper lacks discussions of clear motivation behind the work, i.e. shortcomings of existing training methods and how the proposed method overcome them. The description of the proposed method still needs a lot of work. More details are needed to clarify the proposed system. For example, in equation (1), d and sigma are not defined. For x_adv and x_target, which one is the input and which one is the label. What are the motivations for the update rule in equation (1) and equation (2)?. 
\\nPre-training and SOM are described for the first time in the experiment section while they should appear in earlier sections of the paper. The authors only experiment with MNIST and don\\u2019t compare their systems to other good baseline systems on MNIST.\"}" ] }
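One plausible reading, in code, of the blended-target update discussed in the author replies above: the effective target of the competitive update is beta * x_adv + (1 - beta) * x_target, and the winning unit's weight vector is pulled toward it. The winner-selection rule, learning rate, beta and all dimensions are assumptions made for illustration; this is a sketch of the idea, not the paper's exact Eqs. 1-2.

```python
import numpy as np

# Blended-target competitive update sketch: the winner's weights move toward
#   beta * x_adv + (1 - beta) * x_target,
# i.e. the "advance" (supervisory) input corrects the direction of a standard
# competitive-learning step. Winner selection by nearest weight vector is an
# assumption here; eta, beta and the sizes are illustrative.
rng = np.random.default_rng(0)
n_units, d = 16, 784
Wts = rng.random((n_units, d))
eta, beta = 0.05, 0.3

def competitive_update(x_adv, x_target):
    blend = beta * x_adv + (1.0 - beta) * x_target
    winner = int(np.argmin(np.linalg.norm(Wts - x_target, axis=1)))  # best-matching unit for the input
    Wts[winner] += eta * (blend - Wts[winner])                       # pull toward the blended target
    return winner

# Toy usage: x_adv is an input known to produce the desired label, x_target is the new input.
x_adv, x_target = rng.random(d), rng.random(d)
competitive_update(x_adv, x_target)
```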
kSHxSr1TPt8XB
GPU Asynchronous Stochastic Gradient Descent to Speed Up Neural Network Training
[ "Hailin Jin", "Thomas Huang", "Zhe Lin", "Jianchao Yang", "Thomas Paine" ]
The ability to train large-scale neural networks has resulted in state-of-the-art performance in many areas of computer vision. These results have largely come from computational breakthroughs of two forms: model parallelism, e.g. GPU-accelerated training, which has seen quick adoption in computer vision circles, and data parallelism, e.g. A-SGD, which has been used at large scale mostly in industry. We report early experiments with a system that makes use of both model parallelism and data parallelism, which we call GPU A-SGD. We show that using GPU A-SGD it is possible to speed up training of large convolutional neural networks useful for computer vision. We believe GPU A-SGD will make it possible to train larger networks on larger training sets in a reasonable amount of time.
[ "gpu", "neural network", "computer vision", "training", "data parallelism", "possible", "ability", "neural networks" ]
submitted, no decision
https://openreview.net/pdf?id=kSHxSr1TPt8XB
https://openreview.net/forum?id=kSHxSr1TPt8XB
ICLR.cc/2014/workshop
2014
{ "note_id": [ "JJbRoZKlW6fQt", "m4KEFoJGxcmDU", "vQUSuhGp0Mvja", "cBv9AZMeK_O3x", "JxHIWtr0U5xjb", "80DMXtygJV8Tm", "OOTVKwQ9I7HkZ", "CC39DxlVuyfI0", "-xjY-GMQtwuQN", "kk4_Fauz_DkzE", "R1DX12tB5P1bg", "BUh4cSvQWDBQi", "EnLfESm5kXnRD" ], "note_type": [ "review", "review", "review", "review", "comment", "comment", "review", "review", "review", "review", "review", "review", "review" ], "note_created": [ 1390037040000, 1390867200000, 1390283940000, 1390867200000, 1391228280000, 1391657880000, 1391646480000, 1392065340000, 1391821200000, 1390287000000, 1390286940000, 1390867200000, 1392928680000 ], "note_signatures": [ [ "Daniel Povey" ], [ "Marc'Aurelio Ranzato" ], [ "Thomas Paine" ], [ "Marc'Aurelio Ranzato" ], [ "Thomas Paine" ], [ "Thomas Paine" ], [ "Liangliang Cao" ], [ "anonymous reviewer 6693" ], [ "anonymous reviewer 4f82" ], [ "Thomas Paine" ], [ "Thomas Paine" ], [ "Marc'Aurelio Ranzato" ], [ "Thomas Paine" ] ], "structured_content_str": [ "{\"review\": \"It would be helpful if you clarify the meaning of the x-axis 'minibatches' in your plots. It's not clear whether, in experiments with N GPUs, you are processing N times as many data points per minibatch. In earlier graphs, I assumed no but in later graphs it looked like the other way around.\"}", "{\"review\": \"In general, I think it would make more sense to report test and training errors (y-axis) versus time (x-axis). This is what we are interested in when we try to speed up convergence, not how many weight updates or samples we process. Since all your experiments use the same kind of GPU, the comparison is fair.\", \"questions\": \"a) have you tried to synchronize even more frequently (n_sync=1/10/50)?\\nb) is every node on a different server? If not, do you leverage the fact that communication can be less costly when boards are on the same server?\\n\\nThank you.\"}", "{\"review\": \"Hello Daniel,\\nYes. We did not state this explicitly, but in our plot, we are plotting the training error for one client in our ASGD system. And on average the overall ASGD system sees N times as many data points per minibatch.\\n\\nWe plotted our error vs minibatches instead of time because time is very dependent on the GPU used to perform training, e.g. using Titan cards instead of a Tesla K20Xs can significantly shorten training time.\"}", "{\"review\": \"In general, I think it would make more sense to report test and training errors (y-axis) versus time (x-axis). This is what we are interested in when we try to speed up convergence, not how many weight updates or samples we process. Since all your experiments use the same kind of GPU, the comparison is fair.\", \"questions\": \"a) have you tried to synchronize even more frequently (n_sync=1/10/50)?\\nb) is every node on a different server? If not, do you leverage the fact that communication can be less costly when boards are on the same server?\\n\\nThank you.\"}", "{\"reply\": \"Hi Marc,\\nThanks for reading the paper.\\n\\nYes, I think the plots you suggest make sense. We thought our minibatch measure was sensible, but we want our plots to be as clear and useful as possible so we will change them for the final version of the paper.\\n\\na) At the time of publication we didn't try more frequent updates, but we have been trying those recently.\\n\\nb) On Bluewaters, every server has 1 GPU node. So we weren't able to leverage same board communication, but that would be great.\"}", "{\"reply\": \"Hi Liangliang,\\nThanks for reading. 
And yes, during submission, the fields auto-populated in the wrong order. Prof Huang is the last author. I am the first. Sorry for the confusion.\\n\\nAll the figures plot the error on training minibatches of 128 images. The plots show the error for minibatches on one client. Plots are comparable across clients.\\n\\nDue to time constraints we didn't change learning rates on these later experiments. But instead focused on initial training speed increases. \\n\\nWe also measured on a validation set, and found that for these settings we see similar gains in validation set performance.\"}", "{\"review\": \"Interesting work. And it is also amusing to see the authorlist on this page. There may be a typo but from my understanding of the authors I believe the first author (Prof. Huang) did all the GPU programming and reported to the last author (Thomas Paine).\\n\\nOne thing confuses me is how did you measure the training error in Figure 2-4. Are these numbers from the whole training set (1.2M) or a batch? Did you change learning rate? Or measure on the validation set?\", \"another_confuse_which_is_totally_my_fault\": \"at the beginning I thought A-SGD stands for Average SGD!\"}", "{\"title\": \"review of GPU Asynchronous Stochastic Gradient Descent to Speed Up Neural Network Training\", \"review\": \"Multi-computer GPU training of large networks is an important current topic for representation learning in industry-scale convnets. This paper describes ongoing efforts to combine model parallelism and data parallelism to reduce training time on the ILSVRC 2012 data set.\", \"pro\": [\"they achieve several-fold reductions in runtime over Khrizhevsky's landmark implementation\"], \"con\": [\"I'm not sure that there is significant novelty in their approach, relative to dist-Belief and existing work on asynchronous-SGD.\"]}", "{\"title\": \"review of GPU Asynchronous Stochastic Gradient Descent to Speed Up Neural Network Training\", \"review\": \"Summary\\n------------\\n\\nThe paper explores running A-SGD as an approach for speeding up learning.\\nOverall I think these are very interesting and informative results. Specifically for a workshop paper I believe the paper contains enough novelty and empirical exploration.\", \"comments\": \"--------------\\n\\nIt would be interesting to try to quantify how much the size of the model influences these results. In particular I'm wondering of how the performance drops with the size of the gradients that need to be send over the network. \\n\\nAnother interesting plot will be to look at the size of the minibatch and how that influence convergence.\\n\\nI hypothesis that distributed algorithms where the parallelism is made over the data (rather than model), like it is done here at the node level, will benefit a lot more from complicated optimization techniques rather that SGD (even in its asynchronous version). It feels to me that with large models there is a high price to pay for sending the gradients over the network (case an point, n_sync is usually set to something higher than 1). We want to use an algorithm for which each step is itself expensive (and hence we have to send fewer gradients over the network) but that needs much less steps to converge. 
You can make each step mSGD arbitrarily expensive by increasing the minibatch size, though SGD is fairly inefficient at utilizing these large minibatches.\\nI believe that distributing computation along data for deep models makes a lot more sense with algorithms such as second order methods or variants of natural gradient.\"}", "{\"review\": \"Hello reviewers,\", \"we_would_like_to_bring_your_attention_to_a_similar_paper_submitted_to_this_iclr_workshop_track\": \"\", \"title\": \"Multi-GPU Training of ConvNets\", \"link\": \"http://openreview.net/document/bbc93764-4f15-4ba5-b092-86dc80b727c7#bbc93764-4f15-4ba5-b092-86dc80b727c7\\n\\nBoth papers explore using many GPUs for training convnets in using an ASGD framework. \\n\\nIn theirs, they try using 2 GPUs on one machine for model parallelization (similar to Alex Krizhevsky's NIPS 2012 paper), as well as 2 and 4 nodes for data parallelization (ASGD).\\n\\nIn ours a single GPU is used for model parallelization, but many nodes are used for data parallelization (ASGD). The ASGD methods are similar and our method is compatible with the model parallelization they use.\\n\\nOurs work has additional experiments that explore how to tune ASGD to get the best performance with GPUs, and how this scales to as many as 32 GPUs.\\n\\nWe bring this up because one of their reviewers has recommend their paper for the Conference track, though they submitted to the workshop track. Since the papers have a lot of overlap we think it would be best to compare them on the same footing.\\n\\nBest,\\nTom\"}", "{\"review\": \"Hello reviewers,\", \"we_would_like_to_bring_your_attention_to_a_similar_paper_submitted_to_this_iclr_workshop_track\": \"\", \"title\": \"Multi-GPU Training of ConvNets\", \"link\": \"http://openreview.net/document/bbc93764-4f15-4ba5-b092-86dc80b727c7#bbc93764-4f15-4ba5-b092-86dc80b727c7\\n\\nBoth papers explore using many GPUs for training convnets in using an ASGD framework. \\n\\nIn theirs, they try using 2 GPUs on one machine for model parallelization (similar to Alex Krizhevsky's NIPS 2012 paper), as well as 2 and 4 nodes for data parallelization (ASGD).\\n\\nIn ours a single GPU is used for model parallelization, but many nodes are used for data parallelization (ASGD). The ASGD methods are similar and our method is compatible with the model parallelization they use.\\n\\nOurs work has additional experiments that explore how to tune ASGD to get the best performance with GPUs, and how this scales to as many as 32 GPUs.\\n\\nWe bring this up because one of their reviewers has recommend their paper for the Conference track, though they submitted to the workshop track. Since the papers have a lot of overlap we think it would be best to compare them on the same footing.\\n\\nBest,\\nTom\"}", "{\"review\": \"In general, I think it would make more sense to report test and training errors (y-axis) versus time (x-axis). This is what we are interested in when we try to speed up convergence, not how many weight updates or samples we process. Since all your experiments use the same kind of GPU, the comparison is fair.\", \"questions\": \"a) have you tried to synchronize even more frequently (n_sync=1/10/50)?\\nb) is every node on a different server? If not, do you leverage the fact that communication can be less costly when boards are on the same server?\\n\\nThank you.\"}", "{\"review\": \"We would like to thank the reviews for their comments.\", \"to_anonymous_4f82\": \"Thank you for the comments. All your points are good ones. 
Exploring the effect of model size and of minibatch size on performance is important. We will look into this in future work. We also agree that second order methods could be a great help here.\", \"to_anonymous_6693\": \"We agree that our work builds directly on recent developments in high performance neural network training.\\n\\nWe would like to emphasize that our contribution is exploring the benefits of combining these approaches, and making the results available to the community. To date no group has published results combining GPUs and distributed computing with neural networks of this scale. And we think overall this is a very promising direction.\\n\\nThank you.\"}" ] }
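The reviews and responses in this record revolve around data-parallel asynchronous SGD, in particular how often each GPU client synchronizes with the shared parameters (the n_sync values the reviewers ask about). Below is a minimal, self-contained sketch of the client-side loop such a setup implies; the in-process ParameterServer class, the toy quadratic gradient standing in for convnet backpropagation, and all constants are illustrative assumptions rather than the authors' system.

import numpy as np

class ParameterServer:
    # Toy in-process stand-in for the shared parameter store; no real
    # concurrency or locking is modeled here.
    def __init__(self, dim):
        self.theta = np.zeros(dim)
    def pull(self):
        return self.theta.copy()
    def push(self, delta):
        self.theta += delta

def toy_gradient(theta, minibatch):
    # Placeholder gradient of a quadratic loss, standing in for backpropagation.
    return theta - minibatch.mean(axis=0)

def asgd_client(server, data, n_sync=200, lr=0.01, batch_size=128, steps=1000, seed=0):
    rng = np.random.default_rng(seed)
    theta = server.pull()
    delta = np.zeros_like(theta)
    for t in range(steps):
        batch = data[rng.integers(0, len(data), size=batch_size)]
        g = toy_gradient(theta, batch)
        theta -= lr * g            # local SGD step on this client's GPU
        delta -= lr * g            # accumulated change since the last sync
        if (t + 1) % n_sync == 0:  # every n_sync minibatches:
            server.push(delta)     #   push the accumulated delta,
            theta = server.pull()  #   then pull fresh parameters
            delta = np.zeros_like(theta)
    return theta

if __name__ == "__main__":
    server = ParameterServer(dim=8)
    data = np.random.default_rng(1).normal(loc=3.0, size=(10000, 8))
    asgd_client(server, data, n_sync=200)
    print("server parameters after one client:", np.round(server.theta, 2))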
X4tT4azdE1XkU
Unsupervised Feature Learning by Deep Sparse Coding
[ "Yunlong He", "Arthur Szlam", "Yanjun Qi", "Yun Wang", "Koray Kavukcuoglu" ]
In this paper, we propose a new unsupervised feature learning framework, namely Deep Sparse Coding (DeepSC), that extends sparse coding to a multi-layer architecture for visual object recognition tasks. The main innovation of the framework is that it connects the sparse-encoders from different layers by a sparse-to-dense module. The sparse-to-dense module is a composition of a local spatial pooling step and a low-dimensional embedding process, which takes advantage of the spatial smoothness information in the image. As a result, the new method is able to learn several levels of sparse representation of the image which capture features at a variety of abstraction levels and simultaneously preserve the spatial smoothness between the neighboring image patches. Combining the feature representations from multiple layers, DeepSC achieves the state-of-the-art performance on multiple object recognition tasks.
[ "unsupervised feature learning", "deep sparse coding", "framework", "deepsc", "module", "image", "deep sparse", "new unsupervised feature", "sparse", "architecture" ]
submitted, no decision
https://openreview.net/pdf?id=X4tT4azdE1XkU
https://openreview.net/forum?id=X4tT4azdE1XkU
ICLR.cc/2014/workshop
2014
{ "note_id": [ "ttigSsrO9_tD7", "lcBTcK8gLGc6X", "PP0zWn2N3DPOl", "kP1cPmbAG4825", "XKgURjkon2Xoa", "LSHWxogcsnSVZ", "nh6qniypdC210" ], "note_type": [ "review", "review", "review", "review", "review", "review", "review" ], "note_created": [ 1390861140000, 1391824620000, 1391824440000, 1391824500000, 1391824500000, 1391824620000, 1391824620000 ], "note_signatures": [ [ "anonymous reviewer 6331" ], [ "anonymous reviewer 1704" ], [ "anonymous reviewer 1704" ], [ "anonymous reviewer 1704" ], [ "anonymous reviewer 1704" ], [ "anonymous reviewer 1704" ], [ "anonymous reviewer 1704" ] ], "structured_content_str": [ "{\"title\": \"review of Unsupervised Feature Learning by Deep Sparse Coding\", \"review\": \"The paper presents a cascaded architecture that successively transforms an input representation into a sparse code, and then transforms the sparse code into a compact dense representation, making sure adjacent patches get similar dense representations. The resulting representations are passed through a spatial pyramid pooling mechanism and concatenated, and then fed to a linear SVM to learn to classify images. This architecture achieves better or similar results to other sparse coding approaches on 3 small image datasets. The paper reads quite well (the first introductory sections are really nice summary of past work in the sparse coding and dimensionality reduction domains). I like the idea of making sure that two sparse codes encoding overlapping regions should have a similar dense code. That said, I wonder why we need to use intermediate representations as input to the final SVM, and not just the last layer. I found section 3.3 less interesting as it was an obvious result (to me at least), and section 3.4 daunting as it meant two more hyper-parameters to tune in our lives... Overall, I liked the paper and would have liked to see results on one larger image dataset: the caltech-xxx datasets are quite outdated, and I'm worried results would not scale to larger and more recent datasets.\"}", "{\"title\": \"review of Unsupervised Feature Learning by Deep Sparse Coding\", \"review\": \"This paper alternates sparse-to-dense (dimensionality reduction by learning an invariant mapping: DRLIM) and dense-to-sparse (standard sparse coding) modules to produce a multi-layer image representation of images. Compared to earlier deep image recognition architectures using sparse modules and pooling modules, the proposed system is more general and displays better performance on image recognition benchmarks.\\n\\nThe paper itself is clear and well-motivated. While none of the building blocks is new (DRLIM blocks, sparse coding blocks, etc), the combination is novel and works well.\\n\\nExperiments on Caltech 101, Caltech 256, and Scenes 15 show that the new architecture performs better than earlier versions without this sparse-to-dense mapping. These are bit limited now that better datasets are widely used (e.g., imagenet), but the experiments are interesting and show that the new system allows for deeper training without increasing the dimension of dictionary used for sparse coding. I think other researchers would be interested in these results.\\n\\nThere are a few writing problems (e.g. 'it is important to emphasis' instead of 'emphasize') so please spell check, but this does not hinder understanding. Also, it would be simpler to write 'First' rather than 'first of all' as is done in several places.\\n\\nOverall this paper presents a more principled variant for a deep image recognition architecture. 
The datasets used are limited but show that this system has promise.\"}", "{\"title\": \"review of Unsupervised Feature Learning by Deep Sparse Coding\", \"review\": \"Unsup feature learning by deep sparse coding\\n\\nThis paper alternates sparse-to-dense (dimensionality reduction by learning an invariant mapping: DRLIM) and dense-to-sparse (standard sparse coding) modules to produce a multi-layer image representation of images. Compared to earlier deep image recognition architectures using sparse modules and pooling modules, the proposed system is more general and displays better performance on image recognition benchmarks.\\n\\nThe paper itself is clear and well-motivated. While none of the building blocks is new (DRLIM blocks, sparse coding blocks, etc), the combination is novel and works well.\\n\\nExperiments on Caltech 101, Caltech 256, and Scenes 15 show that the new architecture performs better than earlier versions without this sparse-to-dense mapping. These are bit limited now that better datasets are widely used (e.g., imagenet), but the experiments are interesting and show that the new system allows for deeper training without increasing the dimension of dictionary used for sparse coding. I think other researchers would be interested in these results.\\n\\nThere are a few writing problems (e.g. 'it is important to emphasis' instead of 'emphasize') so please spell check, but this does not hinder understanding. Also, it would be simpler to write 'First' rather than 'first of all' as is done in several places.\\n\\nOverall this paper presents a more principled variant for a deep image recognition architecture. The datasets used are limited but show that this system has promise.\"}", "{\"title\": \"review of Unsupervised Feature Learning by Deep Sparse Coding\", \"review\": \"Unsup feature learning by deep sparse coding\\n\\nThis paper alternates sparse-to-dense (dimensionality reduction by learning an invariant mapping: DRLIM) and dense-to-sparse (standard sparse coding) modules to produce a multi-layer image representation of images. Compared to earlier deep image recognition architectures using sparse modules and pooling modules, the proposed system is more general and displays better performance on image recognition benchmarks.\\n\\nThe paper itself is clear and well-motivated. While none of the building blocks is new (DRLIM blocks, sparse coding blocks, etc), the combination is novel and works well.\\n\\nExperiments on Caltech 101, Caltech 256, and Scenes 15 show that the new architecture performs better than earlier versions without this sparse-to-dense mapping. These are bit limited now that better datasets are widely used (e.g., imagenet), but the experiments are interesting and show that the new system allows for deeper training without increasing the dimension of dictionary used for sparse coding. I think other researchers would be interested in these results.\\n\\nThere are a few writing problems (e.g. 'it is important to emphasis' instead of 'emphasize') so please spell check, but this does not hinder understanding. Also, it would be simpler to write 'First' rather than 'first of all' as is done in several places.\\n\\nOverall this paper presents a more principled variant for a deep image recognition architecture. 
The datasets used are limited but show that this system has promise.\"}", "{\"title\": \"review of Unsupervised Feature Learning by Deep Sparse Coding\", \"review\": \"Unsup feature learning by deep sparse coding\\n\\nThis paper alternates sparse-to-dense (dimensionality reduction by learning an invariant mapping: DRLIM) and dense-to-sparse (standard sparse coding) modules to produce a multi-layer image representation of images. Compared to earlier deep image recognition architectures using sparse modules and pooling modules, the proposed system is more general and displays better performance on image recognition benchmarks.\\n\\nThe paper itself is clear and well-motivated. While none of the building blocks is new (DRLIM blocks, sparse coding blocks, etc), the combination is novel and works well.\\n\\nExperiments on Caltech 101, Caltech 256, and Scenes 15 show that the new architecture performs better than earlier versions without this sparse-to-dense mapping. These are bit limited now that better datasets are widely used (e.g., imagenet), but the experiments are interesting and show that the new system allows for deeper training without increasing the dimension of dictionary used for sparse coding. I think other researchers would be interested in these results.\\n\\nThere are a few writing problems (e.g. 'it is important to emphasis' instead of 'emphasize') so please spell check, but this does not hinder understanding. Also, it would be simpler to write 'First' rather than 'first of all' as is done in several places.\\n\\nOverall this paper presents a more principled variant for a deep image recognition architecture. The datasets used are limited but show that this system has promise.\"}", "{\"title\": \"review of Unsupervised Feature Learning by Deep Sparse Coding\", \"review\": \"This paper alternates sparse-to-dense (dimensionality reduction by learning an invariant mapping: DRLIM) and dense-to-sparse (standard sparse coding) modules to produce a multi-layer image representation of images. Compared to earlier deep image recognition architectures using sparse modules and pooling modules, the proposed system is more general and displays better performance on image recognition benchmarks.\\n\\nThe paper itself is clear and well-motivated. While none of the building blocks is new (DRLIM blocks, sparse coding blocks, etc), the combination is novel and works well.\\n\\nExperiments on Caltech 101, Caltech 256, and Scenes 15 show that the new architecture performs better than earlier versions without this sparse-to-dense mapping. These are bit limited now that better datasets are widely used (e.g., imagenet), but the experiments are interesting and show that the new system allows for deeper training without increasing the dimension of dictionary used for sparse coding. I think other researchers would be interested in these results.\\n\\nThere are a few writing problems (e.g. 'it is important to emphasis' instead of 'emphasize') so please spell check, but this does not hinder understanding. Also, it would be simpler to write 'First' rather than 'first of all' as is done in several places.\\n\\nOverall this paper presents a more principled variant for a deep image recognition architecture. 
The datasets used are limited but show that this system has promise.\"}", "{\"title\": \"review of Unsupervised Feature Learning by Deep Sparse Coding\", \"review\": \"This paper alternates sparse-to-dense (dimensionality reduction by learning an invariant mapping: DRLIM) and dense-to-sparse (standard sparse coding) modules to produce a multi-layer image representation of images. Compared to earlier deep image recognition architectures using sparse modules and pooling modules, the proposed system is more general and displays better performance on image recognition benchmarks.\\n\\nThe paper itself is clear and well-motivated. While none of the building blocks is new (DRLIM blocks, sparse coding blocks, etc), the combination is novel and works well.\\n\\nExperiments on Caltech 101, Caltech 256, and Scenes 15 show that the new architecture performs better than earlier versions without this sparse-to-dense mapping. These are bit limited now that better datasets are widely used (e.g., imagenet), but the experiments are interesting and show that the new system allows for deeper training without increasing the dimension of dictionary used for sparse coding. I think other researchers would be interested in these results.\\n\\nThere are a few writing problems (e.g. 'it is important to emphasis' instead of 'emphasize') so please spell check, but this does not hinder understanding. Also, it would be simpler to write 'First' rather than 'first of all' as is done in several places.\\n\\nOverall this paper presents a more principled variant for a deep image recognition architecture. The datasets used are limited but show that this system has promise.\"}" ] }
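The reviews above describe DeepSC as alternating a dense-to-sparse step (sparse coding) with a sparse-to-dense step (local spatial pooling followed by a low-dimensional embedding). The toy sketch below mirrors only that pipeline shape: ISTA stands in for the sparse encoder, average-pooling of neighboring patch codes for the spatial pooling, and PCA for the embedding step, whereas the paper learns a DRLIM-style invariant embedding; the dictionary is random and all sizes and constants are made up for illustration.

import numpy as np

def ista(D, X, lam=0.1, n_iter=100):
    # Simple ISTA sparse coding: argmin_A 0.5*||X - D A||^2 + lam*||A||_1,
    # where the columns of X are patch vectors.
    L = np.linalg.norm(D, 2) ** 2                  # Lipschitz constant of the gradient
    A = np.zeros((D.shape[1], X.shape[1]))
    for _ in range(n_iter):
        grad = D.T @ (D @ A - X)
        A = A - grad / L
        A = np.sign(A) * np.maximum(np.abs(A) - lam / L, 0.0)  # soft-thresholding
    return A

def pool_pairs(A):
    # Average-pool codes of neighboring patches (a stand-in for local spatial pooling).
    n = A.shape[1] - A.shape[1] % 2
    return 0.5 * (A[:, 0:n:2] + A[:, 1:n:2])

def pca_embed(P, k):
    # PCA stand-in for the low-dimensional embedding (the paper uses a learned DRLIM-style map).
    P = P - P.mean(axis=1, keepdims=True)
    U, _, _ = np.linalg.svd(P, full_matrices=False)
    return U[:, :k].T @ P

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    patches = rng.normal(size=(64, 40))            # 40 patch vectors of dimension 64
    D = rng.normal(size=(64, 128))
    D /= np.linalg.norm(D, axis=0)                 # random dictionary (normally learned)
    codes = ista(D, patches)                       # dense -> sparse
    dense = pca_embed(pool_pairs(codes), k=16)     # sparse -> pooled -> dense
    print(codes.shape, dense.shape)                # the next layer would repeat on `dense`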
Hy_7-edzrEHx9
Relaxations for inference in restricted Boltzmann machines
[ "Sida I. Wang", "Roy Frostig", "Percy Liang", "Christopher D. Manning" ]
We propose a relaxation-based approximate inference algorithm that samples near-MAP configurations of a binary pairwise Markov random field. We experiment on MAP inference tasks in several restricted Boltzmann machines. We also use our underlying sampler to estimate the log-partition function of restricted Boltzmann machines and compare against other sampling-based methods.
[ "inference", "restricted boltzmann machines", "relaxations", "approximate inference algorithm", "configurations", "map inference tasks", "sampler" ]
submitted, no decision
https://openreview.net/pdf?id=Hy_7-edzrEHx9
https://openreview.net/forum?id=Hy_7-edzrEHx9
ICLR.cc/2014/workshop
2014
{ "note_id": [ "mL6rEPWYehLYQ", "eeNW4HiDfE4DT", "JM28f-ItaBfNF" ], "note_type": [ "review", "review", "review" ], "note_created": [ 1392869340000, 1391848560000, 1391904120000 ], "note_signatures": [ [ "Sida Wang" ], [ "anonymous reviewer caba" ], [ "anonymous reviewer e306" ] ], "structured_content_str": [ "{\"review\": \"My reviewer response.\\n\\n\\nWe thank the reviewers for the comments and questions.\", \"issues_on_using_rrr_to_estimating_the_partition_function\": \"we would agree with both reviewers that rrr is not necessarily good at estimating partition functions, unless the partition function is dominated by the MAP states. In the bipartite RBM case, this requirement is less restrictive since the partition function only needs to be dominated by a MAP visible state summing over hidden, or a MAP hidden state summing over visible units. We thought it is appealing to try rrr for this task since rrr gives us a distribution as well. However, we would only recommend its use for partition function estimation in these specific cases. This should be clarified further in a revision.\\n\\n***Reviewer 1 questions***\\n> Figure 1... Is there any particular reason for this? Also, what would the result be if the Gibbs sampling were performed at different temperatures?\\n\\nOne hypothesis is that the RBM with learned weights has negative weights in expectation. So any random deviation from the optimum tend to have worse likelihood, causing the more spread distribution. In the random case, the mean weight is 0, and random derivations only cause variance. However, we do not understand the rounding distribution very well. It could be a hard problem since complexity theory rule out strong generic lower bounds on the variance of the rounding distribution.\\nAt lower temperatures, Gibbs tend to sample more near the MAP, provided that it still mixes. This is why we would compare to annealed Gibbs in our MAP finding exercise.\\n\\n\\n> Since the optimization problem itself is non-convex, it would seem that the results would also depend on the quality of the solution to the optimization problem. Is there much variance between optimization runs? What about the effect of the rank of X?\\nWe'd like to note that the full SDP is convex, and can be solved for problems of this size (and we've tried that as well, with no performance difference with local low rank solution). There is much literature with theoretical and empirical evidence supporting that low rank solutions here are suitable and stable. Empirically there is very little variance between runs and using higher rank quickly lead to diminished returns. \\n\\n> Compare to Gurobi on small instances, graph-cut cases\\nrrr usually solve small instances exactly as well. This comparison is a helpful one which we neglected here. Comparing to exact methods in the sub-modular case can also be very helpful, which we also neglected. Thanks for the suggestion.\\n\\n\\n***Reviewer 2 questions***\\n\\n>It would be very interesting to try and estimate the partition function of an RBM with many more modes, and to compare it with other methods (such as AIS).\\nIn some later work, we tried DBM trained on MNIST. Can the reviewer point us to the example with many more modes? We compared to AIS ourselves, the results is that with enough time budget, AIS does better and there exists time budgets under which rrr does better. 
However, rrr fundamentally does not give an unbiased estimate of the log partition and this comparison was omitted.\\n\\n\\n> The approximation of (14) can be quite bad if p_X is very different from the RBM's distribution:\\nindeed, and unlike actual, asymptotically unbiased, methods of estimating the partition function. rrr is fundamentally a MAP finding method that can only heuristically estimate the partition function. As figure 1 shows, samples from p_X can indeed be very different.\\n\\n> other energy based models, future works?\\nIn current work, we tried rrr for MRF inference, among them DBNs and we gave some theoretical analysis. Thanks for the contrastive backprop suggestion.\"}", "{\"title\": \"review of Relaxations for inference in restricted Boltzmann machines\", \"review\": \"This paper introduces an approach to finding near-MAP solutions in binary Markov random fields. The proposed technique is based on an SDP relaxation that is re-parameterized and solved using constrained gradient-based methods. The final step involves projecting the solution using a random unit-length vector and then rounding the resulting entries to the vertices of a hypercube. This stochastic process defines a sampler that empirically produces lower-energy configurations than Gibbs sampling. The method is simple and seems to perform well for approximate MAP estimation, although it is not clear whether this approach will be useful for estimating the partition function.\\n\\nI liked the result in Figure 1, although the entropy of the rrr-MAP method is much higher in the learned RBM than the one with random weights. Is there any particular reason for this? Also, what would the result be if the Gibbs sampling were performed at different temperatures?\\n\\nSince the optimization problem itself is non-convex, it would seem that the results would also depend on the quality of the solution to the optimization problem. Is there much variance between optimization runs? What about the effect of the rank of X?\\n\\nI think that the results should also be compared on a small RBM where the exact MAP solution can be found in a reasonable amount of time by Gurobi.\\n\\nPerhaps a good test would be on RBMs (or general binary MRFs) with non-negative edge weights. These are submodular and can therefore be globally optimized efficiently. This would serve as a good basis for comparison to other local-search methods.\"}", "{\"title\": \"review of Relaxations for inference in restricted Boltzmann machines\", \"review\": \"The paper introduces a gradient procedure for map estimation in MRFs and RBMs, which can also be used to draw approximate samples and therefore estimate partition functions.\", \"pros\": \"the method is very novel in the context of RBMs, and it seems to work quite well, beating Gibbs-sampling almost every time. It is also useful for estimating partition functions.\", \"cons\": \"the method was able to correctly estimate the partition function of an MNIST RBM well. But MNIST has few modes. It would be very interesting to try and estimate the partition function of an RBM with many more modes, and to compare it with other methods (such as AIS).\\n\\nThe approximation of (14) can be quite bad if p_X is very different from the RBM's distribution. \\n\\nI wonder if this method can be applied to general energy-based models, like the ones used in contrastive backpropagation.\"}" ] }
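The first review summarizes the sampler as: solve a (low-rank) SDP relaxation, then repeatedly project the solution onto a random unit-length vector and round signs to obtain near-MAP binary configurations. The sketch below shows only that randomized-rounding step for a binary pairwise MRF; the low-rank factor V is random here purely for illustration (in the paper it comes from optimizing the relaxed objective), and the particular energy convention is an assumption.

import numpy as np

def random_round(V, rng):
    # Project the rows of V onto a random unit vector and take signs,
    # giving one {-1, +1} configuration per draw.
    r = rng.normal(size=V.shape[1])
    r /= np.linalg.norm(r)
    return np.sign(V @ r)

def energy(x, W, b):
    # Energy of a binary pairwise MRF with couplings W and biases b, x in {-1, +1}^n.
    return -0.5 * x @ W @ x - b @ x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, k = 50, 8
    W = rng.normal(size=(n, n))
    W = (W + W.T) / 2
    np.fill_diagonal(W, 0.0)
    b = rng.normal(size=n)
    # Placeholder for the low-rank relaxation solution; in the paper this factor
    # comes from optimizing the relaxed objective, not from random initialization.
    V = rng.normal(size=(n, k))
    V /= np.linalg.norm(V, axis=1, keepdims=True)
    samples = [random_round(V, rng) for _ in range(100)]
    best = min(samples, key=lambda x: energy(x, W, b))
    print("best sampled energy:", float(energy(best, W, b)))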
zze5zJIRq7lRt
Multi-GPU Training of ConvNets
[ "Omry Yadan", "Keith Adams", "Yaniv Taigman", "Marc'Aurelio Ranzato" ]
In this work we evaluate different approaches to parallelize computation of convolutional neural networks across several GPUs.
[ "training", "convnets", "work", "different approaches", "computation", "convolutional neural networks", "several gpus" ]
submitted, no decision
https://openreview.net/pdf?id=zze5zJIRq7lRt
https://openreview.net/forum?id=zze5zJIRq7lRt
ICLR.cc/2014/workshop
2014
{ "note_id": [ "PPkvPCYirqPUb", "22eX5RjNOqpg1", "n_6j_UpOmw_Od", "FgxqTOu1qBF1I", "oV4tZMH-QOols", "A50KACiEo4AE6", "YlXrYpUzr3h7n", "guIXuVCQXMuQh", "3fpu-K60iD30c" ], "note_type": [ "review", "review", "review", "review", "review", "review", "review", "review", "review" ], "note_created": [ 1391030520000, 1389837600000, 1390287120000, 1391638500000, 1390287360000, 1391647980000, 1390287240000, 1392782340000, 1390287120000 ], "note_signatures": [ [ "Marc'Aurelio Ranzato" ], [ "anonymous reviewer 3960" ], [ "Thomas Paine" ], [ "anonymous reviewer 95e3" ], [ "Thomas Paine" ], [ "Liangliang Cao" ], [ "Thomas Paine" ], [ "Marc'Aurelio Ranzato" ], [ "Thomas Paine" ] ], "structured_content_str": [ "{\"review\": \"Thank you, Tom.\\n\\nThe main difference between this work and yours is that our data parallelism framework is synchronous (i.e., we use SGD not A-SGD).\\nAlso, all our experiments refer to a set up where all the GPU boards reside in the same server. \\n\\nIn the future, we will extend this work to multiple servers and A-SGD.\"}", "{\"title\": \"review of Multi-GPU Training of ConvNets\", \"review\": \"The paper is about various ways of training convolutional neural networks (CNNs)\\nusing multiple GPUs attached to the same machine.\\n\\nI think it is sufficiently interesting for the conference track. The authors\\nmay not be aware of all relevant prior work, but they can fix this easily. I\\nthink the paper should definitely be accepted because Facebook is growing in\\nthis area right now, and conference-goers will be wanting to talk to the\\npresenters about what's going on there and what opportunities there are.\\n\\nThere are a couple of papers I think the authors should\\tbe aware of; the titles\\tare\\n\\n'Asynchronous stochastic gradient descent for DNN training'\\n'Pipelined Back-Propagation for Context-Dependent Deep Neural Networks'\\n\\nAlso I know that Andrew Ng's group was doing some work on model parallelism for CNNs. Andrew Maas (Andrew Maas <[email protected]>) would be able to tell you who it was and forward any relevant presentations.\"}", "{\"review\": \"Hello reviewers,\", \"we_would_like_to_bring_your_attention_to_a_similar_paper_my_colleagues_and_i_submitted_to_this_iclr_workshop_track\": \"\", \"title\": \"GPU Asynchronous Stochastic Gradient Descent to Speed Up Neural Network Training\", \"link\": \"http://openreview.net/document/a4a87af0-ce63-450d-9d4b-41cfb0390667#a4a87af0-ce63-450d-9d4b-41cfb0390667\\n\\nBoth papers explore using many GPUs for training convnets in using an ASGD framework. \\n\\nIn this paper, they try using 2 GPUs on one machine for model parallelization (similar to Alex Krizhevsky's NIPS 2012 paper), as well as 2 and 4 nodes for data parallelization (ASGD).\\n\\nIn ours, a single GPU is used for model parallelization, but many nodes are used for data parallelization (ASGD). The ASGD methods are similar and our method is compatible with the model parallelization they use.\\n\\nOurs work has additional experiments that explore how to tune ASGD to get the best performance with GPUs, and how this scales to as many as 32 GPUs.\\n\\nWe bring this up because a reviewer has recommend their paper for the Conference track, though they submitted to the workshop track. Since the papers have a lot of overlap we think it would be best to compare them on the same footing.\\n\\nBest,\\nTom\"}", "{\"title\": \"review of Multi-GPU Training of ConvNets\", \"review\": \"Problem is clearly important, but paper is light on details, data sets, which gpu's, etc. 
All such things matter when judging the speed-up. For example, if you used an older gpu, it's easier to get a speed-up because the trade-off between the gain of multiple gpu's vs. the communication overhead is clearly different.\\n\\nCNN's for audio processing was done in 2012 by Abdel-Hamid. I would recommend to include this reference:\\nAbdel-Hamid, Ossama, et al. 'Applying convolutional neural networks concepts to hybrid NN-HMM model for speech recognition.' Acoustics, Speech and Signal Processing (ICASSP), 2012 IEEE International Conference on. IEEE, 2012.\", \"multi_gpu_architectures_for_non_convolutional_networks_were_discussed_in\": \"Xie Chen, Adam Eversole, Gang Li, Dong Yu, and Frank Seide, Pipelined Back-Propagation for Context-Dependent Deep Neural Networks, in Interspeech, ISCA, September 2012\\n\\nI don't really see things that are new. Model and Data parallelization was tried in Chen'2012, and the extension for CNN's are obvious. Also, which layer to parallelize depends really on the network structure. For example, if you have a very large output layer with 128k nodes, you might be better off parallelizing the output layer.\"}", "{\"review\": \"Hello everyone,\", \"we_would_like_to_bring_your_attention_to_a_similar_paper_my_colleagues_and_i_submitted_to_this_iclr_workshop_track\": \"\", \"title\": \"GPU Asynchronous Stochastic Gradient Descent to Speed Up Neural Network Training\", \"link\": \"http://openreview.net/document/a4a87af0-ce63-450d-9d4b-41cfb0390667#a4a87af0-ce63-450d-9d4b-41cfb0390667\\n\\nBoth papers explore using many GPUs for training convnets in using an ASGD framework. \\n\\nIn this paper, they use 2 GPUs on one machine for model parallelization (similar to Alex Krizhevsky's NIPS 2012 paper), as well as 2 and 4 nodes for data parallelization (ASGD).\\n\\nIn ours, a single GPU is used for model parallelization, but many nodes are used for data parallelization (ASGD). The ASGD methods are similar and our method is compatible with the model parallelization they use.\\n\\nOurs work has additional experiments that explore how to tune ASGD to get the best performance with GPUs, and how this scales to as many as 32 GPUs.\\n\\nBest,\\nTom\"}", "{\"review\": \"This is a light but interesting paper. I guess we are seeing a 'baby' version of Facebook's deep learning infostructure.\", \"first_an_easy_to_fix_point\": \"I didn't find explicitly how many layers are there in the deep NN, and which dataset is used. I guess the answers are 7 layers and ImageNet'12?\", \"currently_the_results_are_very_reasonable\": \"2-GPU version is 1.6 times faster than 1-GPU. But I guess the audience is more interesting in the performance with more GPUs. Could 20 GPU be 16 times faster than 1GPU? What if 50, or 100GPUs?\\n\\nScalability may also bring interesting insights in the model design. By the use of model parallelism, I wonder whether we can build an larger CNN with more neurals. It may have more convolutional filters in each layer, and could process larger image like 1024 * 1024 * 3. I wonder whether ensemble learning as well as sparse models will be useful in such a big neural network. 
\\n\\nHope to see more updates following the current submission.\"}", "{\"review\": \"Hello authors,\", \"i_would_like_to_bring_your_attention_to_a_similar_paper_my_colleagues_and_i_submitted_to_this_iclr_workshop_track\": \"\", \"title\": \"GPU Asynchronous Stochastic Gradient Descent to Speed Up Neural Network Training\", \"link\": \"http://openreview.net/document/a4a87af0-ce63-450d-9d4b-41cfb0390667#a4a87af0-ce63-450d-9d4b-41cfb0390667\\n\\nBoth papers explore using many GPUs for training convnets in using an ASGD framework. \\n\\nIn your paper, you use 2 GPUs on one machine for model parallelization (similar to Alex Krizhevsky's NIPS 2012 paper), as well as 2 and 4 nodes for data parallelization (ASGD).\\n\\nIn ours, a single GPU is used for model parallelization, but many nodes are used for data parallelization (ASGD). The ASGD methods are similar and our method is compatible with the model parallelization you use.\\n\\nOurs work has additional experiments that explore how to tune ASGD to get the best performance with GPUs, and how this scales to as many as 32 GPUs.\\n\\nBest,\\nTom\"}", "{\"review\": \"We thank the reviewers for their comments and suggestions.\\n\\nIn this abstract, we limit the investigation to:\\n+ the use of multiple GPUs all residing in the same server\\n+ the architecture and the task as defined in Krizhevsky et al. NIPS 2012\\n+ the use of regular synchronous stochastic gradient descent.\\nWe clarified this in the revised version of the paper.\\n\\nWe have also added references to prior work as recommended. However, notice the following major differences:\\n\\u2014 Krizhevsky et al. and Coates et al. only considered model parallelism\\n\\u2014 Chen et al, Dean et al. and Zhang et al. used different variants of asynchronous SGD\\n\\nThe objective of this study is to determine the speed up of a popular model using the most straightforward parallelization techniques without changing the optimization method. This should serve as a baseline comparison for any more advanced parallelization method. \\n\\nIt will be avenue of future work the study of asynchronous approaches using multiple servers.\\n\\nWe have updated the draft accordingly (the new version should appear shortly).\\n\\nThank you very much.\"}", "{\"review\": \"Hello reviewers,\", \"we_would_like_to_bring_your_attention_to_a_similar_paper_my_colleagues_and_i_submitted_to_this_iclr_workshop_track\": \"\", \"title\": \"GPU Asynchronous Stochastic Gradient Descent to Speed Up Neural Network Training\", \"link\": \"http://openreview.net/document/a4a87af0-ce63-450d-9d4b-41cfb0390667#a4a87af0-ce63-450d-9d4b-41cfb0390667\\n\\nBoth papers explore using many GPUs for training convnets in using an ASGD framework. \\n\\nIn this paper, they try using 2 GPUs on one machine for model parallelization (similar to Alex Krizhevsky's NIPS 2012 paper), as well as 2 and 4 nodes for data parallelization (ASGD).\\n\\nIn ours, a single GPU is used for model parallelization, but many nodes are used for data parallelization (ASGD). The ASGD methods are similar and our method is compatible with the model parallelization they use.\\n\\nOurs work has additional experiments that explore how to tune ASGD to get the best performance with GPUs, and how this scales to as many as 32 GPUs.\\n\\nWe bring this up because a reviewer has recommend their paper for the Conference track, though they submitted to the workshop track. 
Since the papers have a lot of overlap, we think it would be best to compare them on the same footing.\\n\\nBest,\\nTom\"}" ] }
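As the authors clarify in this record, their data-parallel setup is synchronous SGD across GPU boards in a single server (not A-SGD): the minibatch is split across workers, per-worker gradients are averaged, and a single update is applied. A minimal simulation of that step is sketched below; the toy gradient, worker count, and learning rate are illustrative assumptions.

import numpy as np

def worker_gradient(theta, shard):
    # Placeholder per-GPU gradient of a quadratic loss on this worker's shard.
    return theta - shard.mean(axis=0)

def synchronous_data_parallel_step(theta, minibatch, n_workers, lr):
    shards = np.array_split(minibatch, n_workers)          # split the minibatch across GPUs
    grads = [worker_gradient(theta, s) for s in shards]    # computed in parallel in practice
    g = np.mean(grads, axis=0)                             # all-reduce: average the gradients
    return theta - lr * g                                  # single synchronous update

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    theta = np.zeros(5)
    data = rng.normal(loc=2.0, size=(4096, 5))
    for _ in range(500):
        batch = data[rng.integers(0, len(data), size=256)]
        theta = synchronous_data_parallel_step(theta, batch, n_workers=4, lr=0.05)
    print(np.round(theta, 2))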
iiu7beeAJGnAl
Deep Learning Embeddings for Discontinuous Linguistic Units
[ "Wenpeng Yin", "Hinrich Schütze" ]
Deep learning embeddings have been successfully used for many natural language processing (NLP) problems. Embeddings are mostly computed for word forms although a number of recent papers have extended this to other linguistic units like morphemes and phrases. In this paper, we argue that learning embeddings for discontinuous linguistic units should also be considered. In an experimental evaluation on coreference resolution, we show that such embeddings perform better than word form embeddings.
[ "embeddings", "discontinuous linguistic units", "deep learning embeddings", "learning embeddings", "nlp", "problems", "word forms", "number", "recent papers" ]
submitted, no decision
https://openreview.net/pdf?id=iiu7beeAJGnAl
https://openreview.net/forum?id=iiu7beeAJGnAl
ICLR.cc/2014/workshop
2014
{ "note_id": [ "GzFZGSMOZpzOk", "AKHVlzzUO5Ao2", "fC0gfomYgEfhk", "azLZaQQbZvaIp" ], "note_type": [ "review", "review", "review", "comment" ], "note_created": [ 1391716200000, 1392649080000, 1391850600000, 1392648720000 ], "note_signatures": [ [ "anonymous reviewer e7ba" ], [ "Wenpeng Yin" ], [ "anonymous reviewer 6104" ], [ "Wenpeng Yin" ] ], "structured_content_str": [ "{\"title\": \"review of Deep Learning Embeddings for Discontinuous Linguistic Units\", \"review\": \"This paper explores simple ways to embed linguistic units composed of discontiguous words such as 'HELP TO' in the sentence 'Paul HELPS me TO write my paper'. The frequency of occurrence of such discontiguous units is very language dependent (high in German, lower in English). The authors propose a method that essentially amounts to rewriting the sentence in a manner that considers such units as a single word and using Mikolov's vec2word code. Experiments show that such embeddings perform better on a simple task, namely classifying entities are animated or non-animated. In my opinion this is a very preliminary work at this stage. Neither the claim not the results are very surprising.\"}", "{\"review\": \"We were happy to hear that the reviewer thinks that our idea of inducing representations for disjoint linguistic units is novel and holds potential!\\n\\n1)'strange to use word2vec'\\nOur motivation for word2vec was to use the best currently available method for distributed representations. Embeddings perform better than other distributed representations on several tasks and embeddings induced by word2vec have been particularly successful.\\nWe would appreciate further thoughts on why we should use word-document matrix factorization as opposed to a stronger method like word2vec for learning representations.\\n\\n2)'skip-gram learning algorithm doesn\\u2019t make sense'\\nWe would appreciate if the reviewer could expand on this point. Is the reason that representations of verbs should, in the reviewer's view, be induced using a 'sequence-sensitive' learning algorithm (since bag-of-words is often viewed as more appropriate for nouns)?\\n\\nThis is a good point. However, the skip-gram model seems to be successful to some extent in learning sequence-dependent information, possibly because the sampling is position dependent, giving a preference to close words. For example, singular and plural forms are systematically related, which one would not expect from a true bag-of-words model.\\n\\n3) 'why this task is chosen is unclear'\\nAs we discuss in the paper, the task of predicting (human) animacy from context is useful for coreference resolution (because pronouns like 'him' and 'she' can only refer to human animate entities).\\n\\nWe agree though that we should also present results for a standard task. We are currently running experiments on paraphrase identification: http://www.aclweb.org/aclwiki/index.php?title=Paraphrase_Identification_%28State_of_the_art%29 and will present results at the workshop if the paper gets accepted.\\n\\n4) 'paper is short'\\nWe were trying to comply with the length restrictions of the ICLR 2014 workshop track. If the reviewer could be more specific as to which parts of the description of the evaluation task and of the conclusion need to be expanded, we would be very glad to fix these problems and submit a revised version to arxiv.\\n\\n5) 'No visualization or control experiments to understand the learned representations'\\nOur control experiment was supposed to be the single-word baseline. 
A visualization would certainly improve the paper. Again, we were trying to comply with the length restrictions.\"}", "{\"title\": \"review of Deep Learning Embeddings for Discontinuous Linguistic Units\", \"review\": [\"Summary\", \"This paper proposes learning representations for discontinuous pairs of words in a sentence. Representations for such linguistic units such as \\u201chelped*to\\u201d are potentially more useful than bigrams or other units for particular NLP tasks. Rather than introducing a new algorithm to induce such representations, they alter a text corpus and use a skip-gram training algorithm. Representations are compared against previous word representation approaches on a task of classifying markables.\", \"Review\", \"Generally the idea of inducing representations for disjoint linguistic units is novel, and seems to hold good potential. It seems strange to use word2vec which is a skip-gram algorithm to induce such representations. The process of creating fake \\u2018sentences\\u2019 with disjoint units to induce skip grams seems hacky. I would prefer to see a more straightforward approach, such as one based on token-document matrix factorization, to induce representations for the disjoint tokens. The evaluation task is obscure, and why this task is chosen is unclear. The authors should include experimental evaluation, visualization, or controlled experiments on at least one more standard task. Generally, there is a kernel of an interesting idea in this paper but the work needs a more thorough investigation into the representation learning algorithm used and evaluation.\", \"Key points\", \"Interesting linguistic idea\", \"Use of pre-existing word vector learning package makes experiments seemingly easy to reproduce\", \"Using a skip-gram learning algorithm doesn\\u2019t make sense. A matrix factorization or other similar approach seems more natural\", \"Non-standard and somewhat difficult to understand evaluation\", \"No visualization or control experiments to understand the learned representations\", \"Paper is short to the point of lacking sufficient descriptions of the evaluation task and conclusions\"]}", "{\"reply\": \"1)'frequency of ... discontiguous units'\\n\\n It may be the case that discontiguous units are more frequent in other languages. However, phrasal verbs are one of the most important verb groups in English, verbs like 'keep up' and 'take off'. Without discontiguous units, the 'vacation' meaning of 'I took the month off' is difficult to infer from the vectors of 'took' and 'off'. Our approach\\nlearns a vector for 'took ... off', thus facilitating correct inference.\\n This shows that having appropriate representations for phrasal verbs is important.\\n\\n2) 'Neither the claim not the results are very surprising.'\\n We certainly agree that our results are not earth shattering. However, for all NLP work that represents linguistic input as embeddings or other distributed representations, the question of how the linguistic input should be parsed into units (which then are represented as vectors) must be addressed. We cite recent work on morphology and on phrases in this vein.\\n Within this line of work research on discontinuous units (as a third type of possible unit, in addition to stems/affices and continuous phrases) is well motivated.\\n Perhaps positive results for this new type of unit are to be expected. Still, we feel it is a contribution to confirm the hypothesis that they are useful.\"}" ] }
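Both reviews characterize the method as rewriting the corpus so that a discontinuous unit such as 'helped*to' or 'took ... off' becomes a single vocabulary item before skip-gram (word2vec) training. The sketch below illustrates that preprocessing step only; the '*' joining convention follows the reviews, while the max_gap limit and the greedy matching rule are assumptions.

def mark_discontinuous_units(tokens, unit_pairs, max_gap=3):
    # Rewrite a token list so a matched discontinuous pair such as ('took', 'off')
    # becomes the single token 'took*off' before embedding training.
    out = list(tokens)
    for first, second in unit_pairs:
        i = 0
        while i < len(out):
            if out[i] == first:
                # look ahead a few positions for the second half of the unit
                for j in range(i + 1, min(i + 1 + max_gap, len(out))):
                    if out[j] == second:
                        out[i] = f"{first}*{second}"
                        del out[j]
                        break
            i += 1
    return out

if __name__ == "__main__":
    sentence = "I took the month off to travel".split()
    print(mark_discontinuous_units(sentence, [("took", "off")]))
    # -> ['I', 'took*off', 'the', 'month', 'to', 'travel']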
IgLmlBsymQnyP
Distributional Models and Deep Learning Embeddings: Combining the Best of Both Worlds
[ "Irina Sergienya", "Hinrich Schütze" ]
There are two main approaches to the distributed representation of words: low-dimensional deep learning embeddings and high-dimensional distributional models, in which each dimension corresponds to a context word. In this paper, we combine these two approaches by learning embeddings based on distributional-model vectors - as opposed to one-hot vectors as is standardly done in deep learning. We show that the combined approach has better performance on a word relatedness judgment task.
[ "deep learning embeddings", "distributional models", "best", "vectors", "worlds distributional models", "worlds", "main approaches", "distributed representation", "words", "dimension" ]
submitted, no decision
https://openreview.net/pdf?id=IgLmlBsymQnyP
https://openreview.net/forum?id=IgLmlBsymQnyP
ICLR.cc/2014/workshop
2014
{ "note_id": [ "ggC0Jddw0PgjR", "wvfSwIGySBvdV", "O3dl_qqbXKYbU", "pZNfKh1T62ZEM", "FdstFZk6tuIGQ", "f4NY4EleY1FeR", "K9CR9nnAg6-4W" ], "note_type": [ "comment", "review", "review", "comment", "comment", "review", "review" ], "note_created": [ 1392658680000, 1392824460000, 1390789980000, 1393554060000, 1392674520000, 1392674280000, 1391787900000 ], "note_signatures": [ [ "Irina Sergienya" ], [ "Irina Sergienya" ], [ "anonymous reviewer 4d8c" ], [ "anonymous reviewer 4d8c" ], [ "Irina Sergienya" ], [ "Irina Sergienya" ], [ "anonymous reviewer 4683" ] ], "structured_content_str": [ "{\"reply\": \"Dear reviewer,\\nThank you for your review and comments!\\n\\nYour idea to initialize the embeddings with zero is interesting, but in the word2vec setup we are using embeddings initialized as zero remain zero vectors during training. We confirmed this in an experiment in which we\\ninitialized the vectors as zero vectors.\\n\\nWe would like to ask whether this is the setting you were suggesting or did you have something else in mind?\\n\\nRegards, Irina\"}", "{\"review\": \"We uploaded a new version of paper with the Related work section, where a relevant work of Hai Son Le et al. (EMNLP2010) is discussed.\\n\\nThanks to the reviewer who pointed out this paper.\\n\\nRegards, Irina\"}", "{\"title\": \"review of Distributional Models and Deep Learning Embeddings: Combining the Best of Both Worlds\", \"review\": \"This paper investigates the use of so-called distributional models of words to improve the quality of learned word embeddings. These models are essentially high-dimensional vector representations of words, which indicate with what other words in the vocabulary has a word cooccurred (within some context). Since word embeddings can be understood as a linear lower-dimensional projection of the basic one-hot representations of words, these distributional model representations can be exploited by trainable word embeddings algorithms by simply concatenating them to the basic 'one-hot' representation or by replacing the one-hot representation by the distributed one for infrequent words. This paper shows that this approach can improve the quality of the word embeddings, as measured by an average correlation with human judgement of word similarity.\\n\\nOverall, I think this is a good workshop paper. Learning good words embeddings is an important topic of research, and this paper describes a nice, fairly simple trick to improve word embeddings. \\n\\nOne motivation for using it seems to be that infrequent words will not be able to move far enough away from their random initialization, so I wonder whether initializing all these word embeddings to 0 instead might have been enough to solve this problem. I think this would be a good baseline to compare with. \\n\\nI would also have liked to see whether any gains are obtained in a real NLP task, i.e. confirm that better correlation with human judgement is actually giving us something in practice.\\n\\nBut these are overall minor problems, for a workshop paper.\"}", "{\"reply\": \"This is indeed what I had in mind... And indeed, in the word2vec setup, they'll stay at 0 (because all gradients will be zero, in the word2vec parametrization, which isn't necessarily the case in, say, a neural network language model)... 
I had not realized this.\"}", "{\"reply\": \"Dear reviewer,\\nPlease, find our reply for your comments below.\"}", "{\"review\": \"Dear reviewer,\\nThank you for your review and comments!\\n\\nIndeed, there are systems with performance on MEN and WordSim higher than our numbers. Right now we are running experiments on a bigger training corpus. The preliminary numbers are much closer to the state-of-the-art performance. We will present results at the conference if\\nthe paper gets accepted.\\n\\nThank you for pointing out the paper of Hai Son Le et al, which is very relevant. They propose three initialization schemes. Two of them, re-initialization and iterative re-initialization, use vectors from prediction space to initialize the context space during training. This approach is both more complex and less efficient than ours. \\n\\nThe third initialization scheme, one vector initialization, initializes all word embeddings with the same random vector: this helps to keep rare words close to each other as an outcome of rare updates. However, this approach is also less efficient than ours since the initial embedding is much denser than in our approach. We are planning to run experiments with this approach and should be able to present results at the conference if the paper is accepted.\\n\\nRegards, Irina\"}", "{\"title\": \"review of Distributional Models and Deep Learning Embeddings: Combining the Best of Both Worlds\", \"review\": \"This paper proposes to derive distributional representation for words\\nthat can be used to improve word embeddings. Distributional vectors\\ncan present to the neural network that learns embeddings, instead of\\npresenting one-hot vectors. One motivation is that distributional\\nrepresentation could make the learning task easier for rare words. The\\nauthors apply this approach only to rare words since word embeddings\\nfor frequent words is frequently updated and then can be considered as\\nsatisfactory.\\n\\nThe idea is nice. However, my main concern is about the experimental\\npart. I don't understand the results. For the 'WordSim' task, the\\npaper of E. Huang (ACL2012) exhibits spearman correlation above 50. So\\nwether the results are incredibly below the baseline systems used in\\n2012 (and thus, what can we conclude from this paper since it is\\nstraightforward to improve a very poor system), or this need\\nclarification. Anyway, baseline exists and should be mentioned.\\n\\nA minor comment about the last paragraph of the introduction. The\\npaper (Hai Son Le et al. at EMNLP2010) addressed the issue of the\\ninitialization of word embeddings and this seems to perform quite well\\nespecially for rare words.\"}" ] }
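The review describes the combination as giving the embedding learner a one-hot input for frequent words and a high-dimensional distributional (co-occurrence) vector for infrequent words, either concatenated with or replacing the one-hot vector. The sketch below builds such input vectors on a toy corpus using the replacement variant; the frequency threshold, context window, and normalization are illustrative choices.

from collections import Counter

import numpy as np

def build_inputs(corpus, freq_threshold=2, window=2):
    # One-hot vectors for frequent words, normalized co-occurrence (distributional)
    # vectors for rare words; thresholds and normalization are illustrative.
    vocab = sorted({w for sent in corpus for w in sent})
    idx = {w: i for i, w in enumerate(vocab)}
    V = len(vocab)
    counts = Counter(w for sent in corpus for w in sent)
    cooc = np.zeros((V, V))
    for sent in corpus:
        for i, w in enumerate(sent):
            for j in range(max(0, i - window), min(len(sent), i + window + 1)):
                if j != i:
                    cooc[idx[w], idx[sent[j]]] += 1
    inputs = {}
    for w in vocab:
        if counts[w] >= freq_threshold:
            vec = np.zeros(V)
            vec[idx[w]] = 1.0                      # frequent word: standard one-hot
        else:
            row = cooc[idx[w]]
            vec = row / max(row.sum(), 1.0)        # rare word: distributional vector
        inputs[w] = vec
    return vocab, inputs

if __name__ == "__main__":
    corpus = [s.split() for s in ["the cat sat", "the dog sat", "a wombat sat quietly"]]
    vocab, inputs = build_inputs(corpus)
    print(vocab)
    print("wombat ->", np.round(inputs["wombat"], 2))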
8yYIVxPr6xHht
Deep learning for class-generic object detection
[ "Brody Huval", "Adam Coates", "Andrew Ng" ]
We investigate the use of deep neural networks for the task of class-generic object detection. We show that neural networks originally designed for image recognition can be trained to detect objects within images, regardless of their class, including objects for which no bounding box labels have been provided. In addition, we show that bounding box labels yield a 1% performance increase on the ImageNet recognition challenge.
[ "object detection", "objects", "deep learning", "use", "deep neural networks", "task", "neural networks", "image recognition", "images", "class" ]
submitted, no decision
https://openreview.net/pdf?id=8yYIVxPr6xHht
https://openreview.net/forum?id=8yYIVxPr6xHht
ICLR.cc/2014/workshop
2014
{ "note_id": [ "Wb6A9kK0nsWYL", "ZZw4ZaGzdyt0c" ], "note_type": [ "review", "review" ], "note_created": [ 1392165720000, 1390860900000 ], "note_signatures": [ [ "anonymous reviewer 22fb" ], [ "anonymous reviewer 7d29" ] ], "structured_content_str": [ "{\"title\": \"review of Deep learning for class-generic object detection\", \"review\": \"This is an interesting paper that shows that using a pretrained object classification network's weights to initialize an object localization network leads to improved performance of the latter.\\n\\nOf course the comparisons are made for a fixed (but unspecified) architecture. It seems that the architecture chosen is that of an object classifier, and it is not clear at all that this is a good architecture for an object localizer. (In particular pooling is sensible for the former but makes less sense for the latter). Thus it's not really clear that the gains are that useful - perhaps it would just be better to design a network for the task in hand. That is not addressed in this paper at all.\\n\\nThe writing itself is clear, but there are a several significant omissions in the paper. \\n\\nTable 1 caption is unclear, and seems broken.\\n\\nSection 4 'multiple GPUs' could be elucidated a little more? \\n\\nYou say the output for your bounding box training is discretized, but don't say how. It's not even clear if you output one-hot in the cross-product space or discretize each dimension separately. \\nWhile your main result is a relative improvement, this is still a fundamental omission that must be rectified. It's hard to interpret your results without knowing the resolution or the scoring system that you use. \\n\\nYou don't explain the parameters of the Gaussian used for the labels, nor what you mean by 'multiple bounding boxes' - You've not described that possibility.\\nAre 7% of objects labelled with bounding boxes, or only 7% of images? \\nWhere you have multiple objects in an image are you guaranteed to have the bounding boxes for all of them if you have any of them? (I presume not, since 'what is an object' is ambiguous - but you don't discuss this ambiguity).\\n\\nFigure 1 legend says 'without bounding boxes' but I presume you mean 'excluding the held-out set' - you train with the other 900 classes' bounding boxes? \\n\\nYou don't discuss the details of training at all, (not even the number of nodes in the hidden layers or convolution windows, never mind learning rates etc) nor make it clear just how much is the same as the other references (Coates / Krizhevsky) you cite. ('Similar to' Krizhevsky is all you say)\\nIn particular you need to explain how you transition from pretraining to 'training' - initialization of the softmax weights is the same as starting from scratch? What about: Learning rate / momentum & their schedules ? \\n\\nBibliography is broken everywhere e.g. with commas separating first and last names as well as authors' names from each other. \\nFirst reference has no source.\", \"junk_like_this\": \"'In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on. .... 2009'\\n\\nThe first page indicates that this paper is in Proc ICML and in JMLR. I presume this is just a LaTeX oversight?\"}", "{\"title\": \"review of Deep learning for class-generic object detection\", \"review\": \"This abstract investigates the idea of learning an object detection model that does not depend on the class, hence being able to generalize to any number of classes, including classes unknown at training time. 
The idea is compelling but the paper is short on details and results. We don't know how many bounding boxes are used in the softmax, we don't have the details about the Gaussian used to smooth the targets of the softmax (how picky is it?); The paper does not compare to similar approaches (like Szegedy et al 2013). I feel like this is an interesting idea relevant for the workshop track and hope it will be improved later with more details.\"}" ] }
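Both reviews above ask how the bounding-box outputs are discretized and how the Gaussian smoothing of the labels works, details the paper leaves out. The sketch below shows one plausible construction — discretizing each coordinate independently and replacing the one-hot target with a Gaussian-smoothed distribution; the bin count, sigma, and per-coordinate treatment are assumptions, not the authors' actual setup.

```python
import numpy as np

def soft_target(coord, n_bins=50, sigma=1.5):
    """Discretize a normalized box coordinate in [0, 1] into n_bins classes
    and return a Gaussian-smoothed target instead of a hard one-hot label.
    n_bins and sigma are illustrative guesses, not values from the paper."""
    target_bin = min(int(coord * n_bins), n_bins - 1)
    dist = np.arange(n_bins) - target_bin               # distance (in bins) from the true bin
    weights = np.exp(-0.5 * (dist / sigma) ** 2)        # Gaussian bump around the true bin
    return weights / weights.sum()                      # normalized soft label

# A box (x1, y1, x2, y2) then becomes four independent soft targets,
# each trained against its own softmax output head.
soft_labels = [soft_target(c) for c in (0.12, 0.30, 0.55, 0.80)]
```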
VNYDEas7tlE75
Occupancy Detection in Vehicles Using Fisher Vector Image Representation
[ "Yusuf Artan", "Peter Paul" ]
Due to the high volume of traffic on modern roadways, transportation agencies have proposed High Occupancy Vehicle (HOV) lanes and High Occupancy Tolling (HOT) lanes to promote car pooling. However, enforcement of the rules of these lanes is currently performed by roadside enforcement officers using visual observation. Manual roadside enforcement is known to be inefficient, costly, potentially dangerous, and ultimately ineffective. Violation rates up to 50%-80% have been reported, while manual enforcement rates of less than 10% are typical. Therefore, there is a need for automated vehicle occupancy detection to support HOV/HOT lane enforcement. A key component of determining vehicle occupancy is to determine whether or not the vehicle's front passenger seat is occupied. In this paper, we examine two methods of determining vehicle front seat occupancy using a near infrared (NIR) camera system pointed at the vehicle's front windshield. The first method examines a state-of-the-art deformable part model (DPM) based face detection system that is robust to facial pose. The second method examines state-of-the-art local aggregation based image classification using bag-of-visual-words (BOW) and Fisher vectors (FV). A dataset of 3000 images was collected on a public roadway and is used to perform the comparison. From these experiments it is clear that the image classification approach is superior for this problem.
[ "vehicles", "lanes", "vehicle", "image classification", "occupancy detection", "high volume", "traffic", "modern roadways" ]
submitted, no decision
https://openreview.net/pdf?id=VNYDEas7tlE75
https://openreview.net/forum?id=VNYDEas7tlE75
ICLR.cc/2014/workshop
2014
{ "note_id": [ "Q59R5LrpPySae", "AAN9jhVxFznx6" ], "note_type": [ "review", "review" ], "note_created": [ 1391479860000, 1391833620000 ], "note_signatures": [ [ "anonymous reviewer fca6" ], [ "anonymous reviewer bde7" ] ], "structured_content_str": [ "{\"title\": \"review of Occupancy Detection in Vehicles Using Fisher Vector Image Representation\", \"review\": \"This paper addresses the problem of detecting people in the front passenger seat of a car as a step along the way to help enforce carpooling rules. The paper presents experiments comparing different approaches. In particular the paper explores solving the problem through using: a) Zhu and Ramanan's deformable part model for face detection, and using the detections for the final result, b) the use of Fisher Vectors in the sense of Jaakkola and Haussler, (1998), c) a variant of the widely used bag of visual words (BoW) representation, and d) a technique referred to as Vectors of Locally Aggregated Descriptors (VLAD). Techniques b), c) and d) use a traditional SVM approach for the final classification. The Fisher vector technique appears to have better performance compared to the other methods explored in this paper.\\n\\nThe paper looks at a concrete practical application and explores ways for solving the problem that are fairly in line with current practices in computer vision. The face detection based comparison is certainly important and interesting. However, it is not clear from the current manuscript exactly how the model of Zhu and Ramanan was trained for the experiments here. Was the face detector trained on the training set defined here? Or, was the face detector used with the pre-learned parameters as distributed on their website? The performance could be dramatically different depending on how the method was trained. The size of the test train splits for the other experiments are also not given and the issue of SVM hyper-parameter tuning is not discussed.\\n\\nThe Fisher feature vector technique presented here could be viewed as a form of representation learning, but this is really more of a vision application and method comparison paper as opposed to a paper deeply exploring aspects of representation learning. The paper also has some language problems.\"}", "{\"title\": \"review of Occupancy Detection in Vehicles Using Fisher Vector Image Representation\", \"review\": \"This paper is about classifying the presence or absence of a person on the front seat of a car. The main point of the paper is to compare the approach of using face detection and directly classifying the front seat image. They improve accuracy from 92% to 96% with full image classification. It also compares different aggregation methods on top of hand-designed features like SIFT: bag of words, fisher vector or VLAD, and shows better results using Fisher vectors. Other work has also been using more than just face detection but not the entire window itself. The novelty of this work is very limited and is mostly in the trivial idea of using the entire passenger image for classification.\", \"pros\": [\"improving accuracy to 96% over face approach (92%).\"], \"cons\": [\"no novelty.\", \"no representation learning.\"]}" ] }
3QxgDBPQxIp7y9wltPq9
Fully Convolutional Neural Network for Body Part Segmentation
[ "David Frank", "Richard Kelley", "David Feil-Seifer" ]
This paper presents the foundation of a new system for human body segmentation. It is based on a Fully Convolutional Neural Network that uses depth images as input and produces a per-pixel labeling of the image where each pixel is labeled as a body segment of interest or as non-person. The training data are fully synthetic, which allows large amounts of data to be generated in a relatively short period of time. By using a GPU-accelerated implementation of the convolutional neural network, the system is capable of segmenting an image in 8.5 milliseconds. This work will form the basis for a more robust system in the future that will be suitable for finding pose skeletons in more cluttered environments.
[ "convolutional nerual network", "body part segmentation", "convolutional neural network", "image", "foundation", "new system", "human body segmentation", "depth images", "input", "labeling" ]
https://openreview.net/pdf?id=3QxgDBPQxIp7y9wltPq9
https://openreview.net/forum?id=3QxgDBPQxIp7y9wltPq9
ICLR.cc/2016/workshop
2016
{ "note_id": [ "ZY9A3BkDNH5Pk8ELfEPX", "zvwrjqVAptM8kw3ZinoB", "3QxWx1q1yTp7y9wltP2P" ], "note_type": [ "review", "review", "review" ], "note_created": [ 1457681973068, 1457639218344, 1456960012691 ], "note_signatures": [ [ "ICLR.cc/2016/workshop/paper/146/reviewer/11" ], [ "ICLR.cc/2016/workshop/paper/146/reviewer/12" ], [ "ICLR.cc/2016/workshop/paper/146/reviewer/10" ] ], "structured_content_str": [ "{\"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"The paper describes a method for segmenting body parts in depth images of humans in real time. It uses an existing neural network architecture and a simple loss that has been used before for segmentation tasks. The model is trained on synthetic data and applied to Kinect camera images.\\n\\nThe paper is quite straightforward as far as segmentation work goes. I believe none of the components are novel, the only potential novelty is the application domain (segmenting depth images rather than RGB images). More segmentation work on RGB images could be cited, for example: \\nVijay Badrinarayanan, Alex Kendall and Roberto Cipolla \\\"SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation.\\\", which uses a more advanced network architecture for a seemingly harder task.\", \"there_are_strong_assumptions_in_the_approach\": \"a single body type was used for training and it's placed at a fixed position relative to the camera. At test time, it is assumed that it is possible to accurately remove the background in scenes, although I am not convinced that this is the case for cluttered scenes.\\n\\nThe experimental results are lacking numerical analysis on real data-- only a single result for eyeballing is presented. It would have been helpful to show more results along with the failures of the model in more detail. On synthetic data, it would have been helpful to show how accuracy varies with an increasing amount of noise. Most body parts other than torso have accuracy in the 60-70% range which is not necessarily very high. Finally, if the goal is to estimate body skeletons, why not directly regress to the 3D joint locations, like \\\"DeepPose: Human Pose Estimation via Deep Neural Networks\\\" by Alexander Toshev, Christian Szegedy. \\n\\nFor an application paper, I would have expected a little more in depth experimental analysis. Right now it's a bit thin even for a workshop submission.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"This works at segmenting body parts from depth images using a convolutional network. The authors propose a modification of an existing architecture designed to fit on a \\\"consumer grade GPU.\\\" The authors train this network on synthetically generate data and applied on images captured by the Kinect camera.\\n\\nMost of the evaluation was done on synthetic data, and no effort was made to establish the quality of the segmentation of real images except for visually inspecting the results on a few such images.\\n\\nOn the plus side, I think the results seem very reasonable on the synthetic dataset. 
On the down-side, I am very worried about the transfer capabilities which were not evaluated except by eyeballing, and the quirks in the training dataset (i.e., a single 3D model was used, arbitrary transforms were performed on the limbs, which may or may not reflect plausible positions for humans).\\n\\nI see a lot of potential in this work, but it simply seems much too early to publish even as a workshop paper. I would have hoped for a more detailed explanation of the network and maybe some example images where the network fails to segment correctly the limbs + additional commentary on why is this happening. In addition, it would have made a lot of sense to also discuss in slightly more detail of the choice to put the 3D subject at a fixed distance from the camera, and maybe also explain whether the focal length of the 3D sensor has any effect on the segmentations produced by this algorithm.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"This work aims to segment human body parts in depth images in real-time, using the approach of training a convolutional network on synthetic data. This is a nice, simple approach and obtains what looks like promising initial results. Significant gaps remain in order to turn this in a working system, most notably variation in body shape; however, the authors acknowledge this and it looks good for a preliminary result.\\n\\nThe paper seems a bit on the light side, even for an extended abstract, and I think it could be fleshed out a some more. One possible suggestion, what do the results look like using the current system for a downstream joint or skeleton estimation? Also, while accuracy numbers are reported, it is hard to place them in context without some baseline comparisons (maybe Shotton et al.?)\\n\\nOverall, this is a simple system with what looks like good preliminary results, but I think would be stronger if the presentation in the paper were a little further developed.\", \"a_few_additional_comments\": [\"\\\"final layer of this network was shaped such that it had the same height and width as the input\\\": Do the max-pooling layers downsample the resolution of the output? Fig 2 shows that the network output is upsampled -- is the upsampled result what is reshaped? How much spatial downsampling is there within the network? Additional details of the network structure and sizes would be good to include.\", \"\\\"structure of the network was similar to Long ... skip layers were also not inculded\\\": The skip-layers combining scales by adding is, I think, actually the largest defining feature of Long et al., so I'm not sure I'd call this a \\\"similar\\\" network. It is a ConvNet that is simple and geared to the task, though.\", \"It would be good to show comparisons with other systems, at least Shotton et al.\", \"Relevant reference re: pose estimation with convnets: Tompson et al., \\\"Joint Training of a Convolutional Network and a Graphical Model for Human Pose Estimation\\\", NIPS 2014.\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
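The system described above produces a per-pixel labeling of depth images with a fully convolutional network. A generic sketch of the per-pixel softmax cross-entropy loss such a network would be trained with is given below; it is implied by the abstract rather than taken from the paper, and the array shapes are assumptions.

```python
import numpy as np

def per_pixel_cross_entropy(logits, labels):
    """Per-pixel softmax cross-entropy for a segmentation map.
    logits: (H, W, C) network outputs; labels: (H, W) integer part ids,
    with one id reserved for 'non-person'."""
    logits = logits - logits.max(axis=-1, keepdims=True)                  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
    h, w = labels.shape
    picked = log_probs[np.arange(h)[:, None], np.arange(w)[None, :], labels]
    return -picked.mean()
```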
VAVqG11WmSx0Wk76TAzp
Learning to SMILE(S)
[ "Stanisław Jastrzębski", "Damian Leśniak", "Wojciech Marian Czarnecki" ]
This paper shows how one can directly apply natural language processing (NLP) methods to classification problems in cheminformatics. The connection between these seemingly separate fields is shown by considering the standard textual representation of compounds, SMILES. The problem of activity prediction against a target protein is considered, which is a crucial part of the computer-aided drug design process. The conducted experiments show that in this way one can not only outperform state-of-the-art results obtained with hand-crafted representations but also gain direct structural insights into the way decisions are made.
[ "natural language processing", "nlp", "methods", "problems", "cheminformatics", "connection", "separate fields", "standard textual representation", "compound", "smiles" ]
https://openreview.net/pdf?id=VAVqG11WmSx0Wk76TAzp
https://openreview.net/forum?id=VAVqG11WmSx0Wk76TAzp
ICLR.cc/2016/workshop
2016
{ "note_id": [ "yoW88O3o6tr682gwszyW", "k80qQJLKLfOYKX7ji4Q3", "q7kg0385MS8LEkD3t7NQ" ], "note_type": [ "review", "review", "review" ], "note_created": [ 1458858094330, 1457038462114, 1457637290140 ], "note_signatures": [ [ "ICLR.cc/2016/workshop/paper/173/reviewer/10" ], [ "ICLR.cc/2016/workshop/paper/173/reviewer/11" ], [ "ICLR.cc/2016/workshop/paper/173/reviewer/12" ] ], "structured_content_str": [ "{\"title\": \"Good work on applying sequence classification advancements to new applications\", \"rating\": \"7: Good paper, accept\", \"review\": \"This works shows that SMILES (walks across a graph of atomic connections) allows the advancements in text classification to be brought to cheminformatics with good results compared to using the latest hand-tuned features. The idea is not particularly novel but the results show how an unrelated area can fairly easily benefit from advancements in DL NLP. The intuition that localised features determine molecular binding seems like a great fit for sentiment analysis techniques.\\n\\nIt would be nice to show the molecule lengths and sizes of dataset in Table 1, I don't think these are mentioned anywhere. It would also be nice to try newer sequence prediction techniques such as LSTMs (possibly with pretraining).\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Good idea for using text models on string representations of molecules\", \"rating\": \"7: Good paper, accept\", \"review\": \"The idea is excellent, although somewhat obvious. The paper describes training character-based text models directly on SMILES files that encode chemical graphs as strings and then making predictions about the molecules.\", \"reasons_to_accept\": [\"good idea that works at least somewhat\"], \"reasons_to_reject\": [\"limited empirical evaluation\", \"Need to explain the SMILES format well enough that the data augmentation procedure is clear. How is the walk determined normally?\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"well done\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"review\": \"the analogy between CI and sentiment analysis is intriguing and potentially fruitful. the community will definitely appreciate this work. hopefully authors can increase data resources in follow-up work to further improve performance over classifiers with engineered features.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
mO9m5Rrm6tj1gPZ3UlOX
Contextual convolutional neural network filtering improves EM image segmentation
[ "Xundong Wu", "Yong Wu", "Ligia Toro", "Enrico Stefani" ]
We designed a contextual filtering algorithm for improving the quality of image segmentation. The algorithm was applied to the task of building Membrane Detection Probability Maps (MDPM) for segmenting electron microscopy (EM) images of brain tissues. To achieve this, we executed supervised training of a convolutional neural network to recover the ground-truth label of the masked-out center pixel from patches sampled from an unrefined MDPM. Through this training process the model learns the distribution of the segmentation ground-truth map. By applying this trained network over MDPMs we are able to integrate contextual information and obtain better spatial consistency in the high-level representation space. By iteratively applying this network over the MDPMs for multiple rounds, we were able to significantly improve the EM image segmentation results.
[ "mdpm", "mdpms", "able", "contextual filtering algorithm", "quality", "image segmentation", "algorithm" ]
https://openreview.net/pdf?id=mO9m5Rrm6tj1gPZ3UlOX
https://openreview.net/forum?id=mO9m5Rrm6tj1gPZ3UlOX
ICLR.cc/2016/workshop
2016
{ "note_id": [ "vlpO4kGWBu7OYLG5inyZ", "ANYym5MWWSNrwlgXCqMV", "D1VM0VvvqS5jEJ1zfERV" ], "note_type": [ "review", "review", "review" ], "note_created": [ 1458134889538, 1458057144427, 1458221343179 ], "note_signatures": [ [ "ICLR.cc/2016/workshop/paper/176/reviewer/11" ], [ "ICLR.cc/2016/workshop/paper/176/reviewer/10" ], [ "ICLR.cc/2016/workshop/paper/176/reviewer/12" ] ], "structured_content_str": [ "{\"title\": \"Review of Contextual convolutional neural network filtering improves EM image segmentation\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"This paper presents an iterative method to progressively clean up segmentation maps, by repeatedly applying a CNN to the output of the previous iteration. The novelty seems to lie in the application domain rather than in the network/model/training regime. The paper is quite unclearly written: I was unsure what the architecture of the network is, and how the iteration is applied. For the results, only 1 example is provided, which is insufficient, even for a workshop paper.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Review of CONTEXTUAL CONVOLUTIONAL NEURAL NETWORK FILTERING IMPROVES EM IMAGE SEGMENTATION\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"The description of the authors approach is not clear and would need to be re-writen. As far as I understand, they generate pixel probability maps to belong to the foreground/background that they give as input of the CNN of Ciceran et al. 2012 instead of the image patches, and iteratively use the obtained output to feed again Ciseran et al's CNN.\\n\\nHowever, I am not sure I fully understand what the authors really did because I don't see why they write \\u201cIt is also important to point out that training with the ground-truth map directly provided no benefit in improving the segmentation quality\\u201d The authors evaluate the results of their approach on the neuronal image dataset from the ISBI 2012 challenge.\\n\\nThe novely of the approach is not high but sufficient for a workshop submission and results are convincing, the major remaining problem remains the clarity of the paper.\", \"missing_related_work\": \"Turaga et al, Convolutional networks can learn to generate affinity graphs for image segmentation, 2010.\", \"minor\": \"abstract: no space before a dot by training -> train introduction of I-CNN: we don't understand in the first reading that it refers to the authors method proposition\\n\\nIn Jurrus et al. (2010); Pinheiro & Collobert (2013); Lee et al. (2015); Tu\\n\\n(2008); Tu & Bai (2010), they applied\\u2026 -> Jurrus et al. (2010); Pinheiro & Collobert (2013); Lee et al. (2015); Tu (2008); Tu & Bai (2010) applied \\u2026\\n\\nFigure1 -> Figure 1\\n3 SYSTEM DESCRIPTION AND RESULT -> 3 SYSTEM DESCRIPTION\", \"reference_list\": \"please remove the reference that are not cited in the text or add citations in the text.\\n\\nhao -> Hao\\nwater-shed -> watershed\", \"in_the_conclusion\": \"\\u201cthe new algorithm\\u201d -> the new procedure\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"The paper describes a method to clean up and improve boundary maps obtained with CNNs by processing them with additional CNNs. 
The reported results seem impressive (although, I have never worked on this application). Unfortunately, the description of the method is essentially lacking. From what I understood, the method is very similar to several previous works including:\\n\\nVolodymyr Mnih, Geoffrey E. Hinton:\\nLearning to Detect Roads in High-Resolution Aerial Images. ECCV (6) 2010 (not cited)\", \"a_seminal_paper_from_the_pre_deep_learning_era\": \"\", \"zhuowen_tu\": \"Auto-context and its application to high-level vision tasks. CVPR 2008 (cited)\\n\\nPerhaps, the detailization of the algorithm and the amount of novelty are not sufficient for acceptance to ICLR that focuses on new approaches and algorithms for learning representations. However I can imagine that venues/conferences/workshops for people working on membrane segmentation/connectomics would be interested.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
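The abstract above describes repeatedly filtering a membrane detection probability map (MDPM) with a network trained to re-predict a masked-out center pixel from its surrounding patch. A schematic of that refinement loop is sketched below; the 33x33 patch size, zero masking value, padding mode, and number of rounds are assumptions, and `context_net` stands in for the trained CNN.

```python
import numpy as np

def refine_mdpm(mdpm, context_net, rounds=3):
    """Iteratively filter a membrane detection probability map: each round,
    every pixel is re-predicted from a patch of the current map with the
    center pixel masked out, then the refined map replaces the old one."""
    for _ in range(rounds):
        padded = np.pad(mdpm, 16, mode="reflect")        # assumed 33x33 patches
        refined = np.empty_like(mdpm)
        for i in range(mdpm.shape[0]):
            for j in range(mdpm.shape[1]):
                patch = padded[i:i + 33, j:j + 33].copy()
                patch[16, 16] = 0.0                      # mask out the center pixel
                refined[i, j] = context_net(patch)       # membrane probability at (i, j)
        mdpm = refined
    return mdpm
```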
k80kn82ywfOYKX7ji42O
HARDWARE-FRIENDLY CONVOLUTIONAL NEURAL NETWORK WITH EVEN-NUMBER FILTER SIZE
[ "Song Yao", "Song Han", "Kaiyuan Guo", "Jianqiao Wangni", "Yu Wang" ]
Convolutional Neural Network (CNN) has led to great advances in computer vision. Various customized CNN accelerators on embedded FPGA or ASIC platforms have been designed to accelerate CNN and improve energy efficiency. However, the odd-number filter size in existing CNN models prevents hardware accelerators from having optimal efficiency. In this paper, we analyze the influence of filter size on CNN accelerator performance and show that even-number filter sizes are much more hardware-friendly and can ensure high bandwidth and resource utilization. Experimental results on MNIST and CIFAR-10 demonstrate that hardware-friendly even-kernel CNNs can reduce the FLOPs by 1.4x to 2x with comparable accuracy; with the same FLOPs, even kernels can achieve even higher accuracy than odd-sized kernels.
[ "filter size", "convolutional neural network", "cnn", "flops", "great advances", "computer vision", "embedded fpga", "asic platforms" ]
https://openreview.net/pdf?id=k80kn82ywfOYKX7ji42O
https://openreview.net/forum?id=k80kn82ywfOYKX7ji42O
ICLR.cc/2016/workshop
2016
{ "note_id": [ "nx937z6nMu7lP3z2ioNm", "5QzBR8G84FZgXpo7i324" ], "note_type": [ "review", "review" ], "note_created": [ 1456589526138, 1457552302912 ], "note_signatures": [ [ "~Lingxi_Xie1" ], [ "ICLR.cc/2016/workshop/paper/122/reviewer/11" ] ], "structured_content_str": [ "{\"title\": \"This paper provides an interesting and instructive discussion to the industrial community\", \"rating\": \"7: Good paper, accept\", \"review\": \"In this paper, the authors present a fact that neural networks may become less efficient when odd-sized convolution kernels (like 3x3, 5x5 kernels) are used. The main consideration is from the implementation of the inner-product operation in hardware.\", \"figure_1_is_quite_intuitive\": \"one can catch the main idea by taking a glance at it.\", \"experimental_results_are_acceptable\": \"with smaller kernels, the recognition performance is comparable while the FLOPs are effectively reduced. It would be better if this idea is verified on some larger experiments such as SVHN and ImageNet.\\n\\nMinor things. (1) The mathematical notations can be more formal: in representing the network structure (20Conv5 ...), please use \\\\rm{Conv} or \\\\mathrm{Conv}, also please replace all the 'x' to '\\\\times' (in 'Conclusion'). (2) Please magnify the font size in both figures, until one can read it clearly on a printed version of the paper. (3) The fonts of digits in Figure 2(a) and Figure 2(b) are different, which is weird.\\n\\nIn conclusion, this is a good workshop paper that tells the community a simple yet useful fact.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"This paper brings attention to the fact that even-number filter sizes can maximize the efficacy of CNN accelerators.\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"The authors bring attention to the fact that odd-number filter sizes waste computational resources; even-number filter sizes can maximize the efficacy of CNN accelerators. They are able to reduce the complexity of LeNet and VGG11-Nagadomi network with comparable performance in accuracy.\\n\\nFigure 1 is very good to understand what the paper is about. \\n\\nFigure 2, on the other hand, is hard to understand; caption doesn't provide enough information. \\nFor Figure 2a, why are there two sizes for each test error and normalized complexity bars? If they are the size of the first and second layer filters, why do 8x8, 4x4 filters have less complexity compared to 4x4, 4x4 filters?\\nIn Figure 2b there are two bar sets for 2x2 filters, later in the text it appears that one uses more feature maps, this information should be at least in the caption if not in the chart.\\n\\nOverall, the idea is useful and good to keep in mind.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
91EowxONgIkRlNvXUVog
Lookahead Convolution Layer for Unidirectional Recurrent Neural Networks
[ "Chong Wang", "Dani Yogatama", "Adam Coates", "Tony Han", "Awni Hannun", "Bo Xiao" ]
Recurrent neural networks (RNNs) have been shown to be very effective for many sequential prediction problems such as speech recognition, machine translation, part-of-speech tagging, and others. The best variant is typically a bidirectional RNN that learns a representation of a sequence by performing a forward and a backward pass through the entire sequence. However, unlike unidirectional RNNs, bidirectional RNNs are challenging to deploy in an online and low-latency setting (e.g., in a speech recognition system), because they need to see an entire sequence before making a prediction. We introduce a lookahead convolution layer that incorporates information from future subsequences in a computationally efficient manner to improve unidirectional recurrent neural networks. We evaluate our method on speech recognition tasks for two languages---English and Chinese. Our experiments show that the proposed method outperforms vanilla unidirectional RNNs and is competitive with bidirectional RNNs in terms of character and word error rates.
[ "lookahead convolution layer", "entire sequence", "bidirectional rnns", "convolution layer", "rnns", "effective", "speech recognition" ]
https://openreview.net/pdf?id=91EowxONgIkRlNvXUVog
https://openreview.net/forum?id=91EowxONgIkRlNvXUVog
ICLR.cc/2016/workshop
2016
{ "note_id": [ "4QygYX3XwhBYD9yOFqMA", "vlpOAAvVMh7OYLG5inyv", "K1VM5mjK2C28XMlNCVGP" ], "note_type": [ "review", "comment", "review" ], "note_created": [ 1458068507740, 1458150880676, 1458066643770 ], "note_signatures": [ [ "~Navdeep_Jaitly1" ], [ "~Dani_Yogatama1" ], [ "ICLR.cc/2016/workshop/paper/125/reviewer/11" ] ], "structured_content_str": [ "{\"title\": \"Simple idea, some details are unclear that make it difficult to assess gains.\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"Instead of computing the output for a unidirectional RNN at time point t using only the hidden state of the layer below at the same time, the paper proposes to use point-wise multiplication and addition from future states of a unidirectional RNN to compute the hidden states at the next layer (this is akin to a separate convolution on each of the feature dimensions). This, the authors argue gets it closer to bidirectional RNNs.\\n\\nThe idea is simple, and the paper seems to show results that there are gains from using the approach, but important details are missing that make it hard to judge whether the gains come from the model or not. \\n\\nSpecifically, the authors say on page that \\\"The next five layers are either all unidirectional (forward) or all bidirectional recurrent layers\\\" From this I would assume that row 1 of the paper is all unidirectional, and row 3 is all bidirectional, while row 2 is all unidirectional, except for the last layer which is a \\\"look-ahead convolution\\\" If that's the case the results are good. \\n\\nHowever, the next lines \\\"We also compare with two baselines constructed by replacing the second-to-last layer with either a unidirectional recurrent layer or bidirectional recurrent layer\\\", make we wonder if this is really the case; the statement leaves open the possibility that Row 1 is bidirectional all the way, and then unidirectional, followed by the softmax, while Row 2 is bidirectional all the way and then a look-ahead convolutional layer etc... this result would be less convincing since it does get bidirectional inputs to the top layer..\\n\\nAn obvious comparison would have been unidirectional all the way, look-ahead convolutional all the way and bidirectional all the way. I'm surprised this isn't the one that is offered. 
And if it is indeed the one that is offered, the paper should writh the model section in such a way that its clearer.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Response to review 12\", \"comment\": \"Thank you for your helpful comments.\\nWe would like to clarify that for the results in Table 1, row 1 is all unidirectional, row 3 is all bidirectional, and row 2 is all unidirectional except for the last layer.\\nThank you for your suggestion of an additional type of networks (all lookahead convolution), we will consider this.\\nWe note that this network architecture would introduce additional delays for deep networks and large tau compared to our proposed architecture (all unidirectional except for the last layer).\\nSince each lookahead layer needs to wait tau steps, we can only compute the first output after waiting tau + (tau-1)(depth-2) steps (instead of tau steps).\"}", "{\"title\": \"Simple and useful concept, clear writeup, limited experiments\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"A clear description of the so called \\\"convolutional lookahead\\\" for RNNs in order to incorporate small windows of future context information in a similar fashion to bidirectional RNN, but in a way amenable to streaming decode.\\n\\nThe primary drawback of this paper is the limited experimental section - it would have been great to see more comparison over various settings of `tau`, ideally showing convergence to the full bidirectional RNN solution with larger and larger settings - the authors mention other experiments (\\\"increasing future context size did not close the gap\\\", conclusion), but fail to show them here. One other experiment of interest would be to see the performance limitations of only using the convolutional lookahead, either by making the network use a single recurrent layer (bidirectional vs lookahead) in a fashion similar to Deep Speech 1, or making *all* layers use convolutional lookahead. Also showing the experiments in which \\\"using a regular\\nconvolution layer with multiple filters resulted in poor performance\\\" due to overfitting would be useful - perhaps the gap between them is due to capacity limitations in the lookahead?\\n\\nAdditionally, the paper mentions \\\"We note that much better performance can be\\nobtained for both datasets by using a more powerful language model or more training data. We have\\nobserved that in both cases the improvements from the lookahead convolution layer are consistent\\nwith the smaller scale experiments shown here.\\\" - it would be good to actually *see* these experiments in a table or description, rather than an offhand comment.\\n\\nMore experiments are always interesting, and actually showing the experiments mentioned in passing in the text would be even better, but the paper as it stands is already a \\\"minimum viable paper\\\" for workshop purposes. It clearly displays a particular technique, its uses, and some drawbacks and performance issues in application. Some of the mentioned experiments above are also described as \\\"future work\\\", so it is clear the authors know these are interesting directions of exploration, and ideally a subset of those results can make the workshop paper.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
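The lookahead convolution discussed above computes each output feature at time t as a pointwise weighted sum of the same feature over a small window of future timesteps. A minimal sketch of that operation is given below; zero-padding at the end of the sequence and the exact parameter shapes are our assumptions rather than details confirmed by the paper.

```python
import numpy as np

def lookahead_layer(h, weights):
    """Row-wise lookahead convolution: output feature d at time t is
    sum_j weights[j, d] * h[t + j, d] for j = 0..tau, i.e. a per-feature
    weighted sum over a short window of future timesteps.
    h: (T, D) unidirectional RNN outputs; weights: (tau + 1, D)."""
    t_len, d = h.shape
    tau = weights.shape[0] - 1
    h_pad = np.vstack([h, np.zeros((tau, d))])           # assume zero future context past T
    out = np.zeros_like(h)
    for j in range(tau + 1):
        out += weights[j] * h_pad[j:j + t_len]           # pointwise multiply-and-add
    return out

# At step t the layer only needs tau future frames, so decoding latency stays
# bounded, unlike a bidirectional RNN that must see the whole utterance.
```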
jZ9WrEWPmsnlBG2XfGLl
Coverage-based Neural Machine Translation
[ "Zhaopeng Tu", "Zhengdong Lu", "Yang Liu", "Xiaohua Liu", "Hang Li" ]
The attention mechanism advanced state-of-the-art neural machine translation (NMT) by jointly learning to align and translate. However, attentional NMT ignores past alignment information, which leads to over-translation and under-translation problems. In response to this problem, we maintain a coverage vector to keep track of the attention history. The coverage vector is fed to the attention model to help adjust future attention, which guides NMT to pay more attention to the untranslated source words. Experiments show that coverage-based NMT significantly improves both translation and alignment quality over NMT without coverage.
[ "nmt", "neural machine translation", "coverage vector", "problems", "response", "problem", "track", "attention history" ]
https://openreview.net/pdf?id=jZ9WrEWPmsnlBG2XfGLl
https://openreview.net/forum?id=jZ9WrEWPmsnlBG2XfGLl
ICLR.cc/2016/workshop
2016
{ "note_id": [ "ROVpzJnpYivnM0J1Ipq5", "K1VMqRGvJu28XMlNCVoV" ], "note_type": [ "review", "review" ], "note_created": [ 1457651522538, 1458138602048 ], "note_signatures": [ [ "ICLR.cc/2016/workshop/paper/15/reviewer/11" ], [ "ICLR.cc/2016/workshop/paper/15/reviewer/12" ] ], "structured_content_str": [ "{\"title\": \"interesting ideas and results for neural MT, but very difficult to understand and follow. I suspect this is due to hasty and overly-aggressive compression of the original paper to this 3-page format.\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"The paper is about introducing a notion of (soft) source-side coverage into neural MT models. The idea makes sense and is shown to produce reasonable gains in BLEU.\\n\\nThis version of the paper is extremely difficult to understand. I had to google for the uncompressed arxiv version to get any kind of confidence that I understood the paper. I would recommend that people read the original paper -- it is quite interesting. I think the authors should have tried to actually write an extended abstract that conveys the key points, rather than trying to fit the entire formal description of their approach into the 3-page format.\", \"below_are_just_a_few_of_the_notational_issues_in_this_version\": \"$auxs$ --> maybe $\\\\psi$?\\n\\n\\\\alpha_{i,j} in Eq. (1) is not defined.\\n\\n\\\\phi(h_j) is used in the equation, but then \\\\phi_i(h_j) is defined immediately thereafter. Which should it be?\\n\\nWhat is the \\\"decoding state\\\" s_i? This is not defined in the paper.\\n\\nThe equation in Section 2.1 uses \\\\alpha_{i,j} but below that the authors write \\\"Here we only employ \\\\alpha_{i-1}...\\\" -- this seems to be a mismatch. Or if it's not a mismatch, I don't understand what it means.\\n\\n\\nThere is also no description of the experimental setup -- only some tables and plots are shown. I think the work is interesting and compelling but I am hesitant to recommend acceptance of this paper as an ICLR workshop paper. I would prefer that the authors submit this as a conference paper to another venue, like ACL, CoNLL, EMNLP, or COLING. This paper would be a good fit for one of these NLP conferences.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"In principle promising idea to add coverage information to an attention-based neural MT system, but paper is uncomprehensible as it is\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"This paper extends attention-based neural machine translation (NNMT) with a coverage model. In the standard attention model, for each target word a subset of relevant source words are \\\"selected\\\" as context vector. In principle, it can happen that source words are used several times or not all. Therefore, the introduction of the notion of source word coverage in an NNMT attention model is an interesting idea (a coverage model is used in standard PB SMT).\\n\\nI don't agree with the statement that learning the coverage information by back-prop is potentially weak and one should add \\\"linguistic information\\\". In that case, one could question the whole idea to do NNMT - in such a model every \\\"decision\\\" is purely statistical without any linguistic information.\\n\\nThe description of the used coverage model itself is very complicated to understand. 
Given the space constraints, it seems a bad idea to first present a general model (Eqn 1) and than to use a much simpler one, which is insufficiently explained. I wasn't able to understand how the coverage model was calculated, how the fertility probabilities were obtained (Eqn 2+3), etc.\\n\\nFinally, the results are not analyzed - the authors just provide two figures and a table. There is no information on what data the system was trained on nor the actual language pair !! Also, I'm surprised that the BLEU score of the NNMT system decreases substantially with the length of the sentences (Figure 1 left). This is in contrast to results by Bahdanau et al. who show that the attention model does prevent this decrease (plot RNN search-50 in Figure 2 in their paper) ! This raises some doubts on the experimental results ...\\n\\nwhy the attention coverage vector beta is uniformly initialized ? I expect it to be zero (nothing covered)\\n - you use notation without defining it, e.g.\\n - what is d in \\\".. is a vector (d>1) ..\\\"\\n - s_i and h_j ; a small figure would be very helpful !\\n\\nSeveral sentences are difficult to understand and I spotted a couple of stupid errors (e.g. \\\"predefined constantto denoting\\\"). Please proof-read the paper !\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
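The model discussed above feeds a running coverage of each source word back into the attention scorer so that already-attended words receive less future attention. Since the reviews note the compressed description is hard to follow, the sketch below gives only the simplified spirit of one decoding step; the plain accumulation rule, the scorer's inputs, and all names are illustrative assumptions, not the authors' precise formulation (their Eqns 1-3 are not reproduced here).

```python
import numpy as np

def attend_with_coverage(score_fn, s_prev, H, coverage):
    """One decoding step of coverage-aware attention: the running coverage of
    each source position is passed to the scorer, and the resulting attention
    weights are accumulated back into the coverage vector."""
    scores = np.array([score_fn(s_prev, h_j, c_j) for h_j, c_j in zip(H, coverage)])
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                  # attention weights over source words
    context = alpha @ H                   # context vector for the current target word
    coverage = coverage + alpha           # attention history per source word
    return context, alpha, coverage
```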
xnrA4qzmPu1m7RyVi38Z
CMA-ES for Hyperparameter Optimization of Deep Neural Networks
[ "Ilya Loshchilov", "Frank Hutter" ]
Hyperparameters of deep neural networks are often optimized by grid search, random search or Bayesian optimization. As an alternative, we propose to use the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), which is known for its state-of-the-art performance in derivative-free optimization. CMA-ES has some useful invariance properties and is friendly to parallel evaluations of solutions. We provide a toy usage example using CMA-ES to tune hyperparameters of a convolutional neural network for the MNIST dataset on 30 GPUs in parallel.
[ "hyperparameter optimization", "deep neural networks", "grid search", "random search", "bayesian optimization", "alternative", "performance", "optimization" ]
https://openreview.net/pdf?id=xnrA4qzmPu1m7RyVi38Z
https://openreview.net/forum?id=xnrA4qzmPu1m7RyVi38Z
ICLR.cc/2016/workshop
2016
{ "note_id": [ "ZYE6lW1Aki5Pk8ELfENW", "6XAk3KEykUrVp0EvsEBg", "MwVMBoZJzfqxwkg1t71j", "yovRVMVJMur682gwszPD", "gZWJBXDkPiAPowrRUAKL", "MwVMZzRBDSqxwkg1t71M", "vl62XZ46AH7OYLG5in8k", "p8jOo5YAKcnQVOGWfpk7", "NLokZ1m68u0VOPA8ixVA", "4Qyg1WpYnFBYD9yOFqjP" ], "note_type": [ "comment", "comment", "review", "comment", "comment", "review", "comment", "review", "comment", "comment" ], "note_created": [ 1458842164513, 1458164271298, 1457916673782, 1458164110273, 1458842730422, 1457880707142, 1458842074879, 1458251695755, 1458842754345, 1458163949599 ], "note_signatures": [ [ "~Ilya_Loshchilov1" ], [ "~Ilya_Loshchilov1" ], [ "ICLR.cc/2016/workshop/paper/126/reviewer/11" ], [ "~Ilya_Loshchilov1" ], [ "~Ilya_Loshchilov1" ], [ "ICLR.cc/2016/workshop/paper/126/reviewer/10" ], [ "~Ilya_Loshchilov1" ], [ "ICLR.cc/2016/workshop/paper/126/reviewer/12" ], [ "~Ilya_Loshchilov1" ], [ "~Ilya_Loshchilov1" ] ], "structured_content_str": [ "{\"title\": \"Dear Reviewer\", \"comment\": \"Dear Reviewer, just in case you missed our reply due to the lack of notifications in OpenReview, we addressed your questions and comments here: http://beta.openreview.net/forum?id=xnrA4qzmPu1m7RyVi38Z Thanks!\"}", "{\"title\": \"This is a reply to the reviews by the authors. Part 3/3.\", \"comment\": \"Reviewer 2:\\nThe suggestion of using priors over the search space within Bayesian optimization seems very sensible. Note that, Scalable Bayesian Optimization using Deep Networks does exactly this (using a prior mean function as a quadratic bowl centered in the middle of the space). That is in a way analogous to the setup for CMA-ES here (starting with a Gaussian spray of points centered in the middle of the space).\\n\\nThe initialization seems like a major possible source of bias. One might worry that the bounds are setup with the optimum near the center, which would favor the approach that starts with random points at the center. It would be useful to experimentally validate this by starting the Bayesian optimization approaches at the center as well.\", \"authors\": \"We agree that the role of priors is important (in fact, we emphasized that in our original text \\\"This might be because of a bias towards the middle of the range\\\"). We have also added results for TPE, with the same priors, and the priors certainly help. However, Figure 3 in the supplementary material clearly shows that the best solutions most of the time do not lie in the middle of the search range (see, e.g., $x_3, x_6, x_9, x_{12}, x_{13}, x_{18}$). \\n\\n\\nWe thank the reviewers for considering this very different approach for hyperparameter optimization. While we clearly do not believe it to be the answer to all problems, its strengths appear to nicely complement those of existing methods.\"}", "{\"title\": \"Potentially very useful algorithm proposed for hyperparameter tuning, although it has been proposed before. Promising results but requires more thorough experiments.\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"Summary:\\nThis paper investigates the use of the CMA-ES algorithm for embarrassingly parallel hyperparameter optimization of continuous (or integer) hyperparameters in deep learning models, specifically convolutional networks. The experiments show that this method can potentially outperform GP-based hyperparameter optimization, although more experiments are needed to draw any solid conclusions. As one example, I think it would be worth investigating how well random search does on this problem as a baseline. 
For the smaller 5-minute problem at least, the methods should be run multiple times to get error bars.\\n\\nAs far as I know CMA-ES searches locally, which could be a cause for concern if the function is multi-modal. I think that running CMA-ES with a few different initial distributions would be helpful to show whether it is robust to this effect.\", \"novelty\": \"The idea of applying CMA-ES for hyperparameter optimization is not necessarily novel (CMA-ES was used to tune speech recognition models in [1], for example), but the idea is simple enough and potentially practical enough that it is worth investigating for deep learning. A reference to [1] should be added.\", \"clarity\": \"The paper is well written overall, but there is very little information on the CMA-ES algorithm used in the experiments. I recommend adding an algorithm box outlining the CMA-ES approach used in the paper. There are also some non-standard hyperparameters, such as the batch selection, that should be briefly explained.\\n\\nI\\u2019m not sure if the claim that there is no way to truly parallelize SMBO is true. For example, there is the q-EI acquisition function in [2].\\n\\nFrom the experiments, there is a table of transformations that were applied to the hyperparameters. Were these transformations used for all of the methods? Hopefully yes since that would otherwise have a drastic effect on the results.\\n\\nIt would also be really helpful to see what the best hyperparameters are, particularly if they are near the center of the search space.\", \"significance\": \"The nice thing about this paper is that it could result in a very simple and practical methodology. At the moment there are still several open questions, but if the conclusions hold up to more intense scrutiny then it could be very significant.\", \"quality\": \"This paper is a straightforward application of a simple algorithm to a difficult problem and is of sufficient quality for a workshop paper.\", \"pros\": [\"Simple application of a well known algorithm to a practical problem\", \"Results show a lot of promise and merit further investigation\"], \"cons\": [\"Needs more experiments before any conclusions can be drawn\", \"The paper is light on details of the design choices in the experiments\", \"CMA-ES should be more thoroughly described\"], \"references\": \"[1] Watanabe, S. and Le Roux, J. Black box optimization for automatic speech recognition. MITSUBISHI ELECTRIC RESEARCH LABORATORIES TR2014-021, May 2014\\n\\n[2] M. Schonlau. Computer Experiments and global optimization. PhD Thesis. University of Waterloo, 1997\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"This is a reply to the reviews by the authors. Part 2/3.\", \"comment\": \"Reviewer 1: It would also be really helpful to see what the best hyperparameters are, particularly if they are near the center of the search space.\", \"authors\": \"We found that divergence of the network is very rare, so no special measures were taken. The detailed distribution of evaluation qualities is given in Figure 4 of the supplementary material.\", \"reviewer_2\": \"In particular, the comparison is likely conflated by discontinuities in the optimization surface. It seems reasonable to compare to approaches that take this into account and for which implementations are provided in the same package as the authors ran (e.g. 
PESC).\\nOne concern is discontinuities in the objective function, which could be caused by having the neural net being trained diverge. Looking at the hyperparameter bounds, it seems reasonable to expect this to happen (e.g. high momentum and high learning rate). Various papers (Gelbart et al., Gardner et al., PESC, Snoek, Gramacy) developed constraints to deal with this issue. Did the model diverge during training and if so, did you consider using the constrained alternatives?\"}", "{\"title\": \"Response 1/2\", \"comment\": \"Thanks for your comments and suggestions!\\nIn the remaining responses, we'll refer to an updated version of the paper available at\", \"https\": \"//sites.google.com/site/cmaesfordnn/iclr2016___hyperparameters.pdf?attredirects=0&d=1 (anonymously for the visitors)\", \"reviewer\": \"Apart from a more comprehensive experiment coverage, all existing experiments require multiple evaluations and corresponding error bars.\", \"authors\": \"We agree that it is useful to run algorithms several times to get error bars. We\\u2019ll do that in the future. For now, we used our small computational budget to also study multiple problems, running CMA-ES on multiple different problems (see Figure 1 top) to quantify its variation across problems as well. One can see that the results for Adam and Adadelta are quite similar when the same budgets are used. This is also the case for the runs with different time budgets.\"}", "{\"title\": \"Comparing CMA-ES to Bayesian Optimization is a really great idea, but this needs more careful empirical work to be valuable to the community.\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"This paper explores the use of an algorithm from the evolutionary optimization literature as an alternative approach to Bayesian optimization for hyperparameters. In particular, the authors propose the use of CMA-ES for the parallelized hyperparameter optimization of a deep neural network. On one problem, they demonstrate that CMA-ES appears to reach better validation performance than a popular Bayesian optimization method.\\n\\nThis is a well written paper that is easy to follow and offers an interesting datapoint for Bayesian optimization and hyperparameter optimization researchers. One concern, however, is that the empirical evaluation is too light. The authors run a single optimization on just one problem and the experimental setup may have some issues. In particular, the comparison is likely conflated by discontinuities in the optimization surface. It seems reasonable to compare to approaches that take this into account and for which implementations are provided in the same package as the authors ran (e.g. PESC). Also, the reported results on the CIFAR-10 validation set seem too good to be true, which makes one worry about the experimental setup.\\n\\nIn Figure 1, it looks like the GP-based approaches (EI and PES) experience major model fitting issues. This would be suggested by the observation that they don't seem to improve at all after the first few function evaluations. One concern is discontinuities in the objective function, which could be caused by having the neural net being trained diverge. Looking at the hyperparameter bounds, it seems reasonable to expect this to happen (e.g. high momentum and high learning rate). Various papers (Gelbart et al., Gardner et al., PESC, Snoek, Gramacy) developed constraints to deal with this issue. 
Did the model diverge during training and if so, did you consider using the constrained alternatives?\\n\\nThe CMA curve never seems to sample close to the optimum (i.e. the best values are always extreme outliers). That seems strange. Has it just not converged to the optimum?\\n\\nValidation errors below 0.3% sounds extremely low for CIFAR-10. Typical values currently reported (i.e. state-of-the-art) are around 6% to 8% depending on the type of data augmentation performed.\\n\\nThe suggestion of using priors over the search space within Bayesian optimization seems very sensible. Note that, Scalable Bayesian Optimization using Deep Networks does exactly this (using a prior mean function as a quadratic bowl centered in the middle of the space). That is in a way analagous to the setup for CMA-ES here (starting with a Gaussian spray of points centered in the middle of the space).\\n\\nThe initialization seems like a major possible source of bias. One might worry that the bounds are setup with the optimum near the center, which would favor the approach that starts with random points at the center. It would be useful to experimentally validate this by starting the Bayesian optimization approaches at the center as well.\\n\\nWow, the bounds for selection pressure seem very broad... Does this hyperparameter really vary the objective in an interesting way over a range of 100 orders of magnitude? One might imagine that this could really confound model-based optimization approaches, unless the objective varies smoothly accross this space.\\n\\nIn the introduction, I don't think 'perfect parallelization' seems like a fair statement at all. Random search and grid search offer 'perfect parallelization' but that doesn't not imply that these are better approaches. I highly doubt that CMA-ES uses the parallel experiments more efficiently than other approaches. In fact, one might view (as I do) that the need to distribute a random sample of points in CMA-ES is a major disadvantage. It *has* to parallelize, which seems terribly inefficient.\\n\\nOverall, the idea of comparing to CMA-ES seems like a really great idea, since it is the champion algorithm from the evolutionary optimization field. I think this is a good start, but I am concerned as an empirical study it needs more rigor before it should be accepted. Perhaps the authors can address the above concerns in their next manuscript.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Dear Reviewer\", \"comment\": \"Dear Reviewer, just in case you missed our reply due to the lack of notifications in OpenReview, we addressed your questions and comments here: http://beta.openreview.net/forum?id=xnrA4qzmPu1m7RyVi38Z Thanks!\"}", "{\"title\": \"An very interesting idea worth exploring further. However, additional experiments are required to evaluate its empirical performance.\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"This paper proposes using CMA-ES for hyperparameter optimization. An advantage of employing this model is its clear parallelism strategy, which is difficult to achieve in existing approaches. This is an interesting direction of research, as it introduces a new alternative to popular hyperparameter tuning techniques. I am not familiar with CMA-ES, so cannot comment on the novelty of this idea. 
However, additional experiments are required to validate its empirical success \\u2014 especially once this contribution evolves into a conference track submission.\\n\\nOne aspect that is not clear to me is the tradeoff between \\\"perfect parallelism\\\" and observation efficiency. That is, random search also features perfect parallelism, but past observations don't meaningfully inform future evaluations. \\n\\nCMA-ES is claimed to perform well for larger function budgets, but this seems to be in contrast to the usual (and necessary) assumption of expensive function evaluations. The experiments presented report results for evaluation times of 5-30 minutes, but this is one to two orders of magnitude less than realistic neural network training times.\\n\\nApart from a more comprehensive experiment coverage, all existing experiments require multiple evaluations and corresponding error bars. For example, the experiment on the bottom right seems misleading. The first evaluation by CMA-ES reports a lower error than other approaches are able to ever attain, or attain just prior to convergence. However, this evaluation was done completely at random, so it is not indicative of the performance of this method in general, just of the initialization strategy.\\n\\nIn addition, while we expect CMA-ES to perform well for a large number of observations, GP-based approaches cannot scale to this regime. As such, this approach must be compared to an appropriate baseline (for example, Snoek 2015).\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Response 2/2\", \"comment\": \"Reviewer: For example, the experiment on the bottom right seems misleading. The first evaluation by CMA-ES reports a lower error than other approaches are able to ever attain, or attain just prior to convergence. However, this evaluation was done completely at random, so it is not indicative of the performance of this method in general, just of the initialization strategy.\", \"authors\": \"The updated version of the paper also involves two variants of TPE and SMAC. We agree that the work of Snoek 2015 (please note that we mention it in our paper) should also be considered. However, our workshop paper does not attempt to make a final conclusion but rather introduces a new tool to the field.\", \"reviewer\": \"In addition, while we expect CMA-ES to perform well for a large number of observations, GP-based approaches cannot scale to this regime. As such, this approach must be compared to an appropriate baseline (for example, Snoek 2015).\"}", "{\"title\": \"This is a reply to the reviews by the authors. Part 1/3.\", \"comment\": \"This is a reply to the reviews by the authors.\\nThanks for the reviews! We first point out 2 misunderstandings and then reply to the reviewers\\u2019 questions. \\n\\nReviewer 2 got confused, saying that the reported results of 0.3% on the CIFAR-10 validation set seem too good to be true (state of the art is 6% to 8%), which makes one worry about the experimental setup. \\nThat would be true, except that we never mentioned CIFAR-10 in the paper; everything is on MNIST, for which 0.3% is exactly the performance we should be getting.\\nReviewer 2 said CMA-ES \\u201chas to parallelize\\u201d, which is not true: of course, one can do function evaluations sequentially. 
Indeed, the bottom left figure is for the sequential setting.\\n\\nIn the remaining responses, we\\u2019ll refer to an updated version of the paper available at\", \"https\": \"//sites.google.com/site/cmaesfordnn/iclr2016___hyperparameters.pdf?attredirects=0&d=1 (anonymously for the visitors).\", \"the_webpage_with_this_pdf_is_https\": \"//sites.google.com/site/cmaesfordnn/\\n\\nResponse to Reviewer 1\", \"reviewer_1\": \"From the experiments, there is a table of transformations that were applied to the hyperparameters. Were these transformations used for all of the methods? Hopefully yes since that would otherwise have a drastic effect on the results.\", \"authors\": \"Yes, of course. All algorithms were provided with the same information and all search in [0,1]^19. (We agree that anything else would lead to completely misleading results.)\"}" ] }
0YrnoNZ7PTGJ7gK5tNYY
VARIATIONAL STOCHASTIC GRADIENT DESCENT
[ "Michael Tetelman" ]
In the Bayesian approach to probabilistic modeling of data, we select a model for the probabilities of the data that depends on a continuous vector of parameters. For a given data set, Bayes' theorem gives a probability distribution over the model parameters. Inference of outcomes and probabilities for new data can then be obtained by averaging over the parameter distribution of the model, which is an intractable problem. In this paper we propose to use Variational Bayes (VB) to estimate a Gaussian posterior over model parameters for a given Gaussian prior, with Bayesian updates in a form that resembles SGD rules. It is shown that, with incremental updates of the posteriors for a selected sequence of data points and a given number of iterations, the variational approximations are defined by a trajectory in the space of Gaussian parameters. This trajectory depends on a starting point defined by the priors of the parameter distribution, which are the true hyper-parameters. The same priors provide a weight decay or L2 regularization for the training. A selection of L2 regularization parameters and a number of iterations therefore completely defines a learning rule for VB SGD optimization, unlike other methods with momentum (Duchi et al., 2011; Kingma & Ba, 2014; Zeiler, 2012) that need learning rates, regularization rates, etc., to be selected separately. We consider the application of VB SGD to the important practical case of fast training of neural networks on very large data. While the speedup is achieved by partitioning the data and training in parallel, the resulting set of solutions obtained with VB SGD forms a Gaussian mixture. By applying VB SGD optimization to the Gaussian mixture we can merge multiple neural networks of the same dimensions into a new single neural network that has almost the same performance as the original Gaussian mixture.
[ "data", "model", "probabilities", "model parameters", "parameter distribution", "number", "iterations", "priors", "training", "vb sgd optimization" ]
https://openreview.net/pdf?id=0YrnoNZ7PTGJ7gK5tNYY
https://openreview.net/forum?id=0YrnoNZ7PTGJ7gK5tNYY
ICLR.cc/2016/workshop
2016
{ "note_id": [ "OM0mBBk7Gcp57ZJjtNPw", "lx9ZgxVyvt2OVPy8Cvq7", "oVg3PAMLzcrlgPMRsB6n" ], "note_type": [ "review", "review", "official_review" ], "note_created": [ 1457640787797, 1457616172597, 1457609226600 ], "note_signatures": [ [ "ICLR.cc/2016/workshop/paper/23/reviewer/10" ], [ "~Tapani_Raiko1" ], [ "~Jose_Miguel_Hernandez_Lobato1" ] ], "structured_content_str": [ "{\"title\": \"Below the bar\", \"rating\": \"3: Clear rejection\", \"review\": \"This paper proposes a method for online updates of a variational approximation of the posterior over neural network weights. No experimental evaluation is provided. The presentation is intelligible, but far from clear.\\n\\nThe idea of using a recursive variational Bayes approximation for streaming data was proposed in Broderick et al.'s SDA-Bayes paper (http://papers.nips.cc/paper/4980-streaming-variational-bayes). But as another reviewer noted, online variational inference has been around since at least Sato's 2001 paper on online model selection with variational Bayes, and in a sense since the 1998 Neal and Hinton paper on incremental EM.\\n\\nThere have been plenty of papers about variational inference for neural networks, for example, Graves's Practical Inference for Neural Networks (2011) or Hinton's original 1993 variational inference/MDL paper (http://dl.acm.org/citation.cfm?id=168306).\\n\\nThe idea of using the variational distribution's variance to control step size is interesting. It's sort of related to recent papers that use trust regions/prox algorithms to optimize variational approximations (Theis&Hoffman, 2015; Khan et al., 2015).\\n\\nHowever, that doesn't mean it will work. With no experimental validation, it's impossible to say whether this is anything more than a cute idea.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Not mature enough even for a workshop presentation\", \"rating\": \"3: Clear rejection\", \"review\": \"Manuscript describes variational Bayesian (VB) treatment of weights in neural networks and online learning for them.\\n\\nSimilar ideas have been studied recently, for instance in\", \"http\": \"//jmlr.org/proceedings/papers/v37/blundell15.pdf\\nbut relationship to existing work is not presented clearly. Instead, using VB for network weights is presented as something novel.\\n\\nThere is no clear theoretical contribution or any experiments.\", \"there_is_one_crucial_error_as_well\": \"Bottom of page 3 writes that \\\"...distribution of the whole ensemble is a mix of...\\\" whereas the equation on the top of page 3 is a product rather than a mixture.\\n\\nThis paper might be of interest to the author, covering similar ideas:\", \"https\": \"//www.hiit.fi/u/ahonkela/papers/ica2003.pdf\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Review for VARIATIONAL STOCHASTIC GRADIENT DESCENT\", \"rating\": \"3: Clear rejection\", \"review\": \"The authors propose an approach for the on-line maximization of the variational lower bound. The new method is based on iterating over the data and solving individual optimization problems between the current posterior approximation and the product of that posterior approximation and the likelihood function for the current data point. 
The advantages of the proposed approach with respect to other variational techniques are that it does not require the use of learning rates or the computation of complicated expectations with respect to the variational approximation.\", \"quality\": \"The proposed approach is not validated in any form of experiments. It is not clear how well it is going to work since variational Bayes is known to under-estimate variance, and its application in an on-line manner could exacerbate this problem because of consecutive under-estimation of variances at each iteration. Another problem is that there is no guarantee that the proposed approach is going to converge to any local minimizer of the original variational bound. In fact, by looking at equation 5, the update for the variance produces increasingly small variances. This means that the proposed approach would converge to a point mass at the mean of the posterior approximation q.\\n\\nThe mixture of Gaussians in Section 3 does not seem to be the correct approach. The correct approach would be to compute the product of all these Gaussians to obtain a final Gaussian approximation (accounting for the prior being repeated multiple times). The correct approach is given in\\n\\nExpectation propagation as a way of life\\nAndrew Gelman, Aki Vehtari, Pasi Jyl\\u00e4nki, Christian Robert, Nicolas Chopin, John P. Cunningham\", \"http\": \"//arxiv.org/abs/1412.4869\", \"clarity\": \"The work needs to be improved for clarity. It is not clear how equation 4 is obtained. The equation above equation 4 seems to come from performing a Laplace approximation. The authors should clarify this possible connection with the Laplace approximation.\", \"originality\": \"The approach proposed seems to be original, to my knowledge.\", \"significance\": \"It is not clear how significant the proposed method is, since one can use stochastic optimization to optimize the variational lower bound. The approach for training neural networks fast by splitting the data seems to be wrong.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
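The abstract in the record above describes Gaussian posterior updates that resemble SGD rules, and one of the reviews notes that the per-step variance keeps shrinking. A minimal sketch of that general idea is given below, assuming a Laplace-style quadratic approximation of each data term; the exact update form, the diagonal curvature `hess_diag`, and the toy quadratic loss are illustrative assumptions, not the paper's equations.

```python
def vb_sgd_step(mu, var, grad, hess_diag):
    """One Gaussian-posterior update in which the posterior variance acts as a
    per-parameter step size: precision accumulates curvature, so the variance
    (and hence the effective learning rate) shrinks as more data are seen."""
    new_var = 1.0 / (1.0 / var + hess_diag)
    new_mu = mu - new_var * grad
    return new_mu, new_var

# Toy 1-D example: repeatedly observe the quadratic loss 0.5 * (w - 3)^2.
mu, var = 0.0, 1.0          # Gaussian prior N(0, 1) over the single weight
for _ in range(5):
    grad, hess = (mu - 3.0), 1.0
    mu, var = vb_sgd_step(mu, var, grad, hess)
    print(round(mu, 3), round(var, 3))
```

The mean drifts toward the loss minimum while the variance decreases monotonically, which is exactly the behavior the third review points out when it argues the approximation collapses toward a point mass.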
oVgo1Xo3KTrlgPMRsBVZ
Manifold traversal using density ridges
[ "Jonas Nordhaug Myhre", "Michael Kampffmeyer", "Robert Jenssen" ]
In this work we present two examples of how a manifold learning model can represent the complexity of shape variation in images. Manifold learning techniques for image manifolds can be used to model data in sparse manifold regions. Additionally, they can be used as generative models, as they can often better represent or learn structure in the data. We propose a method of estimating the underlying manifold using the ridges of a kernel density estimate, as well as tangent space operations that allow interpolation between images along the manifold and offer a novel approach to analyzing the image manifold.
[ "manifold", "density ridges", "images", "data", "manifold traversal", "traversal", "work", "present", "examples", "model" ]
https://openreview.net/pdf?id=oVgo1Xo3KTrlgPMRsBVZ
https://openreview.net/forum?id=oVgo1Xo3KTrlgPMRsBVZ
ICLR.cc/2016/workshop
2016
{ "note_id": [ "91ExXVyyBtkRlNvXUVZ1", "r8l3KljEJc8wknpYt543" ], "note_type": [ "review", "review" ], "note_created": [ 1458248580292, 1456695880726 ], "note_signatures": [ [ "ICLR.cc/2016/workshop/paper/192/reviewer/12" ], [ "ICLR.cc/2016/workshop/paper/192/reviewer/11" ] ], "structured_content_str": [ "{\"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"The paper proposes to perform image synthesis or reconstruction with help of a manifold that capture the image shape variations. A manifold is estimated from the data, and then synthesis is performed. Manifold models are reasonable in some image reconstruction problems, and often provide elegant solutions.\\n\\nThe ideas and results in this short paper are correct. However, the paper does not present any novelty unfortunately: the problem, the framework are not new. And the tools used for manifold learning, or for image reconstruction are classical too. \\n\\nDue to the very limited novelty, this paper is unfortunately below the threshold of acceptance for ICLR.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Review: Manifold Traversal using Density Ridges\", \"rating\": \"3: Clear rejection\", \"review\": \"This is a short paper that studies density ridges in manifolds using the framework proposed by Ozertem and Erdogmus: it preprocesses the data using PCA, identifies density ridges by following the principal eigenvector of the local manifold Hessian, and projects the data onto the ridge estimates using an approach proposed in prior work by Dollar et al. Embeddings and interpolation results are shown on the MNIST and Frey faces dataset.\\n\\nAlthough I may have misunderstood parts of the proposed approach due to the brevity of the submission (even in the short workshop format, I believe it possible to provide a bit more details), the novelty of the paper appears limited: it is a straightforward combination of prior work by Ozertem and Erdogmus and by Dollar et al. The paper presents no comparisons with prior work (neither experimental nor conceptual), which makes it difficult to gauge the contribution of the paper. In particular, it remains unclear what the goal of this line of work is. Is it to learn better feature representations from data? In that case, the study should present experiments aimed at evaluating the quality of the learned representation; the visualizations in Figure 1a, 2a, and 3a do not achieve this goal (in particular, since it is known that non-parametric techniques such as t-SNE can produce scatter-plot visualizations of much higher quality than PCA). Or is it to learn better models for image generation / interpolation? In that case, the study should develop methods to evaluate the quality of generated images, and perform comparisons with techniques that try to achieve the same (GPLVMs, mixtures of Bernoulli models, fields of experts, generative-adversarial networks, etc.).\\n\\nOverall, I believe this paper is of insufficient novelty and quality to be accepted at ICLR.\", \"minor_comment\": \"\\\"As long as the embedding space is of higher dimension than the manifold a linear method causes no harm.\\\" -> If by the dimensionality of the manifold the authors mean its intrinsic dimensionality, then this statement is incorrect. For instance, consider a manifold that is one-dimensional space-filling curve living in a 10-dimensional space. 
The dimensionality of the manifold is one, but a linear method needs to preserve all 10 dimensions in the data to prevent distant parts of the manifold from collapsing.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
p8jp5lzPWSnQVOGWfpDD
On-the-fly Network Pruning for Object Detection
[ "Marc Masana", "Joost van de Weijer", "Andrew D. Bagdanov" ]
Object detection with deep neural networks is often performed by passing a few thousand candidate bounding boxes through a deep neural network for each image. These bounding boxes are highly correlated since they originate from the same image. In this paper we investigate how to exploit feature occurrence at the image scale to prune the neural network which is subsequently applied to all bounding boxes. We show that removing units which have near-zero activation in the image allows us to significantly reduce the number of parameters in the network. Results on the PASCAL 2007 Object Detection Challenge demonstrate that up to 40% of units in some fully-connected layers can be entirely eliminated with little change in the detection result.
[ "network", "image", "units", "object detection", "deep neural networks", "candidate bounding boxes", "deep neural network", "boxes", "feature occurrence" ]
https://openreview.net/pdf?id=p8jp5lzPWSnQVOGWfpDD
https://openreview.net/forum?id=p8jp5lzPWSnQVOGWfpDD
ICLR.cc/2016/workshop
2016
{ "note_id": [ "vlpGAEnXZi7OYLG5inzJ", "L7VQ0qN3niRNGwArs4go", "ZY9jnY4GWu5Pk8ELfEKQ", "k80JOkDAKsOYKX7ji4NV" ], "note_type": [ "review", "review", "official_review", "comment" ], "note_created": [ 1457161668257, 1457746036006, 1456956474346, 1458232806512 ], "note_signatures": [ [ "ICLR.cc/2016/workshop/paper/132/reviewer/10" ], [ "~Jeff_Donahue1" ], [ "~Christian_Szegedy1" ], [ "~Marc_Masana_Castrillo1" ] ], "structured_content_str": [ "{\"title\": \"Review_10: On-the-fly Network Pruning for Object Detection\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"The paper presents methods to reduce the number of parameters of network for proposal based object detector (e.g., R-CNN), which can potentially accelerate the inference. The proposed method prune the network based on the network activation of each image, and then a smaller network can be applied to all different object proposals in an image. It is based on the assumption that network units with zero activation on the whole image cannot have nonzero activation on any object proposal in the image. Backward and forward pruning methods are proposed to prune the unit with zero or near zero activation. Experiments are done on the PASCAL 2007 to show that the pruning does not degrade the performance significantly.\", \"pros\": [\"Proposed methods are simple and well described.\"], \"cons\": [\"The key assumption does not have theoretical proof, or experimental support.\", \"There is no baseline comparisons in the experiments. It is not clear if a random pruning will be as effective as the proposed methods.\", \"The proposed methods are designed for detectors that evaluates each proposals independently, but it is based on the fast R-CNN, which obsoletes this routine (see the RoI pooling layer). The computation of all the convolutional layers are shared in fast R-CNN. This makes it less interesting to apply the proposed methods on the convolutional layers.\", \"Actually, only the experiments on the full connected layers are shown.\"], \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Review of On-the-fly Network Pruning for Object Detection\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"The paper presents two methods of reducing the number of parameters in a ReLU-based convnet based on pruning weights that result in a high proportion of inactive (activation 0) units.\", \"pros\": \"-The method is simple, well-motivated, and well-described\\n-Computation is reduced significantly while sacrificing little to no accuracy\\n-Method is applicable to any convnet with relu activations, and could be trivially generalized from fully-connected to convolutional layers\", \"cons\": \"The experiments are somewhat limited in that the pruning trick is evaluated on just two layers of one network for one problem, and the more recent detection approaches (Fast(er) R-CNN) do not have the same degree of issues with evaluating many proposals that R-CNN did, due to the ROI Pooling layer (first proposed in SPP)\\n\\nThough the evaluation is limited and addresses a problem that isn't as big of an issue now as it once was, the method is general enough to be worth readers' time for the short paper. 
Furthermore, my expectations for evaluation aren't as high for a workshop paper as in other venues, so I don't see the evaluation as being too much of a drawback for this work\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Review of On-the-fly Network Pruning for Object Detection\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"This extended abstract proposes a pruning methodology for the weights of an already trained deep neural network for object detection. This particular method applies to the R-CNN style object detection approach, where the same network is applied to a lot of proposals. The paper hypothesizes that if the post-classifier network yields zero values for some activations on the whole image, then the same unit will never give non-zero values when the network is applied on any of the proposals. This suggests a recursive algorithm to prune the network weight matrices based on the activations of the network on the whole image. The paper presents two pruning strategies: the first one guarantees equivalent activation output on the whole image, while the second one is an approximate version that might change the output of the network. The various pruning methodologies are then evaluated on the VOC detection benchmark and demonstrated to be able to prune up to 60% of the weight matrices without affecting the overall quality of the output on the proposals significantly.\", \"the_positive\": [\"The idea is sound and is relatively easy to implement for the R-CNN setup.\"], \"the_negative\": [\"The idea is based on an assumption that is not justified theoretically. The practical evidence for the activation is not presented in the abstract, but assumed silently.\", \"The traditional R-CNN method already performs poorly on small objects. The expected failure mode of this method is also on small objects, so the comparison graphs do not have the potential to measure this failure mode easily.\", \"The traditional R-CNN method of applying the post-classifier in separation has been obsoleted by applying SPP in the Faster R-CNN setup. The gains theoretically achievable by this algorithm are not very relevant in the big picture since SPP pools features from the globally applied network activations anyway.\", \"The idea is very specific to a special type of (already obsolete) detection procedure and is not likely to generalize to settings other than this.\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Response to reviews\", \"comment\": \"Thanks everyone for your reviews.\\n\\nThe reviewers are right to point out that much of the computation is shared in the recent Faster-RCNN proposal (our detector follows exactly this Fast RCNN architecture). However, the fully connected layers (fc6, fc7 and fc8) must still be evaluated for all bounding box proposals, and these are the layers (fc6 and fc8) for which we show results. Even if their computational load in the original network is less than that of the convolutional layers, the fact that their evaluation must be repeated for each bounding box makes reduction of their computation very relevant. For small problems, fc8 reduction is irrelevant, but it becomes relevant for problems with very many classes.\\n\\nAlso in more modern architectures like Deep Residual Learning (He et al.
arXiv 2015) where the fully connected layers are replaced by convolutional layers, the idea of our proposal could be applied. In the Deep Residual Network paper the first 91 convolutional layers are shared, but for every bounding box proposal the 9 remaining fully convolutional layers (conv5-) are computed. On these layers a pruning technique similar to the one we propose could be applied to prune filters resulting in feature maps with insignificant response (near zero).\"}" ] }
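The pruning scheme debated in the record above (drop fully-connected units whose ReLU activation on the whole-image feature is zero or near zero, then evaluate the slimmed layers on every bounding-box proposal) can be sketched in a few lines of NumPy. This is not the authors' implementation: the toy layer sizes, the single-layer scope, and the simple threshold rule are assumptions made for illustration.

```python
import numpy as np

def prune_fc_layer(W1, b1, W2, image_feature, threshold=0.0):
    """Drop units of a fully-connected layer whose ReLU activation on the
    whole-image feature is <= threshold, and shrink the next layer to match."""
    h = np.maximum(0.0, image_feature @ W1 + b1)  # whole-image activations
    keep = h > threshold                          # units that may fire on proposals
    return W1[:, keep], b1[keep], W2[keep, :], keep

# Toy example with small layer sizes (a real fc6/fc7 would be 4096-d).
rng = np.random.default_rng(0)
W1 = rng.standard_normal((512, 512)) * 0.05
b1 = np.zeros(512)
W2 = rng.standard_normal((512, 512)) * 0.05
image_feature = rng.standard_normal(512)

W1_p, b1_p, W2_p, keep = prune_fc_layer(W1, b1, W2, image_feature)
print("kept", int(keep.sum()), "of", keep.size, "units")
```

The pruned matrices are then reused unchanged for every proposal from the same image, which is where the claimed savings come from.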
r8lrv9B0zu8wknpYt57Y
Data Cleaning by Deep Dictionary Learning
[ "Zhongqi Lu", "Qiang Yang" ]
The soundness of training data is important to the performance of a learning model. However, in recommender systems the training data are usually noisy, because of the random nature of users' behaviors and the sparseness of the users' feedback towards the recommendations. In this work, we propose a noise elimination model to preprocess the training data in recommender systems. We define the noise as the abnormal patterns in the users' feedback. The proposed deep dictionary learning model tries to find the common patterns through dictionary learning. We define a dictionary through the output layer of a stacked autoencoder, so that the dictionary is represented by a deep structure and the noise in the dictionary is further filtered out.
[ "training data", "users", "dictionary", "deep dictionary", "recommender systems", "feedback", "noise", "data cleaning", "soundness" ]
https://openreview.net/pdf?id=r8lrv9B0zu8wknpYt57Y
https://openreview.net/forum?id=r8lrv9B0zu8wknpYt57Y
ICLR.cc/2016/workshop
2016
{ "note_id": [ "E8VzjZ9o9S31v0m2iDp3", "mO9D7v9wwij1gPZ3UlB6", "5QzB3xwlnHZgXpo7i323" ], "note_type": [ "review", "review", "review" ], "note_created": [ 1457485384304, 1457405180443, 1457576129549 ], "note_signatures": [ [ "ICLR.cc/2016/workshop/paper/85/reviewer/12" ], [ "ICLR.cc/2016/workshop/paper/85/reviewer/10" ], [ "ICLR.cc/2016/workshop/paper/85/reviewer/11" ] ], "structured_content_str": [ "{\"title\": \"Applying existing dictionary learning method to a new problem\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"The paper proposed to use dictionary learning on recommender systems.\\nThe approach is based on an existing approach on alternating minimization. \\n\\nSince there is no experiment results demonstrating the effectiveness of the approach, it is hard to evaluate the effectiveness of the proposed algorithm.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"This paper proposed an denoising auto-encoder like dictionary learning method for denoising recommender system data. Application is new, but no experiment results are given.\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"Summary:\\nThis paper proposed an denoising auto-encoder like dictionary learning method for denoising recommender system data.\", \"novelty\": \"Instead of using a feed forward network like in autoencoder, the proposed method uses dictionary learning based on optimization. Similar optimization techniques have been used for representation learning, but not for data denoising.\", \"concern\": \"No quantitative results and comparison.\\nGiven the similarity, it is interesting to see how well denoising autoencoder does for this task.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"The authors proposed a deep dictionary learning model applied on recommendation probelm.\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"The mainstream of recommendation algorithms tried to find an implicit and linear representation for the ratings. This work suggest that a deep, non-linear representation could handle the noise properly.\", \"cons\": \"There are few discussions of intuitions of the reason of introducing neural networks and no any empirical experiments.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
BNYAGZZj5S7PwR1riXzA
Deep Directed Generative Models with Energy-Based Probability Estimation
[ "Taesup Kim", "Yoshua Bengio" ]
Energy-based probabilistic models are confronted with intractable computations during learning, which requires appropriate samples drawn from the estimated probability distribution. This can be approximately achieved by a Markov chain Monte Carlo sampling process, but such a process still has mixing problems, especially with deep models, that slow the learning. We introduce an auxiliary deep model that deterministically generates samples based on the estimated distribution, and this makes the learning easier without any high-cost sampling process. As a result, we propose a new framework to train energy-based probabilistic models with two separate deep feed-forward models. One is used only to estimate the energy function, and the other deterministically generates samples based on it. Consequently, we can estimate the probability distribution and its corresponding deterministic generator with deep models.
[ "generative models", "probability estimation", "probabilistic models", "learning", "probability distribution", "process", "deep models", "samples", "intractable computations", "appropriate samples" ]
https://openreview.net/pdf?id=BNYAGZZj5S7PwR1riXzA
https://openreview.net/forum?id=BNYAGZZj5S7PwR1riXzA
ICLR.cc/2016/workshop
2016
{ "note_id": [ "zvw1LYZyAfM8kw3Zinwy", "4QyAOVzxDuBYD9yOFqVD", "yovqvkRyYSr682gwszRX" ], "note_type": [ "review", "review", "review" ], "note_created": [ 1457830694779, 1457646897762, 1457620900200 ], "note_signatures": [ [ "ICLR.cc/2016/workshop/paper/102/reviewer/10" ], [ "ICLR.cc/2016/workshop/paper/102/reviewer/11" ], [ "ICLR.cc/2016/workshop/paper/102/reviewer/12" ] ], "structured_content_str": [ "{\"title\": \"This paper proposes interesting idea and tries to address a very hard problem for learning undirected graphical models in general, there are some issues with the technical aspect of the paper that needs to be addressed.\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"The paper proposes interesting idea and tries to address a very hard problem common to all energy based or undirected graphical models.\", \"there_are_two_technical_issues_to_be_clarified\": \"For Equation 4, wouldn't you need to multiply the det of the jacobian of x=G(z) : |det dz/dx| to be multiplied for every sample z that is drawn? How much more complexity would you have to incur if you had to backprop through this network every time you wanted calculate negative phase expectations?\\n\\nAnother issue is that if you are going to train the second network (similar in spirit to like a nonlinear sigmoid belief network) on the samples of the original energy model, why not just keep those samples around like the methods of persistent contrastive divergence or fast persistent contrastive divergence? If you keep the samples, that would be sort of like learning a non-parametric model of the negatives phase samples.\\n\\nI think the bigger issues is that the idea is to try to use a directed uniform to multimodal generative model to model the samples from the partition function of the energy model. The idea is that if the directed model can learn a good multimodal representation, then the negative phase would be easy to \\\"mix\\\". However, if you could learn a good directed model, why not just learn that on the original training data instead!?\\n\\nPerhaps to improve the motivation for the model, one can argue that the product of experts being learned is more interesting or more important than a simple directed uniform-multimodal generative model?!\\n\\nFor related works, there are long history of approximate inference models for addressing the negative phase sampling by learning the posterior, e.g. Wake-Sleep algorithm, and Efficient learning of DBMs. The authors could reference them and draw comparisons to those prior works.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"The paper proposes a new way of training energy-based probabilistic models using two separate deep networks. The first one estimates the energy function (deep energy model), while the goal of the second one (deep directed generative model) is to generate samples that would approximate samples from the deep energy model.\\n\\nThe authors are using an adversarial setting of Goodfellow et.al. 
to train both models: parameters of deep energy model are trained to discriminate between the real and generated data, whereas parameters of the generative model are trained to align the probability distribution p(x) between the deep energy model and the directed generative model.\\n\\nOne key assumption of the algorithm is that the distributions of the deep energy model and the directed generative model need to be approximately aligned during training. Simple 2-d simulation results show that this is indeed the case but it is not clear at all whether this would hold when modeling complex high-dimensional distributions: it is essentially saying that\\na simple deep directed generative model can accurately approximate the distribution of the complex deep energy-based model.\\n\\nI would also encourage the authors to clean up writing/English, as well as weed out various typos in equations. For example, the authors are sometimes using \\\\theta and sometimes using \\\\Theta to mean the same thing.\", \"there_is_also_a_typo_in_eq_2\": \"sum_i E_{\\\\theta_I}.\\n\\nI would also spend more time discussing the actual training of the model (Eq. 4), while reducing justification of why training deep generative models is hard (text around Eq. 1).\\n\\nFinally, it was not clear to me what is the final output of the training procedure. Do we throw away deep energy model and stick with deep directed generative model, or the other way around? Under the assumption that both models are approximately aligned, then they are essentially modelling the same distribution.\\n\\nI would also encourage the authors to compare to contrastive backprop for training their deep energy model. I suspect it might work much better compared to what the authors are proposing.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Deep Directed Generative Models With Energy-Based Probability Estimation\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"This paper aims to address the problem of learning energy-based density models that have intractable partition functions. A core idea of the proposed approach is to use a *separate model* (Model B) to generate equilibrium samples from the energy-based model we wish to learn (Model A). If an oracle could provide a perfectly matched Model B for each given set of parameters for Model A, then learning via Monte Carlo estimates of the gradients would be straightforward.\\n\\nThis idea of having paired models is similar in spirit to well known previous approaches for learning generative models and density estimation (e.g. Helmholtz machines, Generative Adversarial Nets as noted), but it differs in several ways and there could be interesting ideas to explore in this direction. However, I have some concerns about the current proposal.\\n\\nIn particular, simply minimizing the energy of the generated samples (eqn 5) will not necessarily result in a generator (Model B) that follows and matches the equilibrium distribution of the target energy-based model (Model A). Indeed, the generator could obtain a (local) optimum by being degenerate -- with all the samples at the minima of one of the energy modes of Model A. It could also easily completely ignore some modes, or assign them incorrect probability mass, etc. 
(The toy examples shown do not seem to exhibit this pathology too badly, however it's not clear whether this is due to fortunate initialization and/or the very low dimensional nature of the problem.) In some respects this is analogous to the MCMC mixing failures that the proposed technique aims to circumvent.\\n\\nI think the general idea presented here may be worth exploring and expanding on further, but in the current form it doesn't seem ready for presentation at ICLR.\", \"as_additional_points\": \"the clarity of the writing could be substantially improved, and a more challenging but still tractable toy-task (e.g. MNIST) would help to understand whether the potential problems noted with this method arise in more practical situations. On the algorithm side, one can imagine several possible heuristics to test for and guard against the failure modes suggest above.\", \"pro_acceptance\": \"Interesting direction for ideas tackling a broad/significant problem.\", \"con_acceptance\": \"Insufficient evidence that proposed method works well on \\\"realistic\\\" problems, combined with a mathematically identifiable weakness whose presence is not discussed and has not been properly explored empirically.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
Qn8lE8x17fkB2l8pUYPk
GradNets: Dynamic Interpolation Between Neural Architectures
[ "Diogo Almeida", "Nate Sauder" ]
In machine learning, there is a fundamental trade-off between ease of optimization and expressive power. Neural Networks, in particular, have enormous expressive power and yet are notoriously challenging to train. The nature of that optimization challenge changes over the course of learning. Traditionally in deep learning, one makes a static trade-off between the needs of early and late optimization. In this paper, we investigate a novel framework, GradNets, for dynamically adapting architectures during training to get the benefits of both. For example, we can gradually transition from linear to non-linear networks, deterministic to stochastic computation, shallow to deep architectures, or even simple downsampling to fully differentiable attention mechanisms. Benefits include increased accuracy, easier convergence with more complex architectures, solutions to test-time execution of batch normalization, and the ability to train networks of up to 200 layers.
[ "gradnets", "dynamic interpolation", "architectures", "benefits", "networks", "neural architectures gradnets", "neural architectures", "machine learning", "fundamental", "ease" ]
https://openreview.net/pdf?id=Qn8lE8x17fkB2l8pUYPk
https://openreview.net/forum?id=Qn8lE8x17fkB2l8pUYPk
ICLR.cc/2016/workshop
2016
{ "note_id": [ "4QyA39PRPhBYD9yOFqWy", "ZY9l6rZ2Qh5Pk8ELfEXL" ], "note_type": [ "review", "review" ], "note_created": [ 1457616518142, 1458194214299 ], "note_signatures": [ [ "ICLR.cc/2016/workshop/paper/134/reviewer/12" ], [ "~Yangqing_Jia1" ] ], "structured_content_str": [ "{\"title\": \"Blending two architectures results in an unknown closed-form loss.\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"The authors propose to blend any two architectural components as the time of optimisation progresses. As the time progresses, the initial approach, e.g. employed rectifier, is gradually switched off in place of another rectifier. The authors claim that this strategy is good for a fast convergence and they present some experimental results.\", \"pros\": [\"recognition of the convergence problem e.g. with drop-out\", \"idea of evolving objective\"], \"cons\": [\"as the network switches between two approaches, it is unclear what is the closed form loss that the network optimizes\", \"not clear what are theoretical guarantees of such optimization or the landscape of the local minima\", \"the results indeed show some improvement, however, is this amount of improvement statistically significant and justified at a cost of even more obscure optimisation process?\", \"lack of clear timing analysis - as the authors propose an approach which supposedly helps fast convergence, why not provide detailed plots of objective/accuracy vs. epochs?\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"An empirical way to dynamically change the network architecture via a weighted average\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"A common setting in deep networks is to design the network first, \\\"freeze\\\" the network architecture, and then train the parameters. The paper pointed out a potential dilemma of that, in the sense that complex networks may have better representation power but may be hard to train. To address this issue the paper proposed to train the network in a hybrid fashion where simpler components and more complex components are combined via a weight average, and the weight is updated over the training procedure to introduce the more complex components, while utilizing the fast training capability of simpler ones.\\n\\nThe paper is mainly presented in an empirical way, showing the performance improvement one can obtain from that. The theory is a bit lacking: for example, a proper decay schedule between the simple and complex components may be critical for convergence, and right now it is mostly setting by hand via hyperparameter \\\\tau. However, the paper does a proper claim of its contributions and does not exaggerate it.\\n\\nI think this would be an interesting empirical paper to be presented as a workshop publication.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
ROVmA279BsvnM0J1IpNn
Input-Convex Deep Networks
[ "Brandon Amos", "J. Zico Kolter" ]
This paper introduces a new class of neural networks that we refer to as input-convex neural networks, networks that are convex in their inputs (as opposed to their parameters). We discuss the nature and representational power of these networks, illustrate how the prediction (inference) problem can be solved via convex optimization, and discuss their application to structured prediction problems. We highlight a few simple examples of these networks applied to classification tasks, where we illustrate that the networks perform substantially better than any other approximator we are aware of that is convex in its inputs.
[ "networks", "deep networks", "neural networks", "inputs", "new class", "parameters", "nature", "representational power", "prediction", "inference" ]
https://openreview.net/pdf?id=ROVmA279BsvnM0J1IpNn
https://openreview.net/forum?id=ROVmA279BsvnM0J1IpNn
ICLR.cc/2016/workshop
2016
{ "note_id": [ "ZY9AEEoqYs5Pk8ELfEy0", "k80JNQvPPCOYKX7ji4Ny", "lx9KN5ozRt2OVPy8CvgD" ], "note_type": [ "review", "comment", "review" ], "note_created": [ 1457625460338, 1458271769747, 1458272022247 ], "note_signatures": [ [ "ICLR.cc/2016/workshop/paper/175/reviewer/10" ], [ "~Sumit_Chopra1" ], [ "ICLR.cc/2016/workshop/paper/175/reviewer/11" ] ], "structured_content_str": [ "{\"title\": \"Input-convex neural networks are proposed but could be investigated more carefully to provide additional insights\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"The submission investigates an interesting variant of neural networks, referred to as `input-convex.' Hereby, the composite function of a standard neural network, or more generally a neural network for structured output spaces, is restricted to be convex in the output space, i.e., the variable of interest for prediction, and optionally the input space. This translates into non-negativity constraints on some trainable parameters, as well as a convexity and non-decreasing assumption on the employed activation functions.\", \"summary\": \"--------\", \"clarity\": \"The paper is well written and the idea is easy to follow.\", \"quality\": \"The idea is generally well demonstrated, but some experiments are missing in order to judge the efficacy of the proposed modifications, e.g., providing inference and training time on MNIST as well as adding some more baselines.\", \"originality\": \"The investigated variant is new but related to recent work which should be reviewed more adequately. See comments below for details.\", \"significance\": \"Due to some missing important experiments (timing), it's hard to judge the significance of this work at the moment.\", \"pros\": \"Convexity in the output space recovers some guarantees for inference.\", \"cons\": \"Need for guarantees during inference hasn't been demonstrated and time for both inference and learning might be prohibitively expensive at the moment.\", \"comments\": \"---------\\n1. Why did the authors choose the name `input-convex' if convexity in the output space is the most desirable property? I think the title might be slightly confusing.\\n\\n2. Using the rectified linear unit as the activation function allows to rephrase inference as a large linear program. Did the authors investigate non-linear activation functions where inference amounts to solving constrained optimization problems?\\n\\n3. The non-negativity constraint on the parameters \\\\theta is missing in Eq. 5.\\n\\n4. For a reader it is desirable to get to know the difference in training and inference time between standard neural networks and the proposed `input-convex networks.' Hence, providing error over time in addition to Fig. 3 as well as a small table containing inference times seems worthwhile. I suspect inference and hence training to be time consuming for larger models, but an investigation is missing at the moment.\\n\\n5. Admittedly, 4 pages are rather constraining, but I think there is significant amount of very related work that should therefore be mentioned. E.g., work by D. Belanger and A. McCallum, `Structured Prediction Energy Networks,' and also recent work combining structured prediction with deep learning, e.g., by M. Jaderberg et al. (Deep Structured Output Learning for Unconstrained Text Recognition), S. Zheng et al. (Conditional Random Fields as Recurrent Neural Networks), L.-C. Chen et al. (Learning Deep Structured Models), A. Schwing and R. 
Urtasun (Fully Connected Deep Structured Networks) and references therein.\", \"minor_comments\": [\"---------------\", \"Aside from one note, the output space (\\\\cal Y) is never defined formally. It might be worthwhile to at least specify it explicitly for the experiments.\", \"I wasn't able to open the document in Acrobat. The authors may want to check.\"], \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Interesting variant of deep networks for potential use in structure prediction problems.\", \"comment\": \"The paper proposes a novel neural network model which can be potentially used for structure prediction problems. The convexity property over its inputs leads to fast inference during test time. However the constraints the model needs to satisfy seem too restrictive.\"}", "{\"title\": \"The paper proses a novel variant to the standard neural networks which can potentially be used for fast inference in structured prediction settings. However its usefulness is questionable.\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"Comment: Summary:\\nThe paper presents a novel variant to the standard neural network architecture. Under certain constrains the proposed architecture is convex in their input space. This convexity property facilitates fast inference over a subset of input variables making them highly suitable for structured prediction problems. The authors report rather simplistic experiments to back their claim.\", \"novelty\": \"The ideas proposed in the paper are fairly novel and very well motivated.\", \"clarity\": \"The paper is very well written and easy to read.\", \"significance\": \"While the ideas proposed in the paper sound quite impactful especially in the structured prediction setting, I have some serious reservations with respect to their actual utility. For starters, the constraints specified with the model are rather too restrictive. There are a large number of non-linear activation and pooling functions which one will not be able to use. In addition the non-negativity constraint on the parameters is even more restrictive. As a result the usefulness of the proposed model is not very convincing.\", \"quality\": \"While the paper is well written, the experimental section is extremely weak: almost non-existent. It is a bit strange that the authors motivate their proposed model by listing its extreme usefulness for structure prediction problems. However they validate their claim on two rather simplistic dataset: a toy dataset, and mnist. I wonder why.\", \"pros\": \"The paper presents a novel model which could potentially be used for fast structure prediction using deep networks. The paper is very well written and easy to read.\", \"cons\": \"While the model is interesting, it has some rather significant drawbacks. The constraints are quite limiting to make the model of any use for a real problem. In addition the experimental section of the paper is almost non-existent: the authors train and test their model on a synthetic data set and an mnist dataset. First, the authors do not compare their model against any other baseline. Second, the model was motivated to be useful for structure prediction problems, however it is tested on something completely different.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
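A minimal sketch of the constraint the reviewers discuss above: with convex, non-decreasing activations (e.g. ReLU) and non-negative weights in the later layers, the network output is convex in its input, and the non-negativity can be maintained by projecting after each parameter update. The two-layer shape, the random stand-in "update", and the projection scheme are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((8, 16))           # first layer: may be unconstrained
W2 = np.abs(rng.standard_normal((16, 1)))   # later layer: kept non-negative

def forward(x):
    """ReLU is convex and non-decreasing; with W2 >= 0 the output is a
    non-negative combination of convex functions, hence convex in x."""
    h = np.maximum(0.0, x @ W1)
    return h @ W2

def project_nonnegative(W):
    """Projection applied after every gradient step to keep W >= 0."""
    return np.maximum(W, 0.0)

# Stand-in for a gradient step, followed by the projection.
W2 = project_nonnegative(W2 - 0.01 * rng.standard_normal(W2.shape))
print(forward(rng.standard_normal((4, 8))).shape)  # (4, 1)
```

This also makes the reviewers' concern tangible: the projection keeps inference convex, but it is exactly the restriction they worry may limit what the model can represent.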
L7VOrG6lVsRNGwArs4qo
Do Deep Convolutional Nets Really Need to be Deep (Or Even Convolutional)?
[ "Gregor Urban", "Krzysztof J. Geras", "Samira Ebrahimi Kahou", "Ozlem Aslan Shengjie Wang", "Rich Caruana", "Abdelrahman Mohamed", "Matthai Philipose", "Matt Richardson" ]
Yes, apparently they do. Previous research by Ba and Caruana (2014) demonstrated that shallow feed-forward nets sometimes can learn the complex functions previously learned by deep nets while using a similar number of parameters as the deep models they mimic. In this paper we investigate if shallow models can learn to mimic the functions learned by deep convolutional models. We experiment with shallow models and models with a varying number of convolutional layers, all trained to mimic a state-of-the-art ensemble of CIFAR-10 models. We demonstrate that we are unable to train shallow models to be of comparable accuracy to deep convolutional models. Although the student models do not have to be as deep as the teacher models they mimic, the student models apparently need multiple convolutional layers to learn functions of comparable accuracy.
[ "deep", "shallow models", "deep convolutional nets", "convolutional", "functions", "models", "student models", "previous research", "caruana" ]
https://openreview.net/pdf?id=L7VOrG6lVsRNGwArs4qo
https://openreview.net/forum?id=L7VOrG6lVsRNGwArs4qo
ICLR.cc/2016/workshop
2016
{ "note_id": [ "lx9AMy7JXH2OVPy8CvYo", "E8VDZoYlkh31v0m2iDQW", "lx9rypE1wF2OVPy8CvyW" ], "note_type": [ "review", "review", "review" ], "note_created": [ 1458067795506, 1458060613271, 1457132999507 ], "note_signatures": [ [ "ICLR.cc/2016/workshop/paper/155/reviewer/10" ], [ "ICLR.cc/2016/workshop/paper/155/reviewer/11" ], [ "ICLR.cc/2016/workshop/paper/155/reviewer/12" ] ], "structured_content_str": [ "{\"title\": \"Nice to see an empirical negative result\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"review\": \"The paper is interesting.\\n\\nBayesian optimization (BO) is used to support the claim that there are no shallow models that are as good as the best deep models for CIFAR-10.\\nRationale being if BO couldn't find hyperparameters and a learning algorithm to train such a good shallow model, then there is no such shallow model to be found. This evidence is strongest when you search a large set of shallow models and training algorithms, the BO search appears to have converged, and BO search has found models at least as good as the best ones known to be in the search space.\", \"re\": \"finding the best known models at each depth\\nThis appears to be true but a resume of recent high-scores' citations would be appropriate.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"rating\": \"7: Good paper, accept\", \"review\": \"The paper confirms the importance of being deep and convolutional empirically by showing that the shallow models cannot achieve as high accuracy as deep and convolutional counterparts. The experiments has been held extensively and the conclusion made in the paper sounds quite convincing even though it is based only on empirical results.\\n\\nOverall, the claim of the paper is not surprising, but many details in the paper such as architecture selection or the effectiveness of distillation would be good to be presented. Nevertheless, it would be great if authors can provide more analysis why and when the shallow network fails to be as good as deep network than simply presenting the numbers. \\n\\nIt'll be good to provide training loss for student model and compare with the teacher model to show less overfitting.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \".\", \"rating\": \"10: Top 5% of accepted papers, seminal paper\", \"review\": \"This paper extends the work of Ba and Caruana, Do deep nets really need to be deep? by asking the same question about deep _convolutional_ nets, and reaches the opposite conclusion in this new context.\\n\\nI don't think the conclusion (that deep convents work better than not-deep-convnets on images) is going to surprise anyone; but, the architecture search is quite extensive, so at least this paper provides some circumstantial evidence to support the commonly held intuition.\\n\\nI do wish the authors had been similarly meticulous when writing the paper as they were when running experiments. 
There are a lot of moving parts involved here and I would have really appreciated it if some effort had been made to synthesize the results in a comprehensible way, rather than simply dumping all the details into paragraphs of latex and expecting the reader to untangle them.\\n\\nFor example, understanding what \\\"Teacher CNN 1\\\" on page 3 is requires digging into Section 4.7 in the appendix, finding the paragraph about the 129 trained CIFAR models that discusses the performance of the first and _fifth_ best models, to finally discover that this is the top three performing models from the \\\"Super Teacher\\\" ensemble (which then requires a bit more digging to verify that this is the same thing as the \\\"Ensemble of 16 CNNs\\\" from the table on page 3).\\n\\nI am recommending accepting this paper because the experimentation is quite thorough and I think the ICLR workshop is the right venue to present something like this, but I strongly encourage the authors to spend some effort making tables and diagrams and organizing the presentation of their hyperparameter search in a way that is comprehensible. I appreciate the level of detail, especially in a paper supporting a negative result with experiments, but the presentation needs serious work.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
yovBjmpo1ur682gwszM7
Fixed Point Quantization of Deep Convolutional Networks
[ "Darryl D. Lin", "Sachin S. Talathi", "V. Sreekanth Annapureddy" ]
In recent years increasingly complex architectures for deep convolution networks (DCNs) have been proposed to boost the performance on image recognition tasks. However, the gains in performance have come at a cost of substantial increase in computation and model storage resources. Fixed point implementation of DCNs has the potential to alleviate some of these complexities and facilitate potential deployment on embedded hardware. In this paper, we formulate and solve an optimization problem to identify the optimal fixed point bit-width allocation across layers to enable efficient fixed point implementation of DCNs. Our experiments show that in comparison to equal bit-width settings, optimized bit-width allocation offers >20% reduction in model size without any loss in accuracy on CIFAR-10 benchmark. We also demonstrate that fine-tuning can further enhance the accuracy of fixed point DCNs beyond that of the original floating point model. In doing so, we report a new state-of-the-art fixed point performance of 6.78% error-rate on CIFAR-10 benchmark.
[ "dcns", "point quantization", "deep convolutional networks", "performance", "point implementation", "accuracy", "benchmark", "recent years", "complex architectures", "deep convolution networks" ]
https://openreview.net/pdf?id=yovBjmpo1ur682gwszM7
https://openreview.net/forum?id=yovBjmpo1ur682gwszM7
ICLR.cc/2016/workshop
2016
{ "note_id": [ "L7VjyN649TRNGwArs4jA", "BNYVJpgqwU7PwR1riXWz", "0YrwzL9g8uGJ7gK5tROW" ], "note_type": [ "review", "review", "review" ], "note_created": [ 1457641776538, 1457636072992, 1457638894367 ], "note_signatures": [ [ "ICLR.cc/2016/workshop/paper/187/reviewer/12" ], [ "ICLR.cc/2016/workshop/paper/187/reviewer/10" ], [ "ICLR.cc/2016/workshop/paper/187/reviewer/11" ] ], "structured_content_str": [ "{\"title\": \"Interesting idea\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"This paper proposes a layers wise adaptive depth quantization of DCNs, giving an better tradeoff of error rate/ memory requirement than the fixed bit width across layers.\", \"some_points_are_not_clear_in_the_paper\": [\"how do you fine tune after quantization? The statement \\\"... 6.78% with floating point weights and 8-bit fixed point activations \\\" is not clear.\", \"-Seems that the convolutional layers are the only one quantized and they have less parameters than the the fully connected layers, how does the number of parameters growing up impact the performance?\", \"How do you choose kappa?\"], \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Good paper, accept\", \"rating\": \"7: Good paper, accept\", \"review\": \"This paper builds further upon the line of research that tries to represent neural network weights and outputs with lower bit-depths. This way, NN weights will take less memory/space and can speed up implementations of NNs (on GPUs or more specialized hardware).\\n\\nIn this paper they present a new heuristic for choosing the bit-depth of every layer differently, so as to trade-off speed, accuracy and model size more optimally then when using an equal bit depth for every layer.\", \"pro\": \"The paper is well written and the results are interesting and new.\", \"cons\": \"More experiments would be helpful. E.g., numbers on ImageNet, better comparison with previous art where they focus on fixed point networks during training.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"interesting idea but more experimental results needed\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"The authors describe an optimization problem for determining the bit-width for different layers of DCNs for reducing model size and required computation.\\n\\nThe described method is interesting and seems easy to implement. Experimental results on CIFAR-10 illustrate the benefits of approach with modest reduction in model size.\\n\\nHowever, more experiments and details should be given.\\nWhat about testing networks with just fully-connected layers? Is it the case that the ImageNet issue holds for CIFAR-100 as well? Details of how they fine tune after quantization? What is the previous state-of-the-art classification error on CIFAR-10 with fixed point? Plot w/ FLOPS required after quantization?\\nThese would help give better understanding of the significance of the work.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
K1VNq41L8f28XMlNCVvZ
Applying Representation Learning for Educational Data Mining
[ "Milagro Teruel", "Laura Alonso Alemany" ]
Educational Data Mining is an area of growing interest, given the increase in available data and the generalization of online learning environments. In this paper we present a first approach to integrating Representation Learning techniques in Educational Data Mining by adding autoencoders as a preprocessing step in a standard performance prediction problem. Preliminary results do not show an improvement in performance by using autoencoders, but we expect that fine-tuning of parameters will provide an improvement. Also, we expect that autoencoders will be more useful combined with different kinds of classifiers, like multilayer perceptrons.
[ "educational data mining", "autoencoders", "representation learning", "improvement", "educational data", "area", "interest", "increase", "available data", "generalization" ]
https://openreview.net/pdf?id=K1VNq41L8f28XMlNCVvZ
https://openreview.net/forum?id=K1VNq41L8f28XMlNCVvZ
ICLR.cc/2016/workshop
2016
{ "note_id": [ "ANYv5Gw1ZhNrwlgXCq9N", "K1VM7kjx9s28XMlNCVo3" ], "note_type": [ "review", "review" ], "note_created": [ 1457490821587, 1458085979915 ], "note_signatures": [ [ "ICLR.cc/2016/workshop/paper/162/reviewer/11" ], [ "ICLR.cc/2016/workshop/paper/162/reviewer/10" ] ], "structured_content_str": [ "{\"title\": \"good problem domain, but no clear contribution\", \"rating\": \"3: Clear rejection\", \"review\": \"The paper applies autoencoders in place of matrix factorization to induce latent representations of learners and problems in the KDD educational data mining challenge. Results are negative, underperforming a very weak \\\"average performance\\\" baseline.\\n\\nThis may be an interesting domain for representation learning, but the paper does not yet make a clear contribution. The problem formulation was already given in the KDD cup. The proposed approach is not original, nor is it customized to the problem in any way -- it's just a drop-in replacement for matrix factorization. The results are negative, failing to outperform a baseline that does not adapt to the student, while matrix factorization does yield a slight improvement. This suggests that the autoencoder may not be correctly applied.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Interesting application, but no clear contribution\", \"rating\": \"3: Clear rejection\", \"review\": \"The paper applies representation learning techniques, i.e., autoencoders, to learn hidden representations on KDDCup 2010 educational data. The reported results are not encouraging though.\\n\\nIt's an interesting new domain to apply representation learning techniques, but the technical contribution of the paper is very limited. The authors applied autoencoders out-of-box from some existing library, and reported numbers. Adding these latent representation did not bring any add-on value to the baseline suggests that it was probably not correctly applied. The paper is too short for readers to get much details on what have been tried, and what worked or didn't, and why.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
XL92M93mzhXB8D1RUWBz
Learning Document Embeddings by Predicting N-grams for Sentiment Classification of Long Movie Reviews
[ "Bofang Li", "Tao Liu", "Xiaoyong Du", "Deyuan Zhang", "Zhe Zhao" ]
Bag-of-ngram based methods still achieve state-of-the-art results for tasks such as sentiment classification of long movie reviews, though semantic information is partially lost for these methods. Many document embedding methods have been proposed to capture semantics, but they still can't outperform bag-of-ngram based methods on this task. In this paper, we modify the architecture of the recently proposed Paragraph Vector, allowing it to learn document vectors by predicting not only words, but n-gram features as well. Our model is able to capture both semantics and word order in documents while keeping the expressive power of learned vectors. Experimental results on the IMDB movie review dataset show that our model outperforms previous deep learning models and bag-of-ngram based models due to the above advantages.
[ "sentiment classification", "methods", "document embeddings", "long movie reviews", "semantics", "model", "long movie", "results", "tasks", "semantic information" ]
https://openreview.net/pdf?id=XL92M93mzhXB8D1RUWBz
https://openreview.net/forum?id=XL92M93mzhXB8D1RUWBz
ICLR.cc/2016/workshop
2016
{ "note_id": [ "Jy94O37KXhqp6ARvt5Ov", "GvVy9V7gOh1WDOmRiMYm", "0YrlVVoqwCGJ7gK5tRLE", "yovVQW8vVir682gwszRM", "1Wv0gK0z2tMnPB1oin11", "MwV0A2P6Ocqxwkg1t7GP" ], "note_type": [ "comment", "review", "comment", "review", "comment", "comment" ], "note_created": [ 1457678766913, 1458386235358, 1458200783464, 1457527974183, 1457678904525, 1458112299513 ], "note_signatures": [ [ "~li_bofang1" ], [ "~Nal_Kalchbrenner1" ], [ "~li_bofang1" ], [ "ICLR.cc/2016/workshop/paper/1/reviewer/10" ], [ "~li_bofang1" ], [ "~richard_socher1" ] ], "structured_content_str": [ "{\"title\": \"Author comment\", \"comment\": \"The review version of the workshop paper should be limited within 3 pages, so we have to abandon some detail. \\n\\nIn case anyone is interested, a full 7-page version of our paper can be found at http://arxiv.org/abs/1512.08183 , which contains more examples, model analysis and experimental results. We will use this version for subsequent publication.\"}", "{\"title\": \"ICLR 2016 workshop paper 1 reviewer 11\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"The authors propose a model to learn embeddings for documents that does not require supervision.\", \"pros\": [\"the model is simple\", \"good results on the IMDB dataset\"], \"cons\": [\"the large number of n-grams can create sparsity issues in the presence of many rarely occurring n-grams; negative sampling can also be very noisy when choosing from such a large set of candidates.\", \"to what extent do the document embeddings actually preserve the word order in the document? how do sentence embeddings look in this method and how do they compare with other methods such as skip-thought vectors?\", \"lack of analysis and verification of robustness of the model on multiple different datasets\", \"I marginally recommend the paper for acceptance to the workshop, also in view of the longer and more thorough version of the paper that is already available.\"], \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"author respond\", \"comment\": \"Dear reviewer:\\n\\nThanks for your comments. The document vectors are eventually sent to a logistic regression classifier for sentiment classification. We are very sorry for omitting this important information when we reduce this paper to the 3-page review version.\", \"more_details_of_the_training_process_can_be_found_in_the_original_7_page_version_of_this_paper\": \"http://arxiv.org/abs/1512.08183, and we will further improve the writtings in that version for subsequent publication.\\n\\nBest regards\\nBofang\"}", "{\"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"This paper proposes an alternative to a paragraph vector model, using a single vector to jointly predict a word and its ngram context.\\n\\nThe model works well when applied to a sentiment analysis task, slightly outperforming other approaches.\\n\\nOverall, the model proposed seems reasonable although not particularly novel. However the paper in its current form suffers from a lack of analysis and motivation. For instance it is not clear to me how the predictive power of a given context causes the PV model to learn insufficient document representations (paragraph 2, Model section). \\n\\nSimilarly the argument that ngram features cannot be used for the Paragraph Vector model doesn't quite make sense. 
Clearly in that formulation it would also be possible to use ngram context (excluding the word to be predicted). For both model formulations p(word, context | vector) and p(word | context, vector) it would have been good to explore ngram features to make the comparison more complete.\\n\\nI marginally lean towards accepting the paper on the understanding that this is a workshop and in the hope that the authors would extend their analysis in a subsequent publication.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"author respond to review 'ICLR 2016 workshop paper 1 reviewer 10'\", \"comment\": \"Dear reviewer:\\n\\nThanks for your patience and kind review for our paper.\\n\\nThe review version of the workshop paper should be limited within 3 pages, so we have to abandon some details. In case anyone is interested. Full 7-page version of our paper can be found at http://arxiv.org/abs/1512.08183, which contains more examples, model analysis and experimental results. We will use this version for subsequent publication.\", \"explanations_about_the_expressive_power_of_document_vector_of_pv_and_dv\": \"Yes, thanks to your advice, we have already modified the 7-page paper to clarify this (v3, uploading). For PV (model formulation: p(word | vector, context)), the vector predicts word with the 'help' of context, so the vector does not have to be very powerful. On the contrary, for DV-uni (model formulation:p(word | vector)), the vector predicts word by itself alone. The vector has to learn everything by itself, which results in more predictive power. Experimental results also have confirmed this (section 3.3 paragraph 2, DV(89.60) outperforms PV(88.73))\\n\\nExplanations about why we didn\\u2019t integrate ngram with PV: \\n\\nDue to the same reason above, we believe that p(word | vector, context/ngram) will only perform worse. In this formulation, vector predicts words with more 'help' of both context and ngram context. The only formulation we believe may be useful is p(word, context/ngram | vector, context/ngram), but it seems somewhat strange by using context to predict context, and we will study it in our future work. Thanks very much for your suggestions.\\n\\nBest regards\\nBofang\"}", "{\"title\": \"Review\", \"comment\": \"This paper proposes a new paragraph-vector like approach to document classification.\\n\\nThe paper is poorly written (\\\"there are no n-gram can be specified\\\") and it's not even clear how the paragraph vectors are eventually being used to classify the documents.\\nI hope that the authors improve the paper and describe how they classify the documents using the vectors.\\n\\nIt's a short paper and the performance on IMDB is very good. It's a hard dataset and a nice achievement so I still lean towards accept.\"}" ] }
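A toy sketch of the training idea in the abstract above: a document vector is learned by predicting the n-grams that occur in the document, here with a logistic loss and negative sampling. The tiny corpus, the vocabulary handling, and all hyperparameters are simplifying assumptions rather than the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def ngrams(tokens, n_max=2):
    """All 1..n_max grams of a token list, joined with '_'."""
    out = []
    for n in range(1, n_max + 1):
        out += ["_".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return out

docs = [["great", "movie", "great", "acting"], ["boring", "plot", "bad", "acting"]]
vocab = sorted({g for d in docs for g in ngrams(d)})
idx = {g: i for i, g in enumerate(vocab)}

dim, lr, neg = 16, 0.1, 2
D = rng.normal(0, 0.1, (len(docs), dim))      # document vectors (learned)
W = rng.normal(0, 0.1, (len(vocab), dim))     # n-gram output vectors (learned)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for epoch in range(50):
    for d, doc in enumerate(docs):
        for g in ngrams(doc):
            # one observed n-gram plus a few random negative samples
            targets = [(idx[g], 1.0)] + [(int(rng.integers(len(vocab))), 0.0) for _ in range(neg)]
            for t, label in targets:
                grad = sigmoid(D[d] @ W[t]) - label   # d(logistic loss)/d(score)
                gW, gD = grad * D[d], grad * W[t]
                W[t] -= lr * gW
                D[d] -= lr * gD

# D now holds document embeddings; the authors mention feeding such vectors
# to a logistic regression classifier for sentiment prediction.
print(np.round(D[:, :4], 2))
```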
OM0jKROjrFp57ZJjtNkv
Neural Network Training Variations in Speech and Subsequent Performance Evaluation
[ "Ewout van den Berg", "Bhuvana Ramabhadran", "Michael Picheny" ]
In this work we study variance in the results of neural network training on a wide variety of configurations in automatic speech recognition. Although this variance itself is well known, this is, to the best of our knowledge, the first paper that performs an extensive empirical study of its effects in speech recognition. We view training as sampling from a distribution and show that these distributions can have a substantial variance. These observations have important implications for the way results in the literature are reported and interpreted.
[ "variations", "speech", "variance", "neural network", "subsequent performance evaluation", "work", "results", "neural network training", "wide variety" ]
https://openreview.net/pdf?id=OM0jKROjrFp57ZJjtNkv
https://openreview.net/forum?id=OM0jKROjrFp57ZJjtNkv
ICLR.cc/2016/workshop
2016
{ "note_id": [ "vl6qw8NMYH7OYLG5in8q", "lx9o0N4jZT2OVPy8Cvyy", "71BERYqPlfAE8VvKUQqG", "0YrxNE3vKCGJ7gK5tR29" ], "note_type": [ "review", "comment", "review", "review" ], "note_created": [ 1458583392386, 1455946075227, 1457646936062, 1456642427341 ], "note_signatures": [ [ "ICLR.cc/2016/workshop/paper/194/reviewer/11" ], [ "~Han_Xiao1" ], [ "ICLR.cc/2016/workshop/paper/194/reviewer/12" ], [ "~Dong_Yu1" ] ], "structured_content_str": [ "{\"title\": \"A thoughtful and interesting analysis\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"review\": \"This paper presents a detailed analysis of the random effects of the training sample order and weight initialisation on the WER of two ASR tasks. The interesting finding is that this randomisation can in fact lead to substantial changes in performance -- such that similar magnitudes of variation would often be taken as significant differences between algorithms. This is an important, and perhaps shocking finding. The results are given added credibility by the use of state-of-the-art ASR systems in all work.\\n\\nThe paper is accompanied by an thorough and interesting analysis. Particularly interesting is the comparison of the random effects of training data ordering vs. network initialisation.\\n\\nOf course, one negative aspect, as the authors note themselves, is that this study is extremely computationally expensive, and for real practical benefit to come from it, more efficient methods would need to be found. However, this in no way diminishes the interest of this timely work.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"I am a freshman for this website.\", \"comment\": \"I am interested in this paper and don't know how to read its reviews?\\nThis paper is really good.\\n\\nHey, I am sorry for disturbing others, but I really dont know how to delete this post?\\nWho can help me?\\n\\n: )\"}", "{\"title\": \"issue worth reminding\", \"rating\": \"7: Good paper, accept\", \"review\": \"This paper shows experimentally that error bars for the exactly same NN models trained with exactly same data but with different initial seeds of pseudo-random number generators (initial weights, data shuffling, etc.) can be really significant. The range between best and worst models often exceeds rather genuine and incremental improvements reported in ASR field in recent years (which itself are rather inflated, since if we were improving by ~10\\\\% relative each year, ASR problem would be solved long time ago - this is just a digression, not a complaint regarding this particular paper of course). The findings are not new and well known to many who trained NNs but I think worth reminding, and a workshop is a very good venue for it.\\n\\nI know you reference papers for your model configurations (sorry, my fault I didn't had a chance to check if they contain this information) but it would be a good idea to put more details on how exactly you initialise the models - what range of initial parameters, is pre-training used, etc. -- since this paper is mostly about this aspect, it would be nice to have them in-place.\\n\\nOne criticism I have is the experiments investigate only sigmoid models, which are known to be particularly sensitive to weights initialisation, which can heavily affect training dynamics in deep models. 
It would be really nice if you could try if (and to what extent) this issue persist with piece-wise linear units.\\n\\nAlso, since you already have this, could you plot or write somewhere a short comment whether and to what extent training objective is correlated with the obtained WERs, is it at least monotnous? Otherwise, even if one was able to derive some uncertainty bounds on the NN outputs, this still would be an unsatisfactory predictor of WERs, at least with CE criterion.\\n\\nRegarding this sentence \\\"Interestingly, the starting point seems to be much more important than the network quality used to generate the lattices\\\" is not that surprising to me. Any denominator lattices will do, given the right paths (or right kind of mistakes) are in them, and those models are likely to make similar ones anyway.\\n \\nIt is also interesting and somehow a counter example of claims (of, by the way excellent paper, of Choromanska et al.,2014) that for sufficiently large models, it is rather hard to end up in poor or saddle point minima, and that most local minima will do a good job. It's subjective, but perhaps 15.5% for SWBD (the worst minima you report) isn't that bad at all.\\n\\nFig. 1 typo in the word cross-entropy *loos*\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"well known conclusion but maybe useful empirical result\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"This paper studies the variance in the results of neural network training on a wide variety of con\\ufb01gurations in automatic speech recognition. It raises the question on how to compare two deep learning models when it's difficult and sometimes impossible to run many experiments.\\n\\nThis variance problem is well known. The main contribution of this work is running many empirical experiments to demonstrate the problem in ASR and alerting readers on the right way to compare two different models. I think it can bring some values to the community. However, if a practical formula for comparing different models can be provided the significance of the paper would be greatly improved.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
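A small sketch of the style of reporting the paper above argues for: summarizing the spread of word error rates over repeated trainings with different random seeds, and checking whether a difference between two systems exceeds that seed-to-seed noise. All WER values below are invented for illustration.

```python
import numpy as np

# Hypothetical WERs (%) from retraining the same model with different random
# seeds for initialization and data shuffling -- the numbers are made up.
wer_system_a = np.array([15.1, 15.4, 14.9, 15.6, 15.2, 15.0, 15.5, 15.3])
wer_system_b = np.array([15.0, 15.3, 14.8, 15.4, 15.1, 14.9, 15.2, 15.1])

for name, w in [("A", wer_system_a), ("B", wer_system_b)]:
    print(f"system {name}: mean {w.mean():.2f}, std {w.std(ddof=1):.2f}, "
          f"range [{w.min():.1f}, {w.max():.1f}]")

# A crude check of whether the mean difference exceeds the seed-to-seed noise:
# bootstrap the difference of means and look at its spread.
rng = np.random.default_rng(0)
diffs = [rng.choice(wer_system_a, 8).mean() - rng.choice(wer_system_b, 8).mean()
         for _ in range(10000)]
lo, hi = np.percentile(diffs, [2.5, 97.5])
print(f"bootstrap 95% interval for WER(A) - WER(B): [{lo:.2f}, {hi:.2f}]")
```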
oVgoWpz5LsrlgPMRsB1v
Variational Inference for On-line Anomaly Detection in High-Dimensional Time Series
[ "Maximilian Sölch", "Justin Bayer", "Marvin Ludersdorfer", "Patrick van der Smagt" ]
Approximate variational inference has been shown to be a powerful tool for modeling unknown, complex probability distributions. Recent advances in the field allow us to learn probabilistic sequence models. We apply a Stochastic Recurrent Network (STORN) to learn robot time series data. Our evaluation demonstrates that we can robustly detect anomalies both off- and on-line.
[ "variational inference", "anomaly detection", "time series", "powerful tool", "unknown", "complex probability distributions", "recent advances", "field", "probabilistic sequence models" ]
https://openreview.net/pdf?id=oVgoWpz5LsrlgPMRsB1v
https://openreview.net/forum?id=oVgoWpz5LsrlgPMRsB1v
ICLR.cc/2016/workshop
2016
{ "note_id": [ "XL9yvYnBjIXB8D1RUGkq", "VAVXVGM0zTx0Wk76TAyR" ], "note_type": [ "review", "review" ], "note_created": [ 1457631662519, 1457457237967 ], "note_signatures": [ [ "ICLR.cc/2016/workshop/paper/105/reviewer/10" ], [ "ICLR.cc/2016/workshop/paper/105/reviewer/12" ] ], "structured_content_str": [ "{\"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"For the problem of anomaly detection in robot time series, the authors propose a generative approach with amortized variational inference and recurrent networks, in a similar fashion to Stochastic Recurrent Networks (STORNs) [Bayer & Osendorfer, 2014].\", \"the_paper_novelty_of_this_paper_is_the_application\": \"variational autoencoders (VAE) for anomaly detection, and in particular robot time series. Although the motivation of this paper is well explained and well written enough to convince me of the importance of this application, the description of the algorithm and experimental procedure lacks details and clarity which hold back the paper's potential quality.\", \"pros\": [\"motivation well written and well explained\", \"interesting application and approach to the problem\"], \"cons\": [\"lack of clarity/details:\", \"the newly introduced variables h_g and h_p in Eq (1) hide the actual dependencies between z_{1:T} and x_{1:T};\", \"the way the approximate posterior is parametrized in Eq (2) is not described at all (although the \\\"step-wise lower bound\\\" restricts the class of approximate posteriors);\", \"the anomaly detection algorithm from the trained VAE is not described well enough both for off-line and on-line (an equation might clarify), only the way to find the threshold is slightly described.\", \"Releasing data + code and revising the paper accordingly would be very helpful in understanding the paper more.\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Interesting application of applying variational inference to anomaly detection to robotic data. Unfortunately the paper lacks in clarity and experimental validation.\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": [\"For the problem of anomaly detection in time series robot data, the paper evaluates a generative time series model. For this they propose to use variational inference (VI) with recurrent networks, so called Stochastic Recurrent Networks (STORNs) [Bayer & Osendorfer, 2014].\", \"Positive points\", \"Important problem.\", \"Interesting application of VI and STORNs to robot data\", \"Quantitative evaluation on off-line data and qualitative evaluation on on-line data\"], \"weaknesses\": [\"1.\\tClarity:\", \"The paper is not self contained, in the sense that all formulas (1),(2),(3) are not sufficiently explained, both symbols and content, to understand them form this paper.\", \"Abbreviations are not introduced before they are used, e.g. 
\\u201cVI\\u201d, page 1\", \"2.\\tExperimental setup:\", \"Of the 10 waypoints, how many are actually traversed in each experiment?\", \"3.\\tExperimental evaluation:\", \"The main goal of the paper, *on-line* anomaly detection is not evaluated quantitatively, only qualitatively, why?\", \"The off-line detection [Figure 1] could have compared to Milacski et al, 2015.\", \"The experimental results are not discussed sufficiently; it is not clear what conclusions are drawn from them.\", \"An important aspect for anomaly detection is, how likely the learned model (in this case the model + threshold) would transfer to an unknown set (test set). Unfortunately, the paper only presents results of the ROC curve which sort of is upper bound on the performance. The paper should determine the threshold for given false positive rate on the training set, and see how this generalizes to a unseen test set.\", \"This line of work seems to suffer from common training and test sets. It would thus be great if the authors would release the corresponding data and annotations.\", \"Despite limited novelty the paper would make an interesting workshop paper if the experimental evaluation would have been carried out carefully.\", \"I strongly encourage the authors to release their dataset with annotations to allow reproducible research.\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
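A schematic sketch of the detection step discussed in the record above: score each time step under a trained generative sequence model and flag steps whose score falls below a threshold calibrated on normal data at a chosen false-positive rate. The Gaussian scoring function stands in for the STORN bound and is an assumption, as are the synthetic trajectories.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for per-timestep scores from a trained generative sequence model
# (for STORN this would be the step-wise variational lower bound).
def score(x, mean, std):
    """Per-dimension Gaussian log-likelihood, summed over dimensions."""
    return np.sum(-0.5 * ((x - mean) / std) ** 2 - np.log(std * np.sqrt(2 * np.pi)), axis=-1)

# "Normal" training trajectories: T timesteps x D joint positions.
T, D = 200, 7
normal = rng.normal(0.0, 1.0, (T, D))
mean, std = normal.mean(0), normal.std(0)

# Calibrate the threshold as a low percentile of scores on normal data,
# i.e. fix an acceptable false-positive rate (here ~1%).
threshold = np.percentile(score(normal, mean, std), 1)

# On-line use: a new trajectory with an injected disturbance.
test = rng.normal(0.0, 1.0, (T, D))
test[120:130] += 4.0                      # simulated anomaly (e.g. a collision)
flags = score(test, mean, std) < threshold
print("anomalous timesteps:", np.flatnonzero(flags))
```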
ANYzpXg3LcNrwlgXCq9G
Improving Variational Inference with Inverse Autoregressive Flow
[ "Diederik P. Kingma", "Tim Salimans", "Max Welling" ]
We propose a simple and practical method for improving the flexibility of the approximate posterior in variational auto-encoders (VAEs) through a transformation with autoregressive networks. Autoregressive networks, such as RNNs and RNADE networks, are very powerful models. However, their sequential nature makes them impractical for direct use with VAEs, as sequentially sampling the latent variables is slow when implemented on a GPU. Fortunately, we find that by inverting autoregressive networks we can obtain equally powerful data transformations that can be computed in parallel. We call these data transformations inverse autoregressive flows (IAF), and we show that they can be used to transform a simple distribution over the latent variables into a much more flexible distribution, while still allowing us to compute the resulting variables' probability density function. The method is computationally cheap, can be made arbitrarily flexible, and (in contrast with previous work) is naturally applicable to latent variables that are organized in multidimensional tensors, such as 2D grids or time series. The method is applied to a novel deep architecture of variational auto-encoders. In experiments we demonstrate that autoregressive flow leads to significant performance gains when applied to variational autoencoders for natural images.
[ "autoregressive networks", "variational inference", "inverse autoregressive flow", "variational", "vaes", "latent variables", "variables", "simple", "practical", "flexibility" ]
https://openreview.net/pdf?id=ANYzpXg3LcNrwlgXCq9G
https://openreview.net/forum?id=ANYzpXg3LcNrwlgXCq9G
ICLR.cc/2016/workshop
2016
{ "note_id": [ "k80Yn6JjgtOYKX7ji49K", "NL6V98ox6F0VOPA8ix0l", "91E0yx45wfkRlNvXUV3J", "WL9xVR1Zlc5zMX2Kf2mW" ], "note_type": [ "review", "review", "comment", "review" ], "note_created": [ 1457635468275, 1457706478210, 1457113445993, 1457647602245 ], "note_signatures": [ [ "~Danilo_Jimenez_Rezende1" ], [ "ICLR.cc/2016/workshop/paper/193/reviewer/11" ], [ "~Tim_Salimans1" ], [ "ICLR.cc/2016/workshop/paper/193/reviewer/12" ] ], "structured_content_str": [ "{\"title\": \"The paper is well written and clearly explained. It introduces an original type of normalizing flow for inference networks. The proposed model significantly improves the scalability and general usability of previous related work.\", \"rating\": \"9: Top 15% of accepted papers, strong accept\", \"review\": \"This paper introduces a new form of Normalizing Flows, the Inverse Autoregressive Flow, that is constructed by inverting an autoregressive map.\\nThe resulting maps are easier to scale compared to previously proposed normalizing flows and autoregressive posterior networks. These flexible maps are used to equip VAEs with better inference networks.\\n\\nThe originality of this paper lies in the key observation that, while being as flexible as Autoregressive maps, Inverse Autoregressive maps do not require sequential computation and can be easily parallelized.\\nFurthermore, as for Normalizing Flows, it is cheap to compute the log-det-Jacobian terms for Inverse Autoregressive Flows. These terms are necessary to compute the likelihood of transformed samples.\\n\\nThe paper is well written and clearly explained. Perhaps the order of its narrative could be improved a bit by first introducing the challenges of flow-based inference models (e.g. sequentiality in autoregressive models and the usual O(d^3) cost in computing log-det-Jacobian terms). Once these challenges are explained, the authors could then explain how the Inverse Autoregressive Flows address them and only then show that they can be interpreted as the inverse of an autoregressive map.\\n\\nThe authors show that training convolutional VAEs with the Inverse Autoregressive Flows as inference network results in model with better log-likelihoods. \\nInstead of reporting the bound and the estimated likelihoods in Table 1, it would be good to show the estimated likelihoods and the KL-divergence between the true posterior and the variational posterior (difference between the variational bound and the estimated log-likelihoods). In this way, the readers could see more easily that the KLD between the true posterior and the variational posterior is actually smaller.\\n\\nThe idea introduced in this paper lives in the intersection of previous work such as Normalizing Flows, NICE and MADE. But the particular instantiation of the model introduced in this paper constitutes a significant improvement of previous work in terms of scalability and general applicability to amortized inference.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"The paper introduces new algorithm for improving approximate posterior in variational inference and obtains very good results on images, both without and with this technique.\", \"rating\": \"9: Top 15% of accepted papers, strong accept\", \"review\": \"Pros: Authors extend normalizing flows to a more powerful family of function - autoregressive functions. 
Importantly despite autoregressive structure the computations are all parallel which is obtained by running the autoregressive transformation in the opposite direction. Second authors did a large amount experimentation with variational auto encoder architectures and obtain a very good performance in cifar dataset both with and without the inverse auto-regressive flow.\", \"cons\": \"While they did explain the network details in the text, it is hard to parse it accurately enough to be able to reproduce. It would be good if they wrote a table or diagram with all the transformations and all the details. They can also publish the code.\", \"additional_comments\": \"It would be good to emphasize which z is fed into the decoder z_K or z_0 (I assume z_K). \\n\\nAbout the equivalence in section 4.4: The autoregressive prior model is a different model then the model of this paper. One can indeed change coordinates of the model of this paper so that the prior is autoregressive and approximate posterior is diagonal, but then one also has to change the model so that it undoes the autoregressive prior - defeating the purpose of autoregressive prior - making cheap deep generative model (deep \\u201calong the layer\\u201d)\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Updated paper\", \"comment\": \"Here's a slightly updated version of our workshop submission:\", \"https\": \"//drive.google.com/file/d/0B3OM09ncycBoZF93UWN6a1hFWDQ/view?usp=sharing\"}", "{\"rating\": \"8: Top 50% of accepted papers, clear accept\", \"review\": \"This paper proposes to use the Gaussianization transform of an autoregressive model as an efficient way to implement the encoder of a variational autoencoder.\\n\\nThe proposed approach makes a lot of sense to me. Both the inverse / Gaussianization transform of a well-working autoregressive model and the posterior of a good variational autoencoder with Gaussian prior should on average map to an approximately Gaussian random variable, so it is reasonable to assume that the former is a useful building block for the latter.\\n\\nThe comparisons seem a bit limited but appropriate for a workshop paper.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
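A minimal numerical sketch of the transformation the abstract above describes: with autoregressive shift and scale statistics computed from the preceding dimensions, the inverse map can be applied to all dimensions in parallel, and its log-Jacobian-determinant is simply minus the sum of the log scales. The linear lower-triangular parameterization below is a simplifying stand-in for a masked network such as MADE.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5

# Autoregressive statistics: mu_t and log_sigma_t depend only on z_{<t},
# enforced here with strictly lower-triangular weight matrices.
W_mu = np.tril(rng.normal(0, 0.3, (d, d)), k=-1)
W_ls = np.tril(rng.normal(0, 0.3, (d, d)), k=-1)

def iaf_step(z):
    """One inverse autoregressive flow step; all dimensions in parallel."""
    mu = z @ W_mu.T
    log_sigma = z @ W_ls.T
    z_new = (z - mu) * np.exp(-log_sigma)
    log_det = -np.sum(log_sigma, axis=-1)      # log |d z_new / d z|
    return z_new, log_det

# Start from a simple diagonal-Gaussian sample (as in a plain VAE posterior)
# and track the density of the transformed sample via change of variables.
z0 = rng.normal(size=(3, d))                   # batch of 3 samples
log_q = -0.5 * np.sum(z0 ** 2 + np.log(2 * np.pi), axis=-1)
z1, log_det = iaf_step(z0)
log_q_z1 = log_q - log_det                     # density of the transformed sample
print(np.round(log_q_z1, 3))
```

Because the Jacobian is triangular with diagonal entries exp(-log_sigma_t), the determinant is cheap to evaluate, which is what makes the flow practical as a building block for flexible approximate posteriors.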
ZY9xMOwxPf5Pk8ELfEjV
Unsupervised Learning with Imbalanced Data via Structure Consolidation Latent Variable Model
[ "Fariba Yousefi", "Zhenwen Dai", "Carl Henrik Ek", "Neil Lawrence" ]
Unsupervised learning on imbalanced data is challenging because, when given imbalanced data, current models are often dominated by the major category and ignore the categories with small amounts of data. We develop a latent variable model that can cope with imbalanced data by dividing the latent space into a shared space and a private space. Based on Gaussian Process Latent Variable Models, we propose a new kernel formulation that enables the separation of the latent space and derive an efficient variational inference method. The performance of our model is demonstrated with an imbalanced medical image dataset.
[ "imbalanced data", "data", "latent space", "unsupervised", "current model", "major category", "categories", "small amount", "latent variable model" ]
https://openreview.net/pdf?id=ZY9xMOwxPf5Pk8ELfEjV
https://openreview.net/forum?id=ZY9xMOwxPf5Pk8ELfEjV
ICLR.cc/2016/workshop
2016
{ "note_id": [ "xnrpMMM86C1m7RyVi3R4", "XL91Bw38nTXB8D1RUG9v", "jZ9yOZQNYinlBG2Xfzn0" ], "note_type": [ "review", "comment", "review" ], "note_created": [ 1457644644770, 1457954736802, 1458065907151 ], "note_signatures": [ [ "ICLR.cc/2016/workshop/paper/147/reviewer/10" ], [ "~Zhenwen_Dai1" ], [ "ICLR.cc/2016/workshop/paper/147/reviewer/11" ] ], "structured_content_str": [ "{\"title\": \"Review\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"review\": \"The paper proposes an extension of the GP-LVM model for the problem of unsupervised learning with unbalanced categories. The aims is to avoid the dominance of the main category when modeling the data. The idea of the article is to decompose the learned latent representation in two pieces: one latent space is shared across all the data while another latent space is used to model each category. Such assumption is obtained by defining a new kernel function which is a sum of a kernel on the first latent space plus a category dependent kernel on the second space. Experiments are made on patches extracted from biological images. The article proposes a visualization of the latent spaces, examples of generated patches but also classification performances by using a (weighted) SVM on the latent space. The model is compared to other techniques, showing the effectiveness of the approach\\n\\nThe underlying idea is simple, intuitive and interesting. It is a significant contribution for a workshop paper and I like this approach. The two first pages of the paper are very clear, but after that the writing can be improved. Particularly, equations 4 to 8 are difficult to understand without reading the original GP-LVM paper since some notations are not defined : K_{u,u} for example. Moreover, I don't really understand how kernel k' is used in equations 6 to 8 since the category information (equations 2 and 3) is missing. Some additional informations could be provided by the authors on this part. Concerning the experimental part, the results are interesting and made on different setups (generation + classification). My only concern is the way the training set is obtained: the paper focuses on unbalanced categories which is clearly the case for the original dataset composed of 146,562 patches where only 550 are positive. So it seems to be a nice use-case... But the authors decided to only keep 5000 negative images creating a more balanced dataset. What are the results of the different methods on the original (really unbalanced) dataset ? The new one is not so \\\"difficult\\\".\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Reply to Reviewer 10\", \"comment\": \"We thank the reviewer for the insightful and detailed comments. Sorry for the confusion in the model explanation. We will improve the model presentation. There are some typos in equations 3 to 8. The kernel k' should not contain the catgory label (c_x and c'_x) and the kernel k_p in psi-statistics equations (4-8) should contain the category information.\\n\\nThe sub-sampling of the negative examples is to purely due to the compuational reason. Without parallelization, it is difficult to handle the size of orginal dataset (~146k). We are aiming at the original problem in future work. 
On the other hand, in the shown experiment which is less imbalanced, our method still does reasonably in generation task and out-performs weighted-svm in classification task.\"}", "{\"title\": \"A natural and practical extension of GP-LVMs to model data from imbalanced classes\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"review\": \"The paper proposes a latent variable model for generatively modeling imbalanced data. The data considered are of the form {(y_i, c_i)}_{i=1}^N where, for example, each y_i \\\\in R^D represents a feature vector to be generatively modeled and each c_i \\\\in {1, 2, ..., C} represents a categorical label. The data are imbalanced when, for example, C=2 and the number of observations in the first class is much greater than the number in the second, e.g. writing N_i \\\\triangleq #{k : c_k = i} we have N_1 >> N_2. Modeling imbalanced data can be challenging because the latent features often end up modeling only the dominating class. This paper adapts the GP-LVM machinery to include both shared and private latent factors, thus allowing the private latent factors of each class to model class-specific details.\\n\\nThe construction is very natural. Indeed, it would not be surprising if there are some related ideas even in classical statistics, particularly in terms of factor analysis, and so for a full conference paper a related works section would be nice. However, the use of more general GP-LVM machinery is almost certainly new and has much greater reach, especially in the context of machine learning.\\n\\nI was slightly confused by the `unsupervised learning' terminology, especially since the class labels are used in the definition of the kernel (Eqs. (2) and (3)). While it's probably accurate enough to include `unsupervised' in the title, it would be nice to clarify the use of the labels in other parts of the text and notation; in particular, you could write that you aim to build a conditional probabilistic generative model p(Y | c), and that while the labels are used to structure that model, the emphasis is latent variable representation learning with a generative objective on the data Y.\\n\\nOverall, the paper is a great workshop paper and people will be interested in reading and discussing it.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
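A small sketch of the kernel structure described in the abstract above: a shared kernel on one slice of the latent space plus a class-gated private kernel on the remaining slice, so that minority-class structure is not swamped by the majority class. The RBF choice, the dimensions, and the hard class indicator are illustrative assumptions.

```python
import numpy as np

def rbf(A, B, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel between rows of A and B."""
    d2 = np.sum(A ** 2, 1)[:, None] + np.sum(B ** 2, 1)[None, :] - 2 * A @ B.T
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def shared_private_kernel(X, labels, q_shared):
    """k = k_s(x_s, x_s') + 1[c == c'] * k_p(x_p, x_p')."""
    Xs, Xp = X[:, :q_shared], X[:, q_shared:]
    same_class = (labels[:, None] == labels[None, :]).astype(float)
    return rbf(Xs, Xs) + same_class * rbf(Xp, Xp)

rng = np.random.default_rng(0)
N, q_shared, q_private = 6, 2, 2
X = rng.normal(size=(N, q_shared + q_private))      # latent points
labels = np.array([0, 0, 0, 0, 1, 1])               # imbalanced classes
K = shared_private_kernel(X, labels, q_shared)
print(np.round(K, 2))
```

The gated term is still a valid (positive semi-definite) kernel: the class-indicator matrix is the Gram matrix of one-hot class vectors, and an elementwise product of positive semi-definite matrices remains positive semi-definite.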
6XAwLR8gysrVp0EvsEW3
Close-to-clean regularization relates virtual adversarial training, ladder networks and others
[ "Mudassar Abbas", "Jyri Kivinen", "Tapani Raiko" ]
We propose a regularization framework where we feed an original clean data point and a nearby point through a mapping, which is then penalized by the Euclidean distance between the corresponding outputs. The nearby point may be chosen randomly or adversarially. We relate this framework to many existing regularization methods: it is a stochastic estimate of penalizing the Frobenius norm of the Jacobian of the mapping as in Poggio & Girosi (1990), it generalizes noise regularization (Sietsma & Dow, 1991), and it is a simplification of the canonical regularization term by the ladder networks in Rasmus et al. (2015). We also study the connection to virtual adversarial training (VAT) (Miyato et al., 2016) and show how VAT can be interpreted as penalizing the largest eigenvalue of a Fisher information matrix. Our main contribution is discovering connections between the proposed and existing regularization methods.
[ "virtual adversarial training", "ladder networks", "regularization", "others", "nearby point", "mapping", "regularization methods", "vat", "regularization framework" ]
https://openreview.net/pdf?id=6XAwLR8gysrVp0EvsEW3
https://openreview.net/forum?id=6XAwLR8gysrVp0EvsEW3
ICLR.cc/2016/workshop
2016
{ "note_id": [ "2xwnGxw0ZupKBZvXtQX7", "zvwWvn38QSM8kw3ZingL", "XL9jYPxZ0cXB8D1RUG75", "5QzqAxN1vIZgXpo7i3N6", "P7VnoJr49SKvjNORtJ6l", "91Ex3O98OHkRlNvXUV0M", "MwVknv8vzcqxwkg1t7Wr" ], "note_type": [ "comment", "review", "comment", "comment", "review", "comment", "review" ], "note_created": [ 1458227122208, 1457380408455, 1458228183346, 1458228335154, 1457635836912, 1458227765557, 1457625848919 ], "note_signatures": [ [ "~Mudassar_Abbas1" ], [ "ICLR.cc/2016/workshop/paper/113/reviewer/10" ], [ "~Mudassar_Abbas1" ], [ "~Mudassar_Abbas1" ], [ "ICLR.cc/2016/workshop/paper/113/reviewer/12" ], [ "~Mudassar_Abbas1" ], [ "ICLR.cc/2016/workshop/paper/113/reviewer/11" ] ], "structured_content_str": [ "{\"title\": \"Comments from the authors on the reviews, updated manuscript\", \"comment\": \"We thank the reviewers for their comments and constructive criticism. We have the overall impression that the paper and the work has been appreciated by the reviewers, each of the reviews stating that the paper either presents interesting novel results (R11,R10) or advances work in an useful area of research (R12), with overall positive rating.\\n\\nWe have made a revised version of the paper in which we have tried to take into account the comments in the reviews suggesting updates, as well as possible. We find the reviews have enabled us to critically improve the document. An updated version of the manuscript is available from:\", \"https\": \"//drive.google.com/file/d/0B8h1xkyYY_msakE0bWtBNk9qOTg/view?usp=sharing\\n\\nBelow are our responses and actions to specific comments and criticisms in the reviews.\", \"review_12\": \"--------------\\n\\nWe thank the reviewer for pointing us the Bachman et al., 2014 article, a relevant article we had missed but fortunately not affecting our main contributions. We have taken a number of steps to take the issue into account in the revised version of the paper, including the following:\\n* We have removed the claim of proposing a new regularization method\\n* We now state studying as opposed to proposing the particular regularization method and state that it can be seen as an instance of the \\nPseudo-Ensemble Agreement regularization method proposed in Bachman et al., 2014.\\n\\nWe thank the reviewer for sharing helpful experiences on regularization and also comments in terms of future work directions.\", \"review_11\": \"--------------\\n\\nWe thank the reviewer for pointing out a connection of the studied regularization framework to existing methods in the literature. We would appreciate the reviewer for providing us with any references on regularizers similar to the studied one, in the field of ''graph-based semi-supervised learning\\\" mentioned, so as to improve the paper on discussing relevant related work. We had also missed a paper pointed out in review 1, and made changes to our manuscript taking that into account; see comments above for details related to these, especially noticing that the main contributions of the paper have not been affected. \\n\\nWe are thankful for the reviewer on sharing opinions on effective regularization methods. 
On enforcing constraints on the entire space of the output function: it is of expected interest to us to only require the function to be smooth in areas densely populated with data and the function can be non-smooth elsewhere.\", \"review_10\": \"--------------\\n\\nWe thank the reviewer for suggestions on improving the document, and we have tried to take these into account in the revised paper as follows:\\nRemark 1): the epsilon is now introduced after the definition at the end of the second paragraph in section 2.\\nRemark 2): The VAT-section (now Sec. 3.4) has been modified for clarity in terms of the minimization/maximization issue pointed out; all methods are assuming minimization, and we have introduced R_VAT and used it in describing the results, for clarity.\"}", "{\"rating\": \"7: Good paper, accept\", \"review\": \"This paper studies the connection between different types of regularization for neural network training. The authors found that these regularizations can be expressed as different functionals of the singular values of the Jacobian matrix. The result is interesting and could be presented as a workshop paper. There are a few minor remarks to be taken into account.\\n\\n1.\\tIn the paragraph after Eq. (1), the notation epsilon is mentioned before its actual usage in the first paragraph on the 2nd page. It could be defined after that.\\n2.\\tIt is not clear why VAT is maximizing the maximum eigenvalue of J^TJ (minimizing the negative value of the maximum eigenvalue of J^TJ ) while the others are performing the minimization. Clarification on this would be helpful.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Author response\", \"comment\": \"Please see above for responses from the authors and for a link to a revised version of the manuscript.\"}", "{\"title\": \"Author response\", \"comment\": \"Please see above for responses from the authors and for a link to a revised version of the manuscript.\"}", "{\"title\": \"Useful area of research, method not particularly novel\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"The regularizer examined in this paper was previously presented in a more general form in \\\"Learning with Pseudo-Ensembles\\\" by Bachman et al. (NIPS 2014). See Eq. 2 on page 3 of: http://papers.nips.cc/paper/5487-learning-with-pseudo-ensembles.pdf. Bachman et al. consider controlling variation in any observable property of a model's behaviour w.r.t. some distribution over perturbations of the model and/or its inputs.\\n\\nIn my experience, penalizing first-order variation in euclidean space is often only marginally useful. It suffers from a strong bias towards shrinking all network outputs towards a constant value. It may be better to penalize stochastic approximations of higher-order derivatives.\\n\\nIn the semi-supervised classification setting, penalizing variation in distribution space using KL divergence works well. The KL divergence becomes robust to relatively large changes in the raw (i.e. pre softmax) output once the predictions become confident, so it suffers less from the tendency to shrink all outputs towards a constant value. Penalizing worst-case KL divergence over a ball of fixed radius was tried in the VAT paper, and penalizing expected-case KL divergence was tried in the paper by Bachman et al. 
Penalizing worst-case behaviour seems to be a more effective regularizer, but comes with a higher computational cost.\\n\\nAs you develop this work further, you should consider looking at the literature on \\\"stochastic progamming\\\" (https://en.wikipedia.org/wiki/Stochastic_programming) and \\\"robust optimization\\\" (https://en.wikipedia.org/wiki/Robust_optimization). Stochastic programming is about solving an optimization problem in expectation w.r.t. some distribution over misspecification of the problem, and robust optimization is about solving an optimization problem in the worst case w.r.t. some bounds over misspecification of the problem. A lot of mathematically rigorous work has been done in these fields, which may help with your formal analyses.\\n\\n******************\\nReview summary\\n******************\\n1. The regularizer described in this paper is not particularly novel.\\n\\n2. Developing a stronger formal understanding of connections between different practical instantiations of the underlying concept (i.e. robustness to model/input perturbation) is worthwhile. This paper provides a decent step in that direction.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Author response\", \"comment\": \"Please see above for responses from the authors and for a link to a revised version of the manuscript.\"}", "{\"title\": \"An interesting view on commonly used regularizers\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"This paper proposes a regularization based on the distance of predictions for similar inputs. This kind of regularization has been widely used in the 2000s for graph-based semi-supervised learning but may also be applied to fully supervised learning.\\n\\nThe connection with the penalty on the Jacobian is known but the implications of that regularization when using adversarial examples is interesting and new to me.\\n\\nMy main concern is about regularizations which explicitly enforce constraints on the output of the function. There are cases where we want the output of a function to change quickly as a function of the input and I believe enforcing the same smoothness across the entire space will likely be too strong or too weak.\\n\\nDue to this concern, I'm slightly leaning towards rejection but, as this is pure personal preference, I am fine with the paper being accepted.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
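A minimal sketch of the regularizer stated in the abstract above: the squared Euclidean distance between a mapping's outputs at a clean point and at a randomly perturbed nearby point, averaged over a batch. The tiny two-layer network and the noise scale are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer network standing in for the mapping f being regularized.
W1, b1 = rng.normal(0, 0.5, (10, 4)), np.zeros(10)
W2, b2 = rng.normal(0, 0.5, (3, 10)), np.zeros(3)

def f(x):
    h = np.tanh(x @ W1.T + b1)
    return h @ W2.T + b2

def close_to_clean_penalty(x, eps_scale=0.1):
    """Mean squared Euclidean distance between outputs at x and at x + eps."""
    eps = eps_scale * rng.normal(size=x.shape)   # randomly chosen nearby points
    diff = f(x) - f(x + eps)
    return np.mean(np.sum(diff ** 2, axis=1))

x = rng.normal(size=(32, 4))                     # batch of clean inputs
print("penalty:", close_to_clean_penalty(x))
# For small isotropic eps this is a stochastic estimate of the (scaled)
# Frobenius norm of the Jacobian of f, which is the connection the paper draws.
```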
XL9vKJ98DCXB8D1RUGV0
Joint Stochastic Approximation Learning of Helmholtz Machines
[ "Haotian Xu", "Zhijian Ou" ]
Though with progress, model learning and performing posterior inference still remain a common challenge for using deep generative models, especially for handling discrete hidden variables. This paper is mainly concerned with algorithms for learning Helmholtz machines, which are characterized by pairing the generative model with an auxiliary inference model. A common drawback of previous learning algorithms is that they indirectly optimize some bounds of the targeted marginal log-likelihood. In contrast, we successfully develop a new class of algorithms, based on stochastic approximation (SA) theory of the Robbins-Monro type, to directly optimize the marginal log-likelihood and simultaneously minimize the inclusive KL-divergence. The resulting learning algorithm is thus called joint SA (JSA). Moreover, we construct an effective MCMC operator for JSA. Our results on the MNIST dataset demonstrate that the JSA’s performance is consistently superior to that of competing algorithms like RWS, for learning a range of difficult models.
[ "algorithms", "jsa", "helmholtz machines", "progress", "model learning", "posterior inference", "mains", "common challenge" ]
https://openreview.net/pdf?id=XL9vKJ98DCXB8D1RUGV0
https://openreview.net/forum?id=XL9vKJ98DCXB8D1RUGV0
ICLR.cc/2016/workshop
2016
{ "note_id": [ "5Qz2PyrAxCZgXpo7i3y5", "GvV1OvLjQi1WDOmRiMQJ" ], "note_type": [ "review", "review" ], "note_created": [ 1457480986828, 1457634952104 ], "note_signatures": [ [ "ICLR.cc/2016/workshop/paper/159/reviewer/12" ], [ "ICLR.cc/2016/workshop/paper/159/reviewer/11" ] ], "structured_content_str": [ "{\"title\": \"An interesting perspective, large memory requirements, article needs a bit of editing, experiments could use slightly better baselines\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"This is an interesting paper, offering a novel perspective on training directed graphical models.\\n\\nIt is known that learning in directed generative models using gradient ascent on the marginal log-likelihood requires one to obtain samples from the posterior probability distribution. The paper suggests getting those samples by running an MCMC chain that leaves the posterior probability invariant. The MCMC chain used is based on either independent proposals from an auxiliary distribution Q, and an MH accept-reject step, or on multiple-trial Metropolis independence sampler based on the same auxiliary distribution Q. The auxiliary distribution Q is also learnt by gradient descent on the KL divergence between the posterior and Q, where samples from the posterior are obtained in the same way as before.\\n\\nThis method of training is novel - previous methods used either MCMC chains based on Gibbs sampler (Neal, 1992 - unfortunately not cited in the article), or used optimization of a lower bound on log-likelihood, or biased estimates of the gradient of the log-likelihood.\\nThe method is most directly comparable to the Reweighted Wake Sleep method, because ultimately the updates to the parameters follow the same equations every time the proposed transition is accepted (but reuse previous samples when the transition is rejected, which is an important difference from the RWS algorithm).\\n\\nOne drawback of the proposed method is that it requires to store a state of the MCMC chain, one state of latent variables configuration per datapoint in the dataset. It might not be too restrictive for smaller datasets, like MNIST, but is prohibitively expensive for larger datasets.\\n\\nThe experiments use a published implementation of RWS as a baseline. This is unfortunately not the best practice, as the implementation of the proposed algorithm might use slightly different initialization, hyperparameters, or length of training, which makes the contribution of the algorithm itself harder to separate. This is exacerbated by the fact that the difference in log-likelihoods of trained models is fairly small (although significant). It would be better to use exactly the same initialization and hyperparameters for the RWS implementation and for the proposed algorithm.\\n\\nAnother comparison is of the proposed algorithm (MIS version) to (non-reweighted) Wake-Sleep. 
In this comparison the proposed algorithm converges to significantly better performing models, indicating that storing the previous states of the MCMC chain, and following the proper Metropolis accept-reject step does provide a significant advantage.\\n\\nThe paper has multiple typos and grammar issues, and would benefit from additional editing.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Interesting new method for training Helmholtz Machines using a persistent MCMC chain\", \"rating\": \"7: Good paper, accept\", \"review\": \"The authors present a new method to perform maximum likelihood training for Helmholtz machines. This paper follows up on recent work that jointly train a directed generative model p(h)p(x|h) and an approximate inference model q(h|x). The authors provide a concise summary of previous work and their mutual differences (e.g. Table 1).\\n\\nTheir new method maintains a (persistent) MCMC chain of latent configurations per training datapoint and it uses q(h|x) as a proposal distribution in a Metropolis Hastings style sampling algorithm. The proposed algorithm looks promising although the authors do not provide any in-depth analysis that highlights the potential strengths and weaknesses of the algorithm. For example: It seems plausible that the persistent Markov chain could deal with more complex posterior distributions p(h|x) than RWS or NVIL because these have to find high probability configurations p(h|x) by drawing only a few samples from (a typically factorial) q(h|x). It would therefore be interesting to measure the distance between the intractable p(h|x) and the approximate inference distribution q(h|x) by estimating KL(q|p) or by estimating the effective sampling size for samples h ~ q(h|x) or by showing the final testset NLL estimates over the number of samples h from q (compared to other methods). It would also be interesting to see how this method compares to the others when deeper models are trained.\", \"in_summary\": \"I think the paper presents an interesting method and provides sufficient experimental results for a workshop contribution. For a full conference or journal publication it would need to be extended.\\n\\nI also found some grammatical issues and I would recommend additional proofreading.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
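A small sketch of the sampling step the reviews above describe: the inference model q(h|x) proposes a latent configuration, and a Metropolis-Hastings independence test decides whether to replace the cached state for that data point. The Bernoulli toy model, the factorial q, and the fixed random parameters are assumptions for illustration, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
D, H = 4, 3

# Toy generative model p(h)p(x|h) with Bernoulli h and Bernoulli x,
# and a factorial inference model q(h|x); parameters fixed at random.
prior = rng.uniform(0.2, 0.8, H)
W = rng.normal(0, 1.0, (D, H))
V = rng.normal(0, 1.0, (H, D))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def log_bernoulli(b, p):
    return np.sum(b * np.log(p) + (1 - b) * np.log(1 - p))

def log_p_xh(x, h):                       # log p(h) + log p(x|h)
    return log_bernoulli(h, prior) + log_bernoulli(x, sigmoid(W @ h))

def log_q_h(h, x):                        # log q(h|x), factorial Bernoulli
    return log_bernoulli(h, sigmoid(V @ x))

def mh_step(x, h_old):
    """Independence Metropolis-Hastings step with q(h|x) as the proposal."""
    h_new = (rng.random(H) < sigmoid(V @ x)).astype(float)
    log_ratio = (log_p_xh(x, h_new) + log_q_h(h_old, x)
                 - log_p_xh(x, h_old) - log_q_h(h_new, x))
    accept = np.log(rng.random()) < log_ratio
    return (h_new if accept else h_old), accept

# One persistent latent state is cached per training example and updated each
# time that example is visited; here we just run a short chain for one x.
x = (rng.random(D) < 0.5).astype(float)
h = (rng.random(H) < 0.5).astype(float)
accepts = 0
for _ in range(100):
    h, a = mh_step(x, h)
    accepts += a
print("acceptance rate:", accepts / 100, "current h:", h)
```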
ZY9x1mJ3zS5Pk8ELfEjD
Neural Variational Random Field Learning
[ "Volodymyr Kuleshov", "Stefano Ermon" ]
We propose variational bounds on the log-likelihood of an undirected probabilistic graphical model p that are parametrized by flexible approximating distributions q. These bounds are tight when q = p, are convex in the parameters of q for interesting classes of q, and may be further parametrized by an arbitrarily complex neural network. When optimized jointly over q and p, our bounds enable us to accurately track the partition function during learning.
[ "variational bounds", "flexible", "distributions", "bounds", "tight", "parameters" ]
https://openreview.net/pdf?id=ZY9x1mJ3zS5Pk8ELfEjD
https://openreview.net/forum?id=ZY9x1mJ3zS5Pk8ELfEjD
ICLR.cc/2016/workshop
2016
{ "note_id": [ "71BErRxxQFAE8VvKUQqW", "WL9MDrjwMI5zMX2Kf2KA", "gZ9vAQVn8tAPowrRUAZq", "MwVkVqMN1Iqxwkg1t7Wx" ], "note_type": [ "review", "review", "comment", "review" ], "note_created": [ 1457637226622, 1457325764130, 1458171486215, 1457621834825 ], "note_signatures": [ [ "ICLR.cc/2016/workshop/paper/191/reviewer/11" ], [ "ICLR.cc/2016/workshop/paper/191/reviewer/12" ], [ "~Volodymyr_Kuleshov1" ], [ "ICLR.cc/2016/workshop/paper/191/reviewer/10" ] ], "structured_content_str": [ "{\"title\": \"review of \\\"neural variational random field learning\\\"\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"This paper proposes to estimate the log-partition function of Markov random fields by optimizing a variational bound over a model of proposal distributions. For the proposal distributions the authors consider uniform mixtures of exponential families.\\n\\nThe paper is concise and presents interesting ideas. A central idea of the paper is to define a proposal model for which the variational bound is convex. \\nUnfortunately the arguments seem to be flawed or to require modifications. \\n\\nCOMMENTS\\nIn Section 3.2. ``One can easily check that a non-negative concave function is also ... are in the exponential family, it follows that $\\\\sum_k \\\\pi_k q_{\\\\phi_k}(x)$ is log-concave, and hence the above expression is convex.'' \\n\\nThis appears faulty. \\nFor an exponential family $q_{\\\\phi_k}(x)$ is log-concave in the natural parameters. \\nHowever, a sum of log-concave functions is not necessarily log-concave. \\nIn fact, one finds examples of sums of exponential families that are not entry-wise log-concave in the natural parameters. \\nKindly verify / explain / improve this. \\n\\nOTHER COMMENTS \\nIt may be worthwhile to consider the convexity problem in relation to ``convex exponential families'' and ``mixtures of exponential families with disjoint supports''. For these kinds of models, the maximizers of the likelihood function can be expressed in closed form or in terms of those of the individual mixture components. \\n\\nMINOR COMMENTS \\n* It would be good to mention whether x is discrete or continuous, scalar or vector. Also whether $\\\\theta$ is finite dimensional. \\n* In Section 2 ``closed-form expression'' is confusing. Given that $I$ is intractable, it seems that $I^2$ is also intractable. \\n* In Section 2 variance of the estimate $\\\\hat I$, a $1/n$ factor seems to be missing. \\n* In Section 2 the variance vanishes when $p=q$ can be inferred from the fact that $w(x)=I$ is constant, without using Jensen's inequality. \\n* In Section 3 ``natural algorithm for computing'' should be ``estimating''. \\n* In Section 3 ``as minimizing a tight upper bound'' should not say ``tight''. \\n* In Section 3 ``This approach is complicated by the fact that unlike earlier methods that parametrized conditional distributions $q(z|x)$ over hidden variables $z$, our setting does not admit a natural input/output to a neural network.'' Why do you need an input/output in order to use a neural network? Maybe I just don't understand this sentence. \\n* In Section 3.2 $\\\\pi_k$ has not been introduced. 
I suspect this is just $1/K$?\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"review of \\\"Neural variational random field learning\\\"\", \"rating\": \"7: Good paper, accept\", \"review\": \"This paper presents an intriguing idea for training MRFs (or CRFs) together with an inference network by training the inference net to minimize a variational upper bound on the partition function. Unlike standard variational inference, this bound is in the right direction for learning. Compared with methods like tree-reweighted BP, this method potentially allows for more accurate bounds, as the approximating distribution can be made arbitrarily close to the true one.\\n\\nI'm uncomfortable with referring to (1) as a \\\"variational upper bound\\\" on the partition function, since the stochastic estimate is an upper bound only in expectation, and may underestimate the true value with overwhelming probability. (E.g., consider the case where q is uniform and p is peaked.) Similarly, (2) is technically a lower bound on the likelihood, but when estimated with samples from q, it may overestimate the true value with overwhelming probability. I'm not sure what the proper wording would be, but I think the abstract over-promises as currently written.\\n\\nAs the authors point out, Monte Carlo estimates of (1) and (2) could have very high variance, just like importance sampling based estimates of Z. I suspect this problem would be very hard to overcome on full-size models. \\n\\nStill, the idea is quite neat, and could be the basis of future work on training MRFs/CRFs. I would certainly recommend acceptance to the workshop track.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Clarifying a misunderstanding about the convexity of our objective\", \"comment\": \"First of all, thank you for the detailed review!\\n\\nWe agree with most of the comments. The only issue we would like to clarify is your concern about the correctness of our argument that the log-likelihood bound is concave.\", \"our_argument_goes_as_follows\": \"1. A concave, positive function is also log-concave (because if g is concave and non-decreasing, and f is concave, then g(f(x)) is concave)\\n2. A sum of exponential families is concave and positive, hence log-concave.\\n\\nIn particular, we don't claim that a sum of log-concave functions is log-concave. We were running out of space, and our so explanation was a bit terse. Apologies for the confusion. We will make things clearer in the final version.\\n\\nIf this was your main concern, then it would be great if you could update your score; if there are other issues, please let us know, and we'll be happy to clarify!\\n\\nThanks,\\nVolodymyr\"}", "{\"title\": \"Review of \\\"Neural Variational Random Field Learning\\\"\", \"rating\": \"7: Good paper, accept\", \"review\": \"This paper presents a simple and intuitive approach to variational inference for undirected graphical models. The paper is concise, and the bound is derived quite simply based on the variance of an importance sampler that estimates the marginal likelihood. (This unbiased estimate brings to mind pseudomarginal samplers.) 
The linearity bound on the log expectation also makes sense in order to derive a lower bound which produces unbiased stochastic gradients during optimization.\\n\\nI am concerned however with the scalability of the approach to both larger data and higher dimensions (the preliminary experiment is very small in both respects). The approach seems to share the downfalls of importance sampling in general. It is unclear the extent to which recent adaptive importance sampling techniques applied to variational inference are practical (unlike Burda et al. (2016) for example, there is not a baseline to say that at the least, it does not produce a worse bound than the typical KL(p ||q)).\\n\\nNevertheless, this is an interesting direction worth exploring for undirected graphical models, and the proposed directions using expressive variational models and recent importance sampling techniques make sense.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
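The reviews of this record discuss an importance-sampling estimator of the partition function whose weight variance vanishes as q approaches the normalized model, which is what makes learning q worthwhile. The sketch below shows only that generic estimator, not the paper's exact bound; the toy unnormalized Gaussian, the proposal, and the sample size are assumptions used for the check.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_Z_importance(log_p_tilde, sample_q, log_q, n=50_000):
    """Monte Carlo estimate of log Z for an unnormalized model p_tilde using a
    tractable proposal q: E_q[p_tilde(x) / q(x)] = Z, and the weight variance
    shrinks to zero as q approaches the normalized target."""
    xs = sample_q(n)
    log_w = log_p_tilde(xs) - log_q(xs)
    m = log_w.max()
    return m + np.log(np.mean(np.exp(log_w - m)))   # log-mean-exp for stability

# Toy check: p_tilde(x) = exp(-x^2 / 2) has Z = sqrt(2 * pi) ~ 2.507.
log_p_tilde = lambda x: -0.5 * x**2
sample_q    = lambda n: rng.normal(0.0, 2.0, size=n)
log_q       = lambda x: -0.5 * (x / 2.0)**2 - np.log(2.0 * np.sqrt(2.0 * np.pi))
print(np.exp(log_Z_importance(log_p_tilde, sample_q, log_q)))  # close to 2.507
```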
Qn8lxPngJFkB2l8pUYxg
Neural Text Understanding with Attention Sum Reader
[ "Rudolf Kadlec", "Martin Schmid", "Ondřej Bajgar", "Jan Kleindienst" ]
Two large-scale cloze-style context-question-answer datasets have been introduced recently: i) the CNN and Daily Mail news data and ii) the Children's Book Test. Thanks to the size of these datasets, the associated task is well suited for deep-learning techniques that seem to outperform all alternative approaches. We present a new, simple model that is tailor-made for such question-answering problems. Our model directly sums attention over candidate answer words in the document instead of using it to compute a weighted sum of word embeddings. Our model outperforms models previously proposed for these tasks by a large margin.
[ "datasets", "model", "neural text", "attention sum", "cnn", "children", "book test" ]
https://openreview.net/pdf?id=Qn8lxPngJFkB2l8pUYxg
https://openreview.net/forum?id=Qn8lxPngJFkB2l8pUYxg
ICLR.cc/2016/workshop
2016
{ "note_id": [ "wV61EMPqkcG0qV7mtLR1", "91vvzKgJlSkRlNvXUVGO", "E8V39o1PDf31v0m2iDp2" ], "note_type": [ "comment", "comment", "review" ], "note_created": [ 1463668964218, 1462742182306, 1457048579226 ], "note_signatures": [ [ "~Rudolf_Kadlec1" ], [ "~Felix_Hill1" ], [ "ICLR.cc/2016/workshop/paper/108/reviewer/11" ] ], "structured_content_str": [ "{\"title\": \"good point\", \"comment\": \"You are right. We will include this perspective in next version of our long paper on arxiv.\\nThank you\\nRuda\"}", "{\"title\": \"parallels between this approach and self-supervision\", \"comment\": \"This is great work, and a really nice effective modification of previous approaches for these tasks.\\n\\nMaybe the authors could also point out how elimination of the output layer has similarities with the self-supervision method we use in the Goldilocks Principle. While we compute prediction probabilities over all possible answers at test time, training with self-supervision also has the effect of bypassing the output layer, so in this sense the work generalises (and makes clearer) what we already proposed. It would be great if you could add this detail. Of course, there are also similarities with Pointer Networks. \\n\\nThank you and good luck!\\nFelix Hill\"}", "{\"title\": \"nice paper with useful insights into recently proposed QA datasets\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"review\": \"This paper proposed a simple variation on attentional models designed for QA tasks that achieves state-of-the-art results on two different datasets. The task is, when given a document and a query, output the answer to the query where the answer is contained within the document. While previous work uses the attention scores to compute a weighted average of words in the document and then feeds that average to an output layer, the authors instead select the argmax of the attention scores as the answer (summing scores when a word is repeated multiple times in a document). The drawback of this approach is that it cannot work if an answer is not contained within the document, but this isn't an issue for the datasets in question.\\n\\nThe evaluation section could be stronger; the authors remark that \\\"single models can display considerable variation of results which can then prove difficult to reproduce\\\"; it would be best to report standard deviation so readers can quantify this variation. However, the results are significantly better than attentional LSTMs and memory networks despite the relative simplicity of the proposed model. The result is interesting and I think would be a valuable contribution as a workshop paper.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
D1VDZ5kMAu5jEJ1zfEWL
Revisiting Distributed Synchronous SGD
[ "Jianmin Chen", "Rajat Monga", "Samy Bengio", "Rafal Jozefowicz" ]
The recent success of deep learning approaches for domains like speech recognition (Hinton et al., 2012) and computer vision (Ioffe & Szegedy, 2015) stems from many algorithmic improvements but also from the fact that the size of available training data has grown significantly over the years, together with the computing power, in terms of both CPUs and GPUs. While a single GPU often provides algorithmic simplicity and speed up to a given scale of data and model, there exists an operating point where a distributed implementation of training algorithms for deep architectures becomes necessary. Previous work has focused on asynchronous SGD training, which works well up to a few dozen workers for some models. In this work, we show that synchronous SGD training, with the help of backup workers, can not only achieve better accuracy, but also reach convergence faster with respect to wall time, i.e., use more workers more efficiently.
[ "synchronous sgd", "workers", "recent success", "deep learning approaches", "domains", "speech recognition", "hinton et", "computer vision", "ioffe", "szegedy" ]
https://openreview.net/pdf?id=D1VDZ5kMAu5jEJ1zfEWL
https://openreview.net/forum?id=D1VDZ5kMAu5jEJ1zfEWL
ICLR.cc/2016/workshop
2016
{ "note_id": [ "vl6lE85MwH7OYLG5in8D", "L7VjR3lN0IRNGwArs4mM", "1WkWKBDlvFMnPB1oinmB", "wVqPGwjQ9IG0qV7mtLxk", "3QxzNJ96BSp7y9wltPQy", "p8W8xLgRzFnQVOGWfpxP" ], "note_type": [ "comment", "review", "comment", "review", "review", "comment" ], "note_created": [ 1459314228523, 1457611317770, 1459314660345, 1458161232574, 1457647072204, 1459314878261 ], "note_signatures": [ [ "~Jianmin_Chen1" ], [ "~gallinari_patrick1" ], [ "~Jianmin_Chen1" ], [ "ICLR.cc/2016/workshop/paper/80/reviewer/12" ], [ "ICLR.cc/2016/workshop/paper/80/reviewer/10" ], [ "~Jianmin_Chen1" ] ], "structured_content_str": [ "{\"title\": \"Thanks a lot for your review.\", \"comment\": \"We think the most important information about the hardware system is the K40 GPUs we use which is described in the paper. Other than that, it is basically the standard intel servers with standard networking. One thing to note (also added to the paper) is that the resources were shared with other jobs so the worker could also be slowed/preempted by other jobs besides broken hardware. And our system handles that well.\"}", "{\"title\": \"aa\", \"rating\": \"9: Top 15% of accepted papers, strong accept\", \"review\": \"The paper introduces a synchronous parallel stochastic gradient descent algorithm and compares its performance on different tasks with a reference asynchronous SGD. The behavior of the two algorithms are compared for different configurations (number of parallel machines - tasks).\", \"the_paper_addresses_an_important_problem\": \"defining efficient distributed algorithms for large scale deep architectures. Its contribution is to provide experimental results on different problem types and these results will certainly be interesting for a large community. The comparison of synchronous \\u2013 asynchronous optimization methods is also a topic of interest by itself and this paper contributes in this direction.\\n\\nIt is not clear in the paper if the parameter servers in the comparison have the same configuration.\\nThe algorithm setting (gradient step policy, etc) could certainly be adapted to a specific implementation (synchronous or asynchronous). It seems that the same parameter adaptation method is used for both the synchronous and asynchronous methods. Does it provide a fair comparison? These parameters might influence the performance as much as the 2 different implementations do. How robust is the comparison wrt this aspect? E.g. could the performance order be reversed with another version of SGD?\\n\\nThere is no indication on the distribution of work to the backup workers. How do you distribute data to these specific workers and how different is it from the other workers? \\n\\nThere is no discussion on mixed synchronous - asynchronous implementations. This could generalize the idea of backup workers. Could this be an interesting option and why?\\n\\nThe passage on the overlap of the gradient computation for the different layers by the end of section is not clear for me and could be made more precise.\", \"pro\": \"comparison of two different options (synchronous vs asynchronous) for parallelizing SGD. Experiments on different configurations.\", \"cons\": \"not clear (for me) if the SGD algorithm parameters setting itself is not as important as the option choice (sync. 
vs async) for the performance.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Thanks a lot for your review.\", \"comment\": [\"I'm wondering if the authors have not used the gradient clipping in the RNN experiments as well. If the provided results are with clipping, I'm also wondering how the method would perform without the clipping in RNN.\"], \"ans\": \"Increasing time of slowest worker is a less severe case for a \\u201cdead\\u201d worker, the backup workers can handle this well. Even if you have too many dead workers, our system can still work with some worker doing computation for multiple batches, which will be slower but expected if the resource is limited.\"}", "{\"rating\": \"7: Good paper, accept\", \"review\": \"The paper discusses a very important problem which deserves more attention from the academic community. Large-scale deep learning is important for industrial applications. The paper proposes a slight modification to classical sync-SGD where instead of waiting on all threads, the central controller only waits for a majority fraction of it, and proceeds as if it has received information from all threads. This is motivated from the observation that in multi-worker systems, typically, there are only a few outlying slow workers which negatively impact the overall system's speed (Pareto principle). Ignoring these workers should thus give a huge speedup.\\n\\nAs with any systems machine learning paper, details about the actual system used would have been very useful, i.e. more information about the hardware used.\\n\\nThe results in the paper are impressive, which makes this paper well worth talking about, and thus deserves to be in the workshop proceedings.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Revisited the potential of the synchronous SGD method for distributed training of large deep learning models by performing experiments on a few of models. The results will be useful to the community. But the novelty and originality is rather limited. It seems okay as a workshop paper.\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": [\"Assuming that all nodes run at about the same speeds, using the Sync-SGD instead of the Async-SGD seems a reasonable thing that one could try. And it is interesting to see actually that Sync-SGD performs better than Async-SGD.\", \"I'm wondering if the authors have not used the gradient clipping in the RNN experiments as well. If the provided results are with clipping, I'm also wondering how the method would perform without the clipping in RNN.\", \"In the DRAW experiments, the authors only mention the convergence speed, but not provide the accuracies.\", \"The Async-SGD seems to be a simple one in the class of Async-SGD methods. It would be interesting to see comparisons to more advanced Async-SGD methods as well.\", \"It would have been interesting to see how the Sync-SGD is affected by increasing the response time of the slowest worker.\"], \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Thanks a lot for your review.\", \"comment\": \"-The algorithm setting (gradient step policy, etc) could certainly be adapted to a specific implementation (synchronous or asynchronous). 
It seems that the same parameter adaptation method is used for both the synchronous and asynchronous methods. Does it provide a fair comparison? These parameters might influence the performance as much as the 2 different implementations do. How robust is the comparison wrt this aspect? E.g. could the performance order be reversed with another version of SGD?\", \"ans\": \"Applying the gradients of upper layers can be overlapped by the gradient computation of bottom layers. You can think of it as a streaming/pipelining so the real overhead is only waiting to collect/apply gradients on the bottom layers. We will also try to make it more clear in the paper. Thanks for pointing it out.\"}" ] }
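This record's abstract and reviews describe synchronous SGD in which N + b workers are launched but the parameter update is formed from the first N gradients to arrive, so backup workers absorb stragglers. A small sketch of that aggregation rule, with simulated gradients and arrival times standing in for a real parameter-server setup; the worker count and timing distribution are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sync_step_with_backups(worker_grads, arrival_times, n_required):
    """Average the first `n_required` gradients to arrive and drop the rest,
    so a straggling or preempted worker cannot stall the synchronous update."""
    order = np.argsort(arrival_times)
    used = order[:n_required]
    avg_grad = np.mean([worker_grads[i] for i in used], axis=0)
    return avg_grad, [int(i) for i in order[n_required:]]   # dropped workers

# 26 workers launched, update formed from the fastest 24 (2 backups).
grads = [rng.standard_normal(4) for _ in range(26)]
times = rng.exponential(1.0, size=26)
g, dropped = sync_step_with_backups(grads, times, n_required=24)
```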
lx9l4r36gU2OVPy8Cv9g
Resnet in Resnet: Generalizing Residual Architectures
[ "Sasha Targ", "Diogo Almeida", "Kevin Lyman" ]
ResNets have recently achieved state-of-the-art results on challenging computer vision tasks. In this paper, we create a novel architecture that improves ResNets by adding the ability to forget and by making the residuals more expressive, yielding excellent results. ResNet in ResNet outperforms architectures with similar amounts of augmentation on CIFAR-10 and establishes a new state-of-the-art on CIFAR-100.
[ "resnet", "residual architectures", "residual architectures resnets", "results", "computer vision tasks", "novel architecture", "resnets", "ability", "residuals", "expressive" ]
https://openreview.net/pdf?id=lx9l4r36gU2OVPy8Cv9g
https://openreview.net/forum?id=lx9l4r36gU2OVPy8Cv9g
ICLR.cc/2016/workshop
2016
{ "note_id": [ "q7WjxMJV4T8LEkD3t7v6", "NL6lGOZw3i0VOPA8ixRW", "P7q1N6orrHKvjNORtJZK", "ANYRpXR5lUNrwlgXCqQ0", "MwVkZ51oZIqxwkg1t7WW", "zvgmpqpvgiM8kw3ZinM9", "0YrwQoRW8FGJ7gK5tROk" ], "note_type": [ "comment", "comment", "comment", "review", "review", "comment", "review" ], "note_created": [ 1458853682426, 1457593813932, 1458853572190, 1457647216676, 1457549279302, 1458853519578, 1457658389491 ], "note_signatures": [ [ "~Sasha_Targ1" ], [ "~Diogo_Almeida1" ], [ "~Sasha_Targ1" ], [ "ICLR.cc/2016/workshop/paper/140/reviewer/10" ], [ "ICLR.cc/2016/workshop/paper/140/reviewer/12" ], [ "~Sasha_Targ1" ], [ "ICLR.cc/2016/workshop/paper/140/reviewer/11" ] ], "structured_content_str": [ "{\"title\": \"Updated paper\", \"comment\": \"An updated version of the paper is posted here: https://drive.google.com/file/d/0BxX9BAoclX5fNEJTZFIxTU1qZGs/view?usp=sharing\"}", "{\"title\": \"Re: Interesting idea, and promising experiments, but have flaws\", \"comment\": \"We thank the reviewer for the comments on our paper, which bring up some important points we clarify below\\n\\n> \\\"since the ResNet Init architecture was questionable (see \\u201cCons\\u201d), building RiR with real ResNet (not ResNet Init) would be more reasonable.\\\"\\n\\nWe also tried the architecture of a ResNet in Resnet architecture using standard ResNets for both the inner and outer connections, and this didn't perform significantly differently from standard ResNets in the architectures we've tried. Because it seemed intuitive that having another set of identity connections would not cause a large difference in behavior, we did not include this result.\\n\\n> \\\"no clear evidence was provided on why the ordinary network can be better than ResNet\\\"\\n\\nIt is true we did not find clear evidence that the ResNet Init was always better than a standard ResNet (though it should be noted experiments where the ResNet outperformed were on an architecture tuned for the ResNet). Our intention in presenting the ResNet Init was in the appeal of its architecture of an ordinary CNN which can be used in any existing CNN architecture but still realizes performance benefits. In light of this, we feel a fairer comparison is of the ResNet Init to ordinary CNNs. In our results, ResNet Init consistently outperforms equivalent CNN architectures (Tables 1, 2, 3, 5). \\n\\nWe conduct the experiments reported in Table 5 to show an example in which the ResNet Init is applied to an existing model, ALL-CNN-C (Springenberg et al., 2014), giving improvements in performance with no changes to the architecture beyond choice of initialization. While it would be interesting to know the performance of other ResNet architectures, unlike for ResNet Init as straightforward a comparison of residual architectures and their ordinary CNN equivalent is not possible for the standard ResNet, which adds depth and certain constraints on how dimensionality reduction occurs.\\n\\n> \\\"The good performance on Cifar might attribute to the wide architecture not rather than the proposed method\\\"\\n\\nThe insight that the ResNet was sometimes better than the ResNet Init led us to believe that the benefits were not coming from shorter credit assignment paths as many believe to be the source of the benefits of ResNets, but instead from acting as a strong regularizer on the network's activations. 
This hypothesis on regularization led us to want to try a shallower and wider architecture, which we hypothesized would help emphasize the strengths of the standard ResNet (though this hypothesis was clearly not sufficient since the ResNet Init performed well on this architecture). \\n\\nWe agree with the reviewer the wide architecture contributes to the good performance on CIFAR. However, as shown by inclusion of the results from equivalent non-residual wide models as baselines for the residual architectures in Tables 2 and 3, wide architectures alone were clearly not sufficient to obtain state of the art performance on these datasets. We thus believe that the RiR architecture led to a significant improvement and in fact the large number of parameters in wide models could highlight the benefit of regularization by residual architectures (see above).\\n\\n> \\\"more exploration on the RiR and \\u201cforgetting\\u201d idea is more promising than selling the ResNet Init.\\\"\\n\\nWe appreciate the reviewer's interest in the RiR architecture and our preliminary ideas for forgetting based architectures. We are currently conducting more experiments related to these architectures, which we included to try and understand if the ResNet retaining too much of the input signal is a potential issue with residual architectures. The intended focus of the paper is RiR (hence the title of ResNet in ResNet) and as indicated in the paper we find the best results from this architecture, which uses both shortcut connections and ResNet Init to yield improved performance over the standard ResNet. Because ResNet Init is a necessary part of the RiR architecture, we felt the clearest exposition included the explanation of what the ResNet Init was and we presented results of the ResNet Init compared to both ResNets and standard CNNs in order to provide a thorough analysis of this component in the RiR architecture.\"}", "{\"title\": \"re: Interesting direction, but missing/incorrect information\", \"comment\": \"Thanks to the reviewer for these comments. We have updated the paper to incorporate the feedback, which is posted here: https://drive.google.com/file/d/0BxX9BAoclX5fNEJTZFIxTU1qZGs/view?usp=sharing\\n\\nWe agree RiR maintains the fixed shallow residual subnetwork size of the ResNet architecture. However, our results with varying numbers of layers per residual block show that the RiR architecture allows training of deeper residuals compared to the original ResNet (Figure 4).\", \"a_summary_of_the_changes_in_response_to_the_review_follows\": [\"We add a related work section with more information describing similarities and differences to existing work (including LSTM, Grid-LSTM, Highway Networks, and SCRNs)\", \"We focus the paper with additional experiments on the generalized residual architecture and RiR and remove forget gate experiments\", \"We reword the descriptions stating our networks forget information which may be misleading due to the shared name with forget gates. Our intended meaning is that intermediates could be used without propagating them deeper in the network (in ResNets this can only be done for very shallow intermediates within a block)\", \"RiR architectures demonstrate good results at depths of 30+ layers (in the updated version, we include results at 150+ layers), depths at which the original ResNet already shows advantages over standard CNNs. 
This implies the benefits may be related to both optimization and generalization, which in our opinion are coupled problems.\"]}", "{\"title\": \"Interesting experiments\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"The authors propose an initialization scheme based on some comparisons to the ResNet architecture. They also replace CONV blocks with the proposed ResNetInit CONV blocks to obtained a Resnet in Resnet (RiR). These experiments are needed, the connections made between the models in the paper are interesting.\\n\\nThat being said, a few recommendations to the authors:\\n\\n- The discussion of ResNet Init and RiR is not very clear and I did not understand Section 2 well on my first reading. Please also expand more clearly on the differences between the 4 models in terms of the runtime, or number of parameters, etc. How are the parameters W initialized? This is not mentioned in the paper as far as I can tell, which seems like a very important point - are they drawn from gaussian?\\n\\n- I would encourage the authors to focus their contribution and not add orthogonal half-baked experiments. For example either develop the forget gate ideas and report on them properly or I recommend not including them at all.\\n\\n- Are the authors certain that only 5 papers from the entire body of scientific literature is relevant to this work?\\n\\nI am slightly leaning to accept this work for workshop submission, provided that the paper is clarified, the contribution focused, and that the differences between all the architectures are better compared (e.g. FLOPS? or Wall clock time? or Parameters?)\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Interesting idea, and promising experiments, but have flaws\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"Paper summary:\\n\\nThis paper proposed a generalization of Residual Network (ResNet)(, which turns out to be an ordinary convolutional network. Based on this idea, it proposed the convolutional network initialization method using ResNet, and the ResNet in ResNet architecture. In addition, it also explored the data-depend forgetting gate.\", \"pros\": \"1. The forgetting gate idea was interesting and substantially different from both the ResNet and ordinary convolutional network. It is worth further explorations. \\n2. ResNet in ResNet (RiR) was an interesting idea. However, since the ResNet Init architecture was questionable (see \\u201cCons\\u201d), building RiR with real ResNet (not ResNet Init) would be more reasonable.\\n3. Performance on Cifar-10/100 were quite good.\", \"cons\": \"1. The proposed architecture was essentially an ordinary convolutional network. Although the insight was interesting, but no clear evidence was provided on why the ordinary network can be better than ResNet. After all, the improved performance of ResNet might be due to the constrained architecture. Making it more flexible as proposed in this paper might be harmful. The underperformance of ResNet Init over ResNet Table 1 demonstrated the concern. \\n2. Initializing ordinary neural networks with ResNet was interesting (Table 5 Left). But the performance for ResNet was not reported. If ResNet Init outperforms ResNet Init, ResNet Init will become not very useful. \\n3. 
The good performance on Cifar might attribute to the wide architecture not rather than the proposed method.\", \"overall\": \"This paper provided interesting ideas and insights for ResNet. Some experimental results were promising. But the flaws in the method were somehow significant to me, which hindered me from recommending it for acceptance. I think more exploration on the RiR and \\u201cforgetting\\u201d idea is more promising than selling the ResNet Init.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"re: Interesting experiments\", \"comment\": \"Thanks to the reviewer for these comments. We have updated the paper to incorporate the feedback, which is posted here: https://drive.google.com/file/d/0BxX9BAoclX5fNEJTZFIxTU1qZGs/view?usp=sharing\", \"a_summary_of_the_changes_in_response_to_the_review_follows\": [\"We now include an appendix and related work section\", \"We update Section 2 to improve clarity, and add a section in appendix with more detailed implementation of the generalized residual block\", \"We now include information on hyperparameters including initialization used (MSR init)\", \"We now include the exact architectures used and the number of parameters in each in the appendix. ResNet Init and RiR require no additional computation/parameters over CNNs and ResNets, respectively. Difference in number of parameters between CNN and ResNet is due to the projection convolutions when increasing dimensionality.\", \"We focus the paper with additional experiments on the generalized residual architecture and RiR and remove forget gate experiments\"]}", "{\"title\": \"Interesting direction, but missing/incorrect information\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"The authors propose a new way to initialize the weights of a deep feedfoward network based on inspiration from residual networks, then apply it for initialization of layers in a residual network with improved results on CIFAR-10/100.\\n\\nThe basic motivation for this paper is interesting, but as of now there is a lot of missing information and the report feels rather rushed. In particular:\\n\\nThe abstract is inaccurate with respect to the experiments actually performed in the paper. An architecture with the ability to 'forget' is only mentioned without detail towards the end of the paper with a single experiment.\", \"introduction\": \"- 'Residuals must be learned by fixed size shallow subnetworks, despite evidence that deeper networks are more expressive'. \\nThe proposed RiR architecture can use a shallower subnetwork but not a deeper one compared to ResNet, so it doesn't fully fix this issue.\\n\\n- \\\"even though some features learned at earlier layers of a deep network may no longer provide useful information in later layers. A prior of ...\\\" \\nThe mentioned highway networks and its variants (with/without gate coupling etc.) do have the ability to 'forget', and a stack of highway layers can learn subnetworks of various depths. Why is this not mentioned here (perhaps I misunderstood something)?\\nAdditionally the highway networks paper mentions successful training of 'unrolled LSTM' for very deep networks, which are also explicitly used by \\\"Grid-LSTM\\\". These have forget gates. Additionally, an unrolled/Grid LSTM also has two streams along depth similar to what is proposed in Section 2, so I'm not sure how original this basic motivation is. 
It's okay if it's not original, but connections to existing work must be clear.\\n\\n- \\\"We propose a novel architecture...\\\"\\nInaccurate currently, similar to abstract.\", \"section_2\": \"-From what I can tell, the proposed generalized 2 stream architecture is never actually used. Instead an initialization is used which lets a usual layer implementation behave like the proposed architecture at the beginning of training. This is an incomplete evaluation of the proposal, and makes hard to say how valuable it is.\\n\\nSection 3/4:\\nOverall, apart from relations to highway networks, unrolled LSTM and Grid LSTM, the presented results seem preliminary even for a workshop contribution. This is because while some consistent improvements are shown for the initialization (wide RiR vs. wide ResNet), it's unclear what the reason for this improvement is.\\n\\nOne would expect the motivation 'problems' mentioned in the Introduction to lead to difficulties in optimization (not necessarily generalization). But I doubt that the improved results obtained are due to better optimization, since it is likely that all networks were optimized well (these networks are not too deep). Is it just a purely empirical observation then, that this initialization appears to result in better generalization? If so, this should also be stated very clearly.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
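The reviews of this record describe "ResNet Init" as an initialization that makes an ordinary convolutional layer behave like a residual block at the start of training. One plausible way to realize that idea is to add a centered identity (delta) kernel to a randomly initialized filter bank, as sketched below; the kernel size, scale, and channel counts are assumptions, not the authors' exact scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def resnet_style_conv_init(out_ch, in_ch, k=3, scale=0.05):
    """Random k x k kernels plus a centered identity (delta) kernel on matching
    channels, so the freshly initialized layer computes roughly x + f(x) and a
    plain CNN starts out behaving like a residual block."""
    w = scale * rng.standard_normal((out_ch, in_ch, k, k))
    c = k // 2
    for i in range(min(out_ch, in_ch)):
        w[i, i, c, c] += 1.0   # identity path for channel i
    return w

w = resnet_style_conv_init(20, 20)   # e.g. a 20-filter 3x3 layer
```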
gZ9OMgQWoIAPowrRUAN6
Sequence-to-Sequence RNNs for Text Summarization
[ "Ramesh Nallapati", "Bing Xiang", "Bowen Zhou" ]
In this work, we cast text summarization as a sequence-to-sequence problem and apply the attentional encoder-decoder RNN that has been shown to be successful for Machine Translation. Our experiments show that the proposed architecture significantly outperforms the state-of-the-art model of Rush et al. (2015) on the Gigaword dataset without any additional tuning. We also propose additional extensions to the standard architecture, which we show contribute to further improvement in performance.
[ "text summarization", "rnns", "work", "problem", "attentional", "rnn", "successful", "machine translation", "experiments", "architecture" ]
https://openreview.net/pdf?id=gZ9OMgQWoIAPowrRUAN6
https://openreview.net/forum?id=gZ9OMgQWoIAPowrRUAN6
ICLR.cc/2016/workshop
2016
{ "note_id": [ "YW9k53A8qILknpQqIK3N", "YW94VGrx7TLknpQqIKZm", "4QygZK2KPFBYD9yOFqj8" ], "note_type": [ "review", "review", "comment" ], "note_created": [ 1458218526308, 1458064654382, 1458138090619 ], "note_signatures": [ [ "ICLR.cc/2016/workshop/paper/78/reviewer/12" ], [ "ICLR.cc/2016/workshop/paper/78/reviewer/11" ], [ "~Ramesh_M_Nallapati1" ] ], "structured_content_str": [ "{\"title\": \"Good empirical exploration of seq2seq applied to sentence-summarization\", \"rating\": \"7: Good paper, accept\", \"review\": \"Overall, this paper considers a fairly straightforward application of the seq2seq model to abstractive sentence summarization, with most novel work only hinted at or mostly ignored (almost surely due to the short page limit, but it would be very interesting to pursue in a longer paper). The empirical results are carefully described, and based on the authors' response, seem to be comparable to those of Rush et al., and overall therefore represent a significant boost in accuracy over the Rush model.\", \"some_finer_points\": \"Using the Large Vocabulary \\\"Trick\\\" speeds up training but hurts the abstractive ability of the model. Since the latter is the core focus of this model, it seems worth it to fix this issue. A sampling approach to the full softmax should represent a better solution than the LVT heuristic or the heuristic of extending the target vocabulary with 1-nearest neighbours.\\n\\nIn the words-lvt2k-(2|5)sent model, it is not clear why using 2 sentences is more accurate than 1, but using 5 sentences is less accurate than 2 (do you reverse the input in the encoder?). It would be beneficial to investigate the reason for this by visualizing and analyzing the attention heat maps for the 2 vs 5 sentence models.\\n\\nIt was nice to see that the authors did also consider a more novel hierarchical model based on Li et al. 2015, but it is unfortunate that this approach did not seem to yield better results. Could it be that this approach could actually benefit from using more than two sentences? It's not clear whether this was tried.\\n\\nIt would be useful to see the \\\"src-copy rate\\\" of the gold training data to be able to meaningfully interpret that metric.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Comparison to previous work not possible\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"It's interesting to see that embedding additional annotations such as part-of-speech and NER tags helps the performance.\\nHowever, the evaluation in the paper makes it impossible to compare to previous work. They use the same training data but use a different test set. Why not use the same test set? \\nAlso, the standard metric for summarization is Rouge recall (see the DUC challenge) but the authors chose F1. They also quote previous work but Rush et al. used recall - so it does not sound right to put both results in the same table.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Response to reviewer 11's review\", \"comment\": \"Our test set sample actually did come from the same test set as that of Rush et al, but it is indeed true that the samples themselves are not identical. The reason is that the authors of that paper have not publicly released their test sample. 
Regarding your other comment on Rouge recall, we did compare our full-length Rouge Recall numbers to those of Rush et al in rows 11, 12 and 13 of Table 1 in the submitted version (we confirmed with the authors of Rush et al before submission that this is exactly the metric used by them on Gigaword corpus, and not the limited-length Recall that they used on DUC corpus). We also reported our numbers on Full-length Rouge-F1 out of our extra effort to be fair, since full-length recall tends to favor longer summaries and we didn't want to gain an unfair advantage just in case our summaries were longer than theirs. Please note that on Recall only, our model is in fact even better than theirs.\\n \\nAfter the submission, we also followed up our communication with Rush et al authors, and obtained their exact test sample and did additional experiments to do precise apples-to-apples comparison on both recall and F1, and it is indeed confirmed that our proposed models clearly outperform theirs on both counts. Please refer to the updated version downloadable from https://docs.google.com/viewer?a=v&pid=sites&srcid=ZGVmYXVsdGRvbWFpbnxyYW1lc2huYWxsYXBhdGl8Z3g6MTE5NjdkZGY2NDM3Y2FkMQ for more details on these experiments, but we are also printing the comparison on the same test-set sample below for your convenience:\", \"full_length_recall\": \"Rouge1 Rouge2 Rouge-L\", \"rush_et_al\": \"29.78 11.89 26.97\", \"our_model\": \"32.76 16.17 30.73\\n\\nSincerely,\\nRamesh Nallapati, Bing Xiang and Bowen Zhou.\", \"full_length_f1\": \"Rouge1 Rouge2 Rouge-L\"}" ] }
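This record applies an attentional encoder-decoder RNN to sentence summarization. As a reference point for what the attention step computes, here is a stripped-down dot-product variant that turns a decoder state into a distribution over source positions and a context vector; the actual model likely uses an MLP-based score and learned embeddings, so treat this only as a sketch under those assumptions.

```python
import numpy as np

def attention_context(decoder_state, encoder_states):
    """Dot-product attention: a distribution over source positions and the
    resulting weighted sum of encoder states, used when emitting the next
    summary word."""
    scores = encoder_states @ decoder_state
    w = np.exp(scores - scores.max())
    w = w / w.sum()
    return w @ encoder_states, w

rng = np.random.default_rng(0)
enc = rng.standard_normal((7, 16))    # 7 source positions, 16-dim states
ctx, weights = attention_context(rng.standard_normal(16), enc)
```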
3QxgDrWGXfp7y9wltPqg
Using Encoder-Decoder Convolutional Networks to Segment Carbon Fiber CT
[ "Daniel Sammons", "William P. Winfree" ]
Materials that exhibit high strength-to-weight ratio, a desirable property for aerospace applications, often present unique inspection challenges. Nondestructive evaluation (NDE) addresses these challenges by utilizing methods, such as x-ray computed tomography (CT), that can capture the internal structure of a material without causing changes to the material. Analyzing the data captured by these methods requires a significant amount of expertise and is costly. Since the data captured by NDE techniques often is structured as images, deep learning can be used to automate initial analysis. This work looks to automate part of this initial analysis by applying the efficient encoder-decoder convolutional network at multiple scales to perform identification and segmentation of defects for NDE.
[ "convolutional networks", "nde", "methods", "material", "data", "initial analysis", "high", "ratio" ]
https://openreview.net/pdf?id=3QxgDrWGXfp7y9wltPqg
https://openreview.net/forum?id=3QxgDrWGXfp7y9wltPqg
ICLR.cc/2016/workshop
2016
{ "note_id": [ "5QzBZYXMRHZgXpo7i3xw", "oVg36YYNjcrlgPMRsB6J", "6XAOr7jP1hrVp0EvsEmJ" ], "note_type": [ "review", "review", "review" ], "note_created": [ 1457644970195, 1457570992863, 1457193919840 ], "note_signatures": [ [ "ICLR.cc/2016/workshop/paper/131/reviewer/12" ], [ "ICLR.cc/2016/workshop/paper/131/reviewer/10" ], [ "ICLR.cc/2016/workshop/paper/131/reviewer/11" ] ], "structured_content_str": [ "{\"title\": \"Application paper using a deep encoder / decoder convnet for segmenting carbon fiber computed tomography\", \"rating\": \"3: Clear rejection\", \"review\": \"This paper proposes to use an encoder-decoder architecture to perform segmentation in order to automate nondestructive evaluation of materials.\", \"this_is_unfortunately_not_suitable_as_an_iclr_workshop_contributions_for_several_reasons\": \"1- this is a narrow application with no learning or representation contribution nor much generalization potential\\n2- even within the scope of application, the architecture is proposed without any rationale as to the choices made, nor any baseline with simpler methods or architectures.\", \"3__the_results_of_the_proposed_methods_are_presented_in_a_way_that_is_very_hard_to_interpret\": \"\\\"qualitative\\\" results with just 5 images of segmentation and no quantitative evaluation, the quantitative evaluation is given only on simulated dataset. There are no comparisons across variants within the same system to justify choices made or contributions of pieces of the architecture(e.g. one decoder instead of a pair of decoders, to justify the sentence \\\"We found this method [using 2 decoders, one for the image and one for the segmentation] of regularization crucial for training the encoder-decoder network with entire images as training without it would result in severe overfitting of the training set\\\".\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Interesting application, but not paper not novel nor thorough enough\", \"rating\": \"3: Clear rejection\", \"review\": [\"This paper presents a convolutional encoder-decoder architecture for segmenting materials (carbon fiber). The idea is to use the method of Wang et al. (2015) with a couple of small modifications: (1) use nearest neighbor for upsampling at the decoding stage (2) feed in multiple scales of the input image (3) add a reconstructive layer that predicts the input image (in addition to the output layer that predicts the segmentation mask).\", \"The paper trains this model on simulated data. The claims are that (1) on simulated test data it works well (2) the multiscale trick is beneficial (3) the additional reconstruction cost is useful to prevent over-fitting. A few comments:\", \"I think the \\\"snowball training method\\\" would need some clarification.\", \"The usage of simulated data would need more analysis -- it is unclear how faithful the data statistics are to the real dataset, so some comparative images would be good. 
Describing how the data was generated would be helpful.\", \"There are no comparisons with other methods so it is unclear how good the proposed model actually is in comparison with other alternatives.\", \"Results with/without extra regularization would be helpful, as well as a deeper analysis (metrics such as training and validation errors over time so that we can see the effect of over-fitting etc).\", \"How would the authors train if they had access to some labeled data (which I assume is also a realistic scenario)? There is an obvious way to mix the supervised and unsupervised objectives, but that's not the only way.\", \"All in all, this paper applies some pretty standard tools on a potentially novel set of data, but otherwise falls short in terms of novelty and depth of analysis.\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Limited novelty and lack of detailed experimental analysis\", \"rating\": \"3: Clear rejection\", \"review\": \"This work proposes an encoder-decoder convolutional networks for segmenting medical images. Images with multi-resolutions are encoded by a pretrained encoder, and two decoders are employed to perform segmentation and reconstruction, respectively. The model is trained on a synthesized dataset and one result of real image is shown.\\n\\nThe novelty of the work is quite limited (and lack of detailed analysis) since simply two extra features are added on top of the typical encoder-decoder convolutional networks, namely (1) multi-resolution images and (2) two decoders (one for segmentation and the other for reconstruction) as regularization during training. Feature (1) has been known to be helpful for segmentation as demonstrated by [1, 2, 3] and Chet et al. arXiv 2015. Employing multi-resolutions brings about 2% (in meanIOU), but the experimental details are missing, such as how many resolutions are employed (and some ablation analysis of experimenting with multi-resolutions). Feature (2) is claimed to be useful in the paper, while no analysis experiment at all to support the claim (e.g., what will happen in terms of training speed and performance accuracy experimentally if no regularization employed?). Besides, the encoder-decoder convolutional network framework is typical. A few more baselines, such as Segnet, in the experiment are needed for comparison.\\n\\nEmploying deep neural networks (especially convolutional networks) to segmentation is a hot topic, while the lack of cited references in the paper cannot reflect this. The authors should elaborate more about the related work section. For example, [4] is also relevant to the work from the aspect of convolution-deconvolution network for medical images.\\n\\nThe paper also lacks a few analysis experiments to demonstrate the effectiveness of the proposed model. Specifically, (1) the claim that the usage of nearest-neighbor upsampling is helpful for faster convergence during training, (2) the performance gain of the proposed model is unclear (good simulated dataset or model generalization), and (3) what is the reconstruction result (and how good it is), and is it possible to show its effect on preventing overfitting? Even though the dataset for medical images is relatively small, the authors could try to analyze the models on other larger segmentation datasets or could employ some data augmentation, since I think it is crucial to analyze the proposed model carefully. 
Furthermore, some details are missing, such as how to synthesize the dataset.\\n\\n\\n[1] C. Farabet, C. Couprie, L. Najman, and Y. LeCun. Learning hierarchical features for scene labeling. PAMI 2013.\\n\\n[2] G. Lin, C. Shen, I. Reid, et al. Efficient piecewise training of deep structured models for semantic segmentation, arXiv 2015.\\n\\n[3] P. H. Pinheiro and R. Collobert. Recurrent convolutional neural networks for scene parsing, ICML 2014.\\n\\n[4] O. Ronneberger, P. Fischer, T. Brox. U-Net: Convolutional Networks for Biomedical Image Segmentation. MICCAI 2015.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
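The reviews of this record mention a second decoder that reconstructs the input image alongside the segmentation decoder, used as a regularizer during training. A toy sketch of such a joint objective is below; the cross-entropy plus MSE combination and the weight `lam` are assumptions, since the exact loss weighting is not spelled out here.

```python
import numpy as np

def joint_loss(seg_probs, seg_target, recon, image, lam=1.0):
    """Per-pixel segmentation cross-entropy plus an image-reconstruction term;
    the second decoder's reconstruction error acts as a regularizer."""
    n = seg_target.shape[0]
    ce = -np.mean(np.log(seg_probs[np.arange(n), seg_target] + 1e-12))
    mse = np.mean((recon - image) ** 2)
    return ce + lam * mse

rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(2), size=100)        # 100 pixels, 2 classes
target = rng.integers(0, 2, size=100)
loss = joint_loss(probs, target, rng.random(100), rng.random(100), lam=0.5)
```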
2xwPmERVBtpKBZvXtQnD
RandomOut: Using a convolutional gradient norm to win The Filter Lottery
[ "Joseph Paul Cohen", "Henry Z. Lo", "Wei Ding" ]
Convolutional neural networks are sensitive to the random initialization of filters. We call this The Filter Lottery (TFL) because the random numbers used to initialize the network determine if you will ``win'' and converge to a satisfactory local minimum. This issue forces networks to contain more filters (be wider) to achieve higher accuracy because they have better odds of being transformed into highly discriminative features at the risk of introducing redundant features. To deal with this, we propose to evaluate and replace specific convolutional filters that have little impact on the prediction. We use the gradient norm to evaluate the impact of a filter on error, and re-initialize filters when the gradient norm of their weights falls below a specific threshold. This consistently improves accuracy across two datasets by up to 1.8%. Our scheme RandomOut allows us to increase the number of filters explored without increasing the size of the network. This yields more compact networks which can train and predict with less computation, thus allowing more powerful CNNs to run on mobile devices.
[ "filters", "convolutional gradient norm", "gradient norm", "randomout", "filter lottery randomout", "sensitive", "random initialization", "filter lottery", "tfl" ]
https://openreview.net/pdf?id=2xwPmERVBtpKBZvXtQnD
https://openreview.net/forum?id=2xwPmERVBtpKBZvXtQnD
ICLR.cc/2016/workshop
2016
{ "note_id": [ "L7VjZ5r7BHRNGwArs4mj", "yov343335ir682gwszQx" ], "note_type": [ "review", "review" ], "note_created": [ 1457549323488, 1456640330132 ], "note_signatures": [ [ "~Sander_Dieleman1" ], [ "~Anelia_Angelova1" ] ], "structured_content_str": [ "{\"title\": \"ICLR 2016 paper 123 review 12\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": [\"The paper introduces a heuristic which aims to revive \\\"dead\\\" units in neural networks with the ReLU activation. In such networks, units that are less useful may be abandoned during training because they no longer receive any gradient. This wastes capacity. The proposed heuristic is to detect when this happens and to reinitialize the units in question, so they get another shot at learning something useful.\", \"It's a simple idea that is definitely worth exploring, but I feel the paper needs some work. My main concern is with the experimental evaluation -- although I appreciate that this is a workshop submission about preliminary work, the choice of dataset and model reduce the relevance of the results. The paper would also benefit from proofreading, there were quite a few spelling and grammar mistakes. The second paragraph of section 2 in particular has a lot of redundancy.\", \"simple idea, reasonably clearly explained.\", \"results are reported across large ranges of the hyperparameters (Fig. 3).\", \"the \\\"two datasets\\\" used in the experiments actually seem to be two halves of a single dataset, which is a bit misleading. The authors claim this dataset is well-known, but I don't think it is (at least not in the ICLR community). It is also quite small. I think MNIST would have been a much better choice, since it is the accepted \\\"toy dataset\\\" for experiments like these nowadays. This would make the results much easier to interpret.\", \"the network architecture used for this problem is really small (only two convolutional layers with 20 filters each), and since the point of the heuristic is to reduce the required capacity to solve a given task, evaluating it only on such a tiny model reduces the relevance of the results. I think scaling behaviour is especially important here.\", \"I'm not entirely clear on how the validation was handled, as it is only mentioned that the datasets are split into equal parts for train and test (no separate validation set). I think this is not a huge issue here because results are reported across large ranges of values for the hyperparameters, but of course it is important to be careful with this. It would be useful if this could be clarified in the paper.\"], \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"ICLR 2016 paper 123 review 10\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"review\": [\"The paper proposes an approach to re-set convolutional filters that are apparently not being trained well (and randomly reinitialize them)\", \"It proposes a criterion that is based on the gradients propagated to these filters.\", \"The approach is not entirely novel, since it is well known that very low gradients indicate no learning. 
Regardless, forming the idea to eliminate and reset the filters during the training and evaluating the contributions on the entire performance is the contribution of this work.\", \"The paper is well organized and written clearly.\", \"The approach is motivated well.\", \"The overall approach makes sense.\", \"The advantage is that one can potentially train more compact networks (fewer but more useful filters and less redundancy among them)\", \"The approach itself is a little bit heuristic; also, looking at figure 3, if you cross-validated on the East crater region but tested on the West, there are regions of the parameter space where the gains in one set may be misleading for the other.\", \"Overall the proposed method seems to be useful for the purposes intended.\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
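The abstract of this record states the RandomOut rule directly: score each filter by the gradient norm of its weights and re-draw filters that fall below a threshold. A minimal sketch of that step on a NumPy filter bank; the threshold `tau` and `init_scale` values in the example are illustrative, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def randomout(filters, filter_grads, tau, init_scale=0.05):
    """Re-initialize any convolutional filter whose weight-gradient norm has
    dropped below the threshold tau, giving an abandoned filter a fresh random
    draw without widening the network."""
    reset = []
    for k in range(filters.shape[0]):
        if np.linalg.norm(filter_grads[k]) < tau:
            filters[k] = init_scale * rng.standard_normal(filters[k].shape)
            reset.append(k)
    return filters, reset

filters = 0.05 * rng.standard_normal((20, 1, 5, 5))   # 20 filters, 5x5
grads = 1e-8 * rng.standard_normal(filters.shape)     # near-zero gradients
filters, reset = randomout(filters, grads, tau=1e-6)  # these filters get re-drawn
```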
oVgon01wpfrlgPMRsB1E
Sparse Distance Weighted Discrimination
[ "Boxiang Wang", "Hui Zou" ]
Distance weighted discrimination (DWD) was originally proposed to handle the data piling issue in the support vector machine. In this paper, we consider the sparse penalized DWD for high-dimensional classification. The state-of-the-art algorithm for solving the standard DWD is based on second-order cone programming; however, such an algorithm does not work well for the sparse penalized DWD with high-dimensional data. To overcome this computational difficulty, we develop a very efficient algorithm to compute the solution path of the sparse DWD on a fine grid of regularization parameters. We implement the algorithm in a publicly available R package sdwd. We conduct extensive numerical experiments to demonstrate the computational efficiency and classification performance of our method.
[ "dwd", "algorithm", "data", "sparse", "sparse distance", "discrimination sparse distance", "discrimination distance", "discrimination", "issue", "support vector machine" ]
https://openreview.net/pdf?id=oVgon01wpfrlgPMRsB1E
https://openreview.net/forum?id=oVgon01wpfrlgPMRsB1E
ICLR.cc/2016/workshop
2016
{ "note_id": [ "E8VDZMZp3H31v0m2iDQ8", "WL92zp3pGc5zMX2Kf21p" ], "note_type": [ "review", "comment" ], "note_created": [ 1458060472872, 1458096725965 ], "note_signatures": [ [ "ICLR.cc/2016/workshop/paper/79/reviewer/11" ], [ "~Boxiang_Wang1" ] ], "structured_content_str": [ "{\"title\": \"a potentially interesting paper, but with limited novelty\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"The paper proposes a sparse version of the distance weighted discrimination principle of Marron et al., 2007.\\nIt consists of replacing the l2 regularization by the l1-norm.\\n\\nNovelty\\nThe DWD formulation of Marron et al., 2007 is not well known and has not received much attention. As far as the reviewer knows, there is no sparse version of this formulation. Therefore, we can argue that the paper has some novelty, but in a field crowded with papers about sparsity, this novelty is moderate.\\n\\nClarity\", \"the_paper_is_rather_unclear_for_several_reasons\": [\"the formulation of the linear SVM is not the standard one. Could the authors provide a reference where this formulation appears? It does not seem obvious that it is equivalent to the standard C-SVM (if it is). The formulation is also non-convex with a quadratic equality constraints, which adds to the confusion.\", \"The reviewer could not understand the link between the formulation (2.1) and the DWD formulation of the previous page. Are they equivalent when replacing the l1-norm by the l2-regularization?\", \"Plotting the function V(u) could be helpful to visualize what the loss function is really doing (instead of Fig 1 for instance). It seems that a possible interpretation is simply a smoothed version of the hinge loss.\", \"Significance and Quality\", \"The loss V in the objective 2.1 is smooth. Therefore, we have access to a large literature about composite minimization to solve such problems (see for instance Bach et al. Optimization with sparsity-inducing penalties, 2012) and Section 3 seems to reinvent the wheel.\", \"To conclude, the papers lacks of clarity, has moderate novelty and does not make significant enough contributions. My recommendation for this ICLR venue is reject.\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Response to \\\"a potentially interesting paper, but with limited novelty\\\"\", \"comment\": \"We sincerely thank the anonymous referee for the careful and constructive review of the manuscript. We address the comments and questions in the following. The revised extended abstract can be seen in\", \"http\": \"//arxiv.org/pdf/1501.06066v1.pdf\\n\\nWe deeply appreciate the reviewer for further considerations.\", \"comment_1\": \"the formulation of the linear SVM is not the standard one. Could the authors provide a reference where this formulation appears? It does not seem obvious that it is equivalent to the standard C-SVM (if it is). The formulation is also non-convex with a quadratic equality constraints, which adds to the confusion.\", \"response\": \"First, this paper is the first work to solve DWD based on its loss function, since all the existing DWD algorithms do not work well for high-dimensional data. Second, the loss function does not have second-order derivative, so it cannot be solved by other commonly used algorithms, for example, Newton's method. 
We solve DWD by developing a novel algorithm, the GCD algorithm, which combines coordinate descent and the proximal gradient method.\\n\\nOverall, DWD is an SVM-like classification method. In this work, we present the sparse formulation, which makes DWD available for high-dimensional classification. We also derive a novel and efficient algorithm, which is completely different from the existing work.\", \"comment_2\": \"The reviewer could not understand the link between the formulation (2.1) and the DWD formulation of the previous page. Are they equivalent when replacing the l1-norm by the l2-regularization?\", \"comment_3\": \"Plotting the function V(u) could be helpful to visualize what the loss function is really doing (instead of Fig 1 for instance). It seems that a possible interpretation is simply a smoothed version of the hinge loss.\", \"comment_4\": \"The loss V in the objective 2.1 is smooth. Therefore, we have access to a large literature about composite minimization to solve such problems (see for instance Bach et al. Optimization with sparsity-inducing penalties, 2012) and Section 3 seems to reinvent the wheel.\"}" ] }
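For readers puzzling over the reviewer's question about V(u) above: the commonly cited form of the generalized DWD loss is linear for small margins and decays like 1/u^q for large ones, which is why it reads as a smoothed hinge. The snippet below evaluates that loss and the soft-thresholding operator that underlies coordinate-descent solvers for l1 penalties; it is an illustration under these assumptions, not the GCD implementation in the sdwd package.

```python
import numpy as np

def dwd_loss(u, q=1.0):
    """Generalized DWD loss V_q(u): 1 - u for small margins, q^q/((q+1)^(q+1) u^q) otherwise.
    For q = 1 this is 1 - u when u <= 1/2 and 1/(4u) when u > 1/2 (a smoothed hinge)."""
    u = np.asarray(u, dtype=float)
    thresh = q / (q + 1.0)
    const = q ** q / (q + 1.0) ** (q + 1.0)
    return np.where(u <= thresh, 1.0 - u, const / np.maximum(u, 1e-12) ** q)

def soft_threshold(z, lam):
    """Proximal operator of the l1 penalty; the elementary step of coordinate descent."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

print(dwd_loss([-1.0, 0.0, 0.25, 0.5, 1.0, 2.0]))  # loss along the margin axis
print(soft_threshold(0.7, 0.3))                    # -> 0.4
```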
L7VOzGWB5hRNGwArs4BJ
Autoencoding for Joint Relation Factorization and Discovery from Text
[ "Diego Marcheggiani", "Ivan Titov" ]
We present a method for unsupervised open-domain relation discovery. In contrast to previous (mostly generative and agglomerative clustering) approaches, our model relies on rich contextual features and makes minimal independence assumptions. The model is composed of two parts: a feature-rich relation extractor, which predicts a semantic relation between two entities, and a factorization model, which reconstructs arguments (i.e., the entities) relying on the predicted relation. We use a variational autoencoding objective and estimate the two components jointly so as to minimize errors in recovering arguments. We study factorization models inspired by previous work in relation factorization. Our models substantially outperform the generative and agglomerative-clustering counterparts and achieve state-of-the-art performance.
[ "joint relation factorization", "discovery", "text", "generative", "entities", "arguments", "unsupervised", "relation discovery", "contrast", "previous" ]
https://openreview.net/pdf?id=L7VOzGWB5hRNGwArs4BJ
https://openreview.net/forum?id=L7VOzGWB5hRNGwArs4BJ
ICLR.cc/2016/workshop
2016
{ "note_id": [ "p8j4MO1NVCnQVOGWfpJg", "OM0mgQ0QWSp57ZJjtNrj", "0Yr4A7WwESGJ7gK5tR3v" ], "note_type": [ "review", "review", "review" ], "note_created": [ 1457660662452, 1457551869046, 1458063849894 ], "note_signatures": [ [ "ICLR.cc/2016/workshop/paper/87/reviewer/10" ], [ "ICLR.cc/2016/workshop/paper/87/reviewer/12" ], [ "ICLR.cc/2016/workshop/paper/87/reviewer/11" ] ], "structured_content_str": [ "{\"title\": \"This paper describes an elegant and general method for unsupervised relation extraction from text.\", \"rating\": \"7: Good paper, accept\", \"review\": \"In this paper, the authors introduce a new method for unsupervised relation extraction from textual data. In their approach, the relations are treated as latent variables in an auto-encoder like model.\", \"the_proposed_model_is_composed_of_two_parts\": \"(1) an encoder, which predicts a relation between two entities, given a sentence that contains these two entities. This encoder is a feature rich discriminative classifier. In the paper, the authors propose to use a log-linear model with handcrafted features. As the authors point out, any discriminative model can be used, as long as the relation posteriors and gradients can be easily computed.\\n(2) a decoder, which reconstructs the entities-relation triplet. More precisely, given the relation and one of the entity (and nothing else), the goal of the decoder is to predict the other entity. As for the encoder, many different models can be used as a decoder (e.g. matrix factorization based model). The authors propose to use the RESCAL model.\\n\\nThe encoder and the decoder are then trained jointly, by minimizing the prediction error of one entity (given the other one) and by marginalizing over the relations. The authors performed experiments on the New York Times corpus, showing that their approach outperforms a generative model for unsupervised relation extraction.\\n\\nI think that the method described in this paper is interesting, elegant and general (as pointed by the authors, many different models could be used as encoder or decoder). The authors demonstrate on a large dataset (2 millions entity pairs), that their approach is competitive with existing generative models for this task. The paper is clear and well written. I would have liked to see examples of extracted relations and/or an error analysis of the model.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"rating\": \"7: Good paper, accept\", \"review\": \"This paper introduces a clever idea for unsupervised relation extraction (relation clustering) from text: the best relation r (drawn from a fixed set R) between two entities in a particular textual mention is the relation that best makes it possible to recover one entity from the other. The paper presents a couple of variants on this idea, introduces a model (based on a discrete-state variational autoencoder) that implements this idea, and shows solidly state-of-the-art performance on the NYT corpus.\\n\\nI think that the core idea is intuitive but non-obvious, and that the paper evaluates it well. 
I'd like to see more discussion of the model's performance (i.e., error analysis) and more discussion of how the encoder portion of the model is constructed (mostly: what features does it have access to, and how much does this matter?), but the paper seems sufficient as is for a workshop contribution.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"An interesting encoder-decoder approach to unsupervised relation extraction\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"This paper proposes approaching unsupervised, open-domain relation extraction via an encoder-decoder model, trained end-to-end. The encoder is never described very carefully, but is basically a conventional, human-designed categorical features soft-max classifier, from what I can tell. The decoder is then asked to predict one entity given the relation and the other entity in a continuous space model: the entities are embedded, and the relations are represented as a matrix. Three models are investigated. The work outperforms previous work by Yao et al. (2011, 2012).\\n\\nI think the general approach is novel, and interesting - the encoder-decoder factorization, and the way the decoder is trained by masking and asking it to predict one entity is novel, good, and opens up a space for others to try things. As such, I think it is an okay workshop contribution, worthy of acceptance.\", \"some_aspects_of_the_paper_are_not_so_good\": \"- The description of the experiment is very vague: are they using the same corpus and filtering as the Yao et al. and subsequent papers? I suspect not, since they say they end up with about 2 million entity pairs, whereas Yao et al. report 2.5 million. Why aren't they using the same corpus? What has been left out? You can't tell. What forms of preprocessing are applied? You can't tell. \\n - No qualitative output from the experiments is given. Other papers in this area typically show examples of the relation clusters induced, and the sets of entity arguments they pick up. Here we are completely in the dark except for seeing an F1 number.\\n - The paper seems to limit itself in comparisons to 2011-era work, and barely considers work since then. That is, looking at just work involving Limin Yao (their chosen point of comparison), this paper:\\n o Shows a comparison to the results of Yao et al. (2011), but doesn't really explain why their results for the methods of Yao et al. (2011) are 10 F1 points below the results reported by Yao et al. on a \\\"similar\\\" corpus, except for the annotation \\\"(our feats)\\\", where the \\\"feats\\\" used are never explained, unlike in Yao et al. 2011.\\n o Shows a comparison to one method (HAC) used as a baseline in Yao et al. (2012), but doesn't cover the main result of that paper, which does 6 F1 points better, presumably meaning that it is about equal to the results of this paper - the sense clustering of that paper could be argued to be quite analogous to what the encoder-decoder of this paper should be motivated to do to perform well.\\n o Except for a token cite at the beginning, there is no discussion of the \\\"universal schema\\\" work of Riedel, Yao, McCallum and Marlin (2013), which many would now regard as the standard go-to comparison for this line of work. Unfortunately, they use a different evaluation metric, so I can't do an easy comparison - but the authors of this paper could have - but they too show results significantly above those of Yao et al.
2011.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
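The decoder summarized in the reviews above embeds entities as vectors and relations as matrices, and reconstructs one argument of a triple from the relation and the other argument (a RESCAL-style bilinear score). The toy numpy sketch below shows that scoring and argument-ranking step; the dimensions are arbitrary and this is not the authors' trained model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_entities, n_relations, dim = 1000, 30, 50
E = rng.normal(scale=0.1, size=(n_entities, dim))        # entity embeddings
R = rng.normal(scale=0.1, size=(n_relations, dim, dim))  # one matrix per relation

def score(e1, r, e2):
    """RESCAL-style bilinear score e1^T W_r e2 (toy illustration)."""
    return E[e1] @ R[r] @ E[e2]

def rank_second_argument(e1, r):
    """Rank all candidate entities for the masked argument given (e1, r)."""
    scores = E @ (R[r].T @ E[e1])   # equals score(e1, r, e) for every entity e
    return np.argsort(-scores)

print(score(3, 7, 42))
print(rank_second_argument(3, 7)[:5])   # top-5 candidate entity ids
```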
r8lrDJ89Pf8wknpYt5zq
Understanding Visual Concepts with Continuation Learning
[ "William F. Whitney", "Michael Chang", "Tejas Kulkarni", "Joshua B. Tenenbaum" ]
We introduce a neural network architecture and a learning algorithm to produce factorized symbolic representations. We propose to learn these concepts by observing consecutive frames, letting all the components of the hidden representation except a small discrete set (gating units) be predicted from the previous frame, and letting the factors of variation in the next frame be represented entirely by these discrete gated units (corresponding to symbolic representations). We demonstrate the efficacy of our approach on datasets of faces undergoing 3D transformations and Atari 2600 games.
[ "visual concepts", "continuation", "symbolic representations", "units", "neural network architecture", "learning algorithm", "concepts", "consecutive frames", "components", "hidden representation" ]
https://openreview.net/pdf?id=r8lrDJ89Pf8wknpYt5zq
https://openreview.net/forum?id=r8lrDJ89Pf8wknpYt5zq
ICLR.cc/2016/workshop
2016
{ "note_id": [ "yovVL3AYjsr682gwszRO", "MwV0Wnnljfqxwkg1t7Gw" ], "note_type": [ "review", "review" ], "note_created": [ 1457498613253, 1458078988963 ], "note_signatures": [ [ "ICLR.cc/2016/workshop/paper/171/reviewer/10" ], [ "ICLR.cc/2016/workshop/paper/171/reviewer/11" ] ], "structured_content_str": [ "{\"title\": \"Interesting and novel approach to disentanglement of feature learning - toward symbolic representation learning.\", \"rating\": \"7: Good paper, accept\", \"review\": \"In recent years, many generative models have been proposed to learn distributed representations automatically from data. One criticism of these models are that they produce representations that are \\\"entangled\\\": no single component of the representation vector has meaning on its own. This paper proposes a novel neural architecture and associated learning algorithm for learning disentangled representations. The paper demonstrates the network learning visual concepts on pairs of frames from Atari Games and rendered faces.\\n\\nNovelty - The proposed architecture uses a gating mechanism to select an index to hidden elements that store the \\\"unpredictable\\\" parts of the frame into a single component. The architecture bears some similarity to other \\\"gated\\\" architectures, e.g. relational autoencoders, three-way RBMs, etc. in that it models input-output pairs and encodes transformations. However, these other architectures do not use an explicit mechanism to make the network model \\\"differences\\\". This is novel. The paper claims that the objective function is novel: \\\"given the previous frame x_{t-1} of a video and the current frame x_t, reconstruct the current frame x_t. This is essentially the same objective as relational autoencoders (Memisevic) and similar to gated and conditional RBMs which have been used to model pairs of frames. Therefore I would recommend de-emphasizingthe novelty of the objective.\\n\\nClarity - The paper is well written and clear.\\n\\nSignificance - This paper opens up many possibilities for explicit mechanisms of \\\"relative\\\" encodings to produce symbolic representations. There isn't much detail in the results (it's an extended abstract!) but I think the work is exciting and I'm looking forward to reading a follow up paper.\\n\\nQuality - Based on the above, I would say that this is a high-quality workshop paper in terms of the ideas and definitely of interest to the ICLR audience.\\n\\nPros \\n- Attacks a major problem of current generative models (entanglement) \\n- Proposes a simple yet novel solution \\n- Results show visually that the technique seems to work on two non-trivial datasets\\n\\nCons \\n- Experiments are really preliminary - no quantitative results \\n- Doesn't mention mechanisms like dropout which attempt to prevent co-adaptation of features\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"This paper proposes a method to learn factorial representations. Given two inputs, it learns by reconstructing the second input using the representation of the first one, except that one component is taken from the features of the second input using gating variables.\", \"rating\": \"7: Good paper, accept\", \"review\": \"quality:\\noverall the paper is clearly written and the toy experiments demonstrate the claims. The topic is very relevant to this venue.\", \"clarity\": \"good.\", \"originality\": \"medium. 
This work is related to Memisevic's work on learning relations between pairs of images. The mechanism to compute the transformation is different, but the paper should comment and discuss this.\", \"significance_of_this_work\": \"although premature as a publication, this work can already bring good discussion on how to learn factorial representations.\", \"pros\": [\"very relevant topic\", \"somewhat novel model\"], \"cons\": [\"missing references\", \"toy nature of the experiments\"], \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
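The gating idea summarized in the abstract and reviews above — carry most hidden components over from the previous frame and let only a small gated subset come from the current frame's encoding — can be illustrated on toy vectors. The soft one-hot gate and the combination rule below are assumptions made for illustration, not the paper's architecture.

```python
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def gated_combine(h_prev, h_curr, gate_logits):
    """Keep h_prev everywhere except the (softly) selected components,
    which are taken from the current frame's encoding. Illustrative rule only."""
    g = softmax(gate_logits)                 # soft one-hot over hidden components
    return (1.0 - g) * h_prev + g * h_curr

rng = np.random.default_rng(0)
h_prev, h_curr = rng.normal(size=8), rng.normal(size=8)
gate_logits = np.full(8, -4.0)
gate_logits[2] = 4.0                         # component 2 carries the change
h_t = gated_combine(h_prev, h_curr, gate_logits)
print(np.round(h_t - h_prev, 3))             # only component 2 moves noticeably
```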
2xwPBxZoQTpKBZvXtQng
Seq-NMS for Video Object Detection
[ "Wei Han", "Pooya Khorrami", "Tom Le Paine", "Prajit Ramachandran", "Mohammad Babaeizadeh", "Honghui Shi", "Jiana Li", "Shuicheng Yan", "Thomas S. Huang" ]
Video object detection is challenging because objects that are easily detected in one frame may be difficult to detect in another frame within the same clip. Recently, there have been major advances in object detection in a single image. These methods typically contain three phases: (i) object proposal generation, (ii) object classification, and (iii) post-processing. We propose a modification of the post-processing phase that uses high-scoring object detections from nearby frames to boost the scores of weaker detections within the same clip. We show that our method obtains superior results to state-of-the-art single-image object detection techniques. Our method placed $3^{rd}$ in the video object detection (VID) task of the ImageNet Large Scale Visual Recognition Challenge 2015 (ILSVRC2015).
[ "video object detection", "frame", "clip", "objects", "difficult", "major advances", "object detection", "single image", "methods" ]
https://openreview.net/pdf?id=2xwPBxZoQTpKBZvXtQng
https://openreview.net/forum?id=2xwPBxZoQTpKBZvXtQng
ICLR.cc/2016/workshop
2016
{ "note_id": [ "WL9GgJOEMs5zMX2Kf2Kz", "WL93AQ5rgh5zMX2Kf2K8", "jZ9yVQXrxsnlBG2Xfzj1" ], "note_type": [ "review", "review", "review" ], "note_created": [ 1456700067102, 1457129129279, 1458095625137 ], "note_signatures": [ [ "ICLR.cc/2016/workshop/paper/127/reviewer/10" ], [ "ICLR.cc/2016/workshop/paper/127/reviewer/11" ], [ "ICLR.cc/2016/workshop/paper/127/reviewer/12" ] ], "structured_content_str": [ "{\"title\": \"The authors utilize a reasonable post-processing approach to video detection to improve object detection accuracy, but the paper is not a sufficient contribution to warrant publication.\", \"rating\": \"3: Clear rejection\", \"review\": \"The authors propose a technique to improve video object detection by applying a post-processing technique to single frame detections. The method falls into the standard \\u201ctracking by detection\\u201d paradigm and achieves a boost over single frame detection on the video detection task. The authors achieved 3rd place in the ILSVRC video object detection challenge, scoring 48.7 mAP versus the winners who achieve 67.8 mAP. Tracking by detection is a well studied problem (and the authors give a few references). This approach falls squarely into that line of work but the core object detector is modernized to be a CNN. Novelty is low, it\\u2019s an application of a fairly standard framework to a modern object detector. While the method did ok in the video object detection challenge, it lost to the first place winner by a huge margin (~20 mAP absolute difference). While a few older \\u201ctracking by detection\\u201d papers are given a full submission would require a much more thorough comparison and literature review. For example see Burgos-Artizzu et al. in BMVC13 on \\u201cMerging Pose Estimates Across Space and Time\\u201d who also apply a generalized NMS to video. Of course there exists numerous other such papers. Regardless, as it stands, it is unclear if there is any novel algorithmic contribution. It may be that the post-processing works better than previous approaches, but there are no comparisons to show this is the case. This was definitely a reasonable contribution to the challenge but does not meet the bar for publication.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"The paper proposes a simple and reasonable post-processing step for non-max suppression of detections in a video.\", \"rating\": \"3: Clear rejection\", \"review\": \"The paper proposes a dynamic programming based inference scheme for non-max suppression of detections in a video. This allows high-scoring object detections\\nfrom nearby frames to boost scores of weaker detections within the same clip, to improve the final video object detection performance.\\n\\nThe proposed idea has already been used in past work (for example, almost exactly in Finding Action Tubes, Gkioxari et al. CVPR 2015, I am sure there are numerous other papers using the same idea). 
I don't find the ideas presented in the paper to be at all novel, and don't think the paper makes a contribution substantial enough for publication.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Review of Seq-NMS for Video object detection\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"This short paper offers a useful, and straightforward, modification to R-CNN style processing to improve video tracking / temporal detection. The idea may have limited novelty, and is not likely to be of broad interest to the ICLR community, since the proposed method leverages rather classic dynamic programming schemes. It does not appear that there is a novel approach, e.g., to learn the model end-to-end from data. Unfortunately I cannot recommend acceptance to the workshop program, given that it is rather selective this year.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
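The post-processing step debated above links per-frame detections across adjacent frames by overlap and selects the highest-scoring sequence with dynamic programming before rescoring and suppression. The numpy sketch below shows only that DP step, under simple assumptions (an IoU linking threshold and a best chain that ends in the last frame); it is not the authors' pipeline.

```python
import numpy as np

def iou(a, b):
    """IoU of two boxes in [x1, y1, x2, y2] format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def best_sequence(frames, iou_thr=0.5):
    """frames: list over time of (boxes Nx4, scores N).
    Returns the max-total-score chain of IoU-linked boxes via dynamic programming."""
    best, ptrs = [], []
    for t, (boxes, scores) in enumerate(frames):
        cur = scores.astype(float)
        ptr = np.full(len(scores), -1)
        if t > 0:
            prev_boxes, _ = frames[t - 1]
            for i, b in enumerate(boxes):
                links = [j for j, pb in enumerate(prev_boxes) if iou(b, pb) >= iou_thr]
                if links:
                    j = max(links, key=lambda k: best[t - 1][k])
                    cur[i] += best[t - 1][j]
                    ptr[i] = j
        best.append(cur)
        ptrs.append(ptr)
    t = len(frames) - 1
    i = int(np.argmax(best[t]))
    chain = [(t, i)]
    while t > 0 and ptrs[t][i] >= 0:
        i = int(ptrs[t][i])
        t -= 1
        chain.append((t, i))
    return chain[::-1]   # rescoring (e.g. averaging) and suppression would follow

f0 = (np.array([[0, 0, 10, 10], [20, 20, 30, 30]]), np.array([0.9, 0.6]))
f1 = (np.array([[1, 1, 11, 11]]), np.array([0.8]))
print(best_sequence([f0, f1]))   # -> [(0, 0), (1, 0)]
```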
wVqzLo88YsG0qV7mtLq7
Generative Adversarial Metric
[ "Daniel Jiwoong Im", "Chris Dongjoo Kim", "Hui Jiang", "Roland Memisevic" ]
We introduce a new metric for comparing adversarial networks quantitatively.
[ "new metric", "adversarial networks" ]
https://openreview.net/pdf?id=wVqzLo88YsG0qV7mtLq7
https://openreview.net/forum?id=wVqzLo88YsG0qV7mtLq7
ICLR.cc/2016/workshop
2016
{ "note_id": [ "VAVXGMoqPFx0Wk76TAyB", "D1VglwBBmU5jEJ1zfEAR", "ANYR86k0mhNrwlgXCqYk" ], "note_type": [ "review", "review", "review" ], "note_created": [ 1457479611069, 1456623481537, 1457567552330 ], "note_signatures": [ [ "ICLR.cc/2016/workshop/paper/168/reviewer/11" ], [ "ICLR.cc/2016/workshop/paper/168/reviewer/12" ], [ "ICLR.cc/2016/workshop/paper/168/reviewer/10" ] ], "structured_content_str": [ "{\"rating\": \"3: Clear rejection\", \"review\": \"This paper is a section copied from the authors' full paper http://arxiv.org/pdf/1602.05110v2.pdf. The authors don't change a word.\", \"summary\": \"This paper describes the generative adversarial metric, which defines a score on test data and a score on samples using two discriminators and then chooses a winner. The score is the error of the discriminator.\", \"limitation\": \"For partial comparisons, e.g. comparing to a VAE, it requires obtaining a good D for the GANs first, and it may need evolutionary training to find a good baseline model. Also, D has seen G's samples during GAN training but has not seen the VAE's samples, so directly using D's score to compare a GAN against other models is unfair.\", \"reject_reason\": \"Nothing different from the full paper's section.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"a good and simple way to quantitatively assess GANs\", \"rating\": \"9: Top 15% of accepted papers, strong accept\", \"review\": \"Up till now, all work on GANs has had no proper metric for quality, and model selection has been via human eyeballing.\\nThe paper proposes a simple metric to compare the performance of GANs. Pairs of GANs are trained independently, and then at test time are pitted against one another (G of GAN1 against D of GAN2 and vice versa). In this validation process, we can look at the ratio of discriminator successes to rank models.\\n\\nThe idea is simple and good. It has been explained clearly. It is very significant for all research in the GAN framework, which has seen a lot of growth.\\n\\nThe only concern I have for the area chair is that there's a chance of double-publishing the same idea -- the full paper is already out, which has been submitted to ICML. The full paper was pushed to arxiv before the deadline of the ICLR workshop http://arxiv.org/abs/1602.05110 .\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"rating\": \"3: Clear rejection\", \"review\": \"This paper introduces a new method to compare generative models, especially those trained with the GAN framework. The idea is that after training each model independently, we compute the error rate achieved by the discriminator of one model against the generator of the other model, and use this error rate to compare the models.\", \"pros\": \"=====\\n1) This paper tries to solve the important problem of evaluating generative models.\\n\\n2) I think the idea of the paper is interesting for comparing different models trained with the GAN framework.\", \"cons\": \"=====\\n1) I don't think this idea is applicable to any other generative model such as VAE. The discriminator of a GAN typically is good at detecting sharp edges and local structure, which is why the generator of a GAN always learns to generate sharp images. However, as opposed to GAN, the generated images of a VAE are often blurry images which contain some global structure.
So I think it would be very hard for the VAE generator to fool a GAN discriminator into thinking that the image is real, and it would therefore not be fair to use this evaluation method to compare a GAN model with a VAE one.\\n\\n2) This paper seems to be a copied section of an ICML submission by the same authors with no additional contribution.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
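The metric discussed above pits two independently trained GANs against each other: each discriminator is evaluated on the other generator's samples and on held-out test data, and ratios of error rates decide the winner. The toy sketch below uses stand-in models, and the exact ratio convention and decision rule are assumptions pieced together from the reviews rather than the paper's definitions.

```python
import numpy as np

def error_rate(D, samples, label):
    """Fraction of samples the discriminator misclassifies (label 1 = real, 0 = fake)."""
    preds = (D(samples) > 0.5).astype(int)
    return float(np.mean(preds != label))

def gam_ratios(D1, G1, D2, G2, x_test, n=1000):
    """'Battle' between two trained GANs: each D judges real test data and the
    opponent's samples. The ratio convention here is an assumption for illustration."""
    r_test = error_rate(D1, x_test, 1) / max(error_rate(D2, x_test, 1), 1e-12)
    r_sample = error_rate(D1, G2(n), 0) / max(error_rate(D2, G1(n), 0), 1e-12)
    return r_test, r_sample   # roughly: r_sample < 1 with r_test ~ 1 favours model 1

# stand-in discriminators/generators for illustration only
rng = np.random.default_rng(0)
D1 = lambda x: 1.0 / (1.0 + np.exp(-x.mean(axis=1)))
D2 = lambda x: 1.0 / (1.0 + np.exp(-0.5 * x.mean(axis=1)))
G1 = lambda n: rng.normal(1.0, 1.0, size=(n, 10))
G2 = lambda n: rng.normal(0.5, 1.0, size=(n, 10))
x_test = rng.normal(1.0, 1.0, size=(1000, 10))
print(gam_ratios(D1, G1, D2, G2, x_test))
```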
OM0jjYW3BHp57ZJjtNEO
Scale Normalization
[ "Henry Z Lo", "Kevin Amaral", "Wei Ding" ]
One of the difficulties of training deep neural networks is caused by improper scaling between layers. These scaling issues introduce exploding / vanishing gradient problems, and have typically been addressed by careful variance-preserving initialization. We consider this problem as one of preserving scale, rather than preserving variance. This leads to a simple method of scale-normalizing weight layers, which ensures that scale is approximately maintained between layers. Our method of scale-preservation ensures that forward propagation is impacted minimally, while backward passes maintain gradient scales. Preliminary experiments show that scale normalization effectively speeds up learning, without introducing additional hyperparameters or parameters.
[ "layers", "scale", "difficulties", "deep neural networks", "improper scaling", "issues", "gradient problems", "careful", "initialization" ]
https://openreview.net/pdf?id=OM0jjYW3BHp57ZJjtNEO
https://openreview.net/forum?id=OM0jjYW3BHp57ZJjtNEO
ICLR.cc/2016/workshop
2016
{ "note_id": [ "P7Vnw4z9NfKvjNORtJjO", "0YrwmZ2pJTGJ7gK5tRW1", "3QxzglN5nsp7y9wltPD0" ], "note_type": [ "review", "review", "review" ], "note_created": [ 1457629946962, 1457607123405, 1457631018177 ], "note_signatures": [ [ "ICLR.cc/2016/workshop/paper/109/reviewer/12" ], [ "ICLR.cc/2016/workshop/paper/109/reviewer/11" ], [ "ICLR.cc/2016/workshop/paper/109/reviewer/10" ] ], "structured_content_str": [ "{\"title\": \"scale normalization, not thorough enough\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"This appears to be a not very thorough set of experiments attempting to keep network weight matrices length-preserving. There have been many papers along these lines, e.g. arxiv.org/abs/1602.07714, or indeed explicitly length-preserving weights like in arxiv.org/abs/1511.06464\\n\\nIn any case, this looks like very little work on what could be a promising idea. It's just hard to tell from so little data.\\n\\nI'm not sure what the standards are for the workshop papers; I guess they are lower than the conference's by definition. Perhaps this is acceptable.\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}", "{\"title\": \"Review for Scale Normalization\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"A few points about the idea:\\n (a) Depending on the dataset and minibatch size, E(s) might not be very stable. This you can see in the peaks of the training loss. E(s) should really be computed over the dataset. If that is expensive, then maybe compute a moving average (even if W changes from step to step, it should not drastically change). Regardless, the instability introduced by this normalization scheme is somewhat worrisome.\\n (b) The fact that it hurts on the test set, leading to a lower score, is also a bit worrisome. IMHO what is going on is that the algorithm is too greedy, overfitting the current minibatch early in training, making me think again about E(s) not being estimated properly. Regardless, this suggests that some important detail is missing from its current form.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Review\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"Interesting idea. Although not totally novel, as pointed out in other comments, perhaps this new form of scale preservation is better or more efficient, although the results from what appears to be a *single experiment*, on MNIST, make it hard to pass much judgement.\\n\\nIntuitively, since rescaling is a hard constraint, gradient descent might have quite some trouble adjusting towards convergence, which might be why in the end the unnormalized model gets a better test score.\\n\\nThe reasoning is interesting but it is not backed up by enough empirical evidence.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
q7kqBkL33f8LEkD3t7X9
Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning
[ "Christian Szegedy", "Sergey Ioffe", "Vincent Vanhoucke" ]
Very deep convolutional networks have been central to the largest advances in image recognition performance in recent years. One example is the Inception architecture that has been shown to achieve good performance at relatively low computational cost. Recently, the introduction of residual connections in conjunction with a more traditional architecture has yielded state-of-the-art performance in the 2015 ILSVRC challenge; its performance was similar to the latest generation Inception-v3 network. This raises the question of whether there is any benefit in combining the Inception architecture with residual connections. Here we give clear empirical evidence that training with residual connections accelerates the training of Inception networks significantly; however, when fully trained, the final quality of the non-residual Inception variants seems to be close to that of the residual versions. We present several new streamlined architectures for both residual and non-residual Inception networks. With an ensemble of three residual and one pure Inception-v4, we achieve 3.08% top-5 error on the test set of the ImageNet classification (CLS) challenge.
[ "residual connections", "impact", "inception architecture", "performance", "inception networks", "residual", "deep convolutional networks", "central", "largest advances", "image recognition performance" ]
https://openreview.net/pdf?id=q7kqBkL33f8LEkD3t7X9
https://openreview.net/forum?id=q7kqBkL33f8LEkD3t7X9
ICLR.cc/2016/workshop
2016
{ "note_id": [ "q7kOZk5zrc8LEkD3t7r7", "OM0Wyn6kAcp57ZJjtN5R" ], "note_type": [ "official_review", "review" ], "note_created": [ 1457722821433, 1457708161310 ], "note_signatures": [ [ "~Andrew_Rabinovich1" ], [ "~Tom_Sercu1" ] ], "structured_content_str": [ "{\"title\": \"Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning\", \"rating\": \"7: Good paper, accept\", \"review\": [\"The paper shows good results and has interesting insights. It is a bit raw in presentation. Finally, and most importantly, we've moved away from feature engineering with deep learning, but are now doing model engineering. For example:\", \"1) formulated principles for constructing nets\", \"avoid bottlenecks on early layers,\", \"spatial aggregation (convolutions) can be done over a lower dimensional embedding because adjacent units are highly correlated => why bottlenecks inside inceptions.\", \"2) factorization\", \"5x5 factorized with two 3x3, 3x3 everywhere\", \"3x3 factorized with 1x3 and 3x1 in the middle of the network\", \"added inception with conv 7x7 and factorized to a series of 1x7 and 7x1. It behaves well only for low-dim grids at the end of the network.\", \"Ideally, we would have algorithms that help design models for a specific problem and make these decisions from data. Furthermore, inception was engineered for ImageNet, and is likely starting to overfit. It is not clear if all of the design decisions at this stage are actually generalizable to other problems.\"], \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Significant and inevitable work, a bit sloppily and prematurely presented.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"review\": \"This paper presents a combination of the inception architecture\\nwith residual networks. This is done by adding a shortcut connection\\nto each inception module. This can alternatively be seen as a resnet where\\nthe 2 conv layers are replaced by a (slightly modified) inception module.\\nThe paper (claims to) provide results against the hypothesis that adding residual\\nconnections improves training; rather, increasing the model size is what makes the difference.\", \"pros\": \"First off, a combination of inception & resnets is kind of unavoidable.\\nThis work is presenting an interesting combination of two strong and impactful models,\\nand is in that sense quite significant and (moderately) novel.\\nThe \\\"fair\\\" comparison in terms of computational budget is insightful and I\\ntend to agree with the authors' claims. However see the first 2 cons below.\\n\\nMy remarks (count them as cons or suggestions for improvement):\\n+ The resnet-151 is (probably?) still quite a lot deeper than inception-resnet-v2,\\n so it feels too early to conclude that res connections wouldn't give an edge\\n when making these inception-resnets even deeper.\\n+ I find Figure 3 hard to believe / worthy of more discussion:\\n in inception-v3 vs inception-resnet-v1 the shortcut connections\\n make a massive difference in terms of convergence time.\\n However for the scaled-up version there's almost no gain from residual\\n connections!
That just looks so *very* strange.\\n+ No explanation of what inception-resnet-v2 looks like; how big is it exactly?\\n+ A bit more argumentation for the inception-resnet design would be good.\\n To me it seems like several inception blocks for one shortcut would also be\\n an option.\\n+ Not very well-structured, \\\"Model\\\" contains arguments that might be better\\n in \\\"Results\\\" or \\\"Discussion\\\" sections.\\n+ Paper appears to be hastily written, the citation style is confusing, the\\n section \\\"4. Results\\\" seems to be written on a phone?\\n \\\"Firs we compare\\\", \\\"Figure 3 This graph\\\", \\\"newline introduced\\\",\\n+ All combined, I'd say that the paper is worthy of publication but it is a bit\\n premature and looks a bit like \\\"marking territory\\\". On the other hand,\\n this is a workshop paper so this might be fine.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
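The architectural change discussed above wraps each Inception module in a shortcut connection, optionally scaling the residual branch before adding it back to the input. The sketch below shows only that combination step with placeholder branches; it is not the published Inception-ResNet architecture, and the 0.2 scale factor is an illustrative assumption.

```python
import numpy as np

def conv_stub(x, seed):
    """Placeholder for a convolutional branch: a fixed random linear map plus ReLU."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(x.shape[-1], x.shape[-1]))
    return np.maximum(x @ W, 0.0)

def inception_stub(x):
    """Placeholder Inception module: a few parallel branches, concatenated and projected back."""
    branches = [conv_stub(x, s) for s in (1, 2, 3)]
    concat = np.concatenate(branches, axis=-1)
    P = np.random.default_rng(0).normal(scale=0.1, size=(concat.shape[-1], x.shape[-1]))
    return concat @ P                       # stands in for the 1x1 projection

def inception_resnet_block(x, scale=0.2):
    """Residual form: activation(x + scale * inception(x))."""
    return np.maximum(x + scale * inception_stub(x), 0.0)

x = np.random.default_rng(42).normal(size=(4, 256))   # toy activations (batch, channels)
print(inception_resnet_block(x).shape)                # -> (4, 256)
```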
oVgo1jRRDsrlgPMRsBzY
Learning stable representations in a changing world with on-line t-SNE: proof of concept in the songbird
[ "Stéphane Deny", "Emily Mackevicius", "Tatsuo Okubo", "Gordon Berman", "Joshua Shaevitz", "Michale Fee" ]
Many real-world time series involve repeated patterns that evolve gradually by following slow underlying trends. The evolution of relevant features prevents conventional learning methods from extracting representations that separate differing patterns while being consistent over the whole time series. Here, we present an unsupervised learning method for finding representations that are consistent over time and that separate patterns in non-stationary time series. We develop an on-line version of t-Distributed Stochastic Neighbor Embedding (t-SNE). We apply t-SNE to the time series iteratively on a running window, and for each displacement of the window, we choose as the seed of the next embedding the final positions of the points obtained in the previous embedding. This process ensures consistency of the representation of slowly evolving patterns, while ensuring that the embedding at each step is optimally adapted to the current window. We apply this method to the song of the developing zebra finch, and we show that we are able to track multiple distinct syllables that are slowly emerging over multiple days, from babbling to the adult song stage.
[ "patterns", "stable representations", "changing world", "proof", "concept", "time series", "representations", "consistent", "songbird", "songbird many" ]
https://openreview.net/pdf?id=oVgo1jRRDsrlgPMRsBzY
https://openreview.net/forum?id=oVgo1jRRDsrlgPMRsBzY
ICLR.cc/2016/workshop
2016
{ "note_id": [ "yovqKDlMySr682gwszNV", "jZ9XjgEQNTnlBG2Xfz7r" ], "note_type": [ "review", "review" ], "note_created": [ 1457666223400, 1457632224788 ], "note_signatures": [ [ "ICLR.cc/2016/workshop/paper/183/reviewer/12" ], [ "ICLR.cc/2016/workshop/paper/183/reviewer/10" ] ], "structured_content_str": [ "{\"title\": \"Nice contribution to bird song analysis, less clear contribution to representation learning\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"This abstract describes the use of t-SNE to cluster syllables of bird song. The clustering is performed on overlapping windows of the signal, thus connecting them over time and allowing them to evolve as the young bird develops.\\n\\nThe abstract is well written and quite clear. The originality of the method lies more in its use of a new method to analyze birdsong than in its development of a new method for clustering temporal sequences in general. It is not clear that t-SNE is really required here, perhaps other clustering techniques performed over such sliding windows would work just as well on this task.\", \"pros\": \"important problem examined by domain experts, well executed, interesting scientific results\", \"cons\": \"no comparison with other approaches, straightforward extension of existing representation learning techniques\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"ICLR Application paper: using t-SNE to visualize dynamics of songbird vocalizations\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"The paper uses t-SNE for embedding non-stationary bird-song data, by computing\\nembedding of overlapping time windows of a time-series. Smoothness across embedding is \\nachieved by seeding each embedding with the embedding of the previous time window. \\n\\nThe problem of learning dense representations of time series is important and interesting, \\nand the authors are experts in the field of development of songbird vocalization. \\nThe presented approach successfully visualizes interesting data, revealing the formation \\nof songs in developing zebra finch. \\n\\nHowever, this paper is more about applying a well-established method in a standard way, \\nthan about introducing a novel method. There are no quantitative evaluations with competing \\napproaches. As a result, it is not clear what the paper teaches us about methods for learning \\nrepresentations at this point.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
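The procedure described above runs t-SNE on overlapping windows of the time series and seeds each run with the final positions from the previous window. Because scikit-learn's TSNE accepts an ndarray for its init argument, the idea can be sketched directly; the window length, step, perplexity, and the jitter used to place newly entering points are arbitrary choices here, not the authors' settings.

```python
import numpy as np
from sklearn.manifold import TSNE

def online_tsne(X, window=500, step=100, perplexity=30, seed=0):
    """Embed overlapping windows of a time series, seeding each run with the
    previous window's final positions so the map stays consistent over time."""
    rng = np.random.default_rng(seed)
    embeddings, prev = [], None
    for start in range(0, len(X) - window + 1, step):
        chunk = X[start:start + window]
        if prev is None:
            init = "pca"
        else:
            init = np.vstack([prev[step:],                                    # points shared with the last window
                              prev[-1] + 0.01 * rng.normal(size=(step, 2))])  # new points start near the last embedded point
        emb = TSNE(n_components=2, perplexity=perplexity, init=init,
                   random_state=seed).fit_transform(chunk)
        embeddings.append((start, emb))
        prev = emb
    return embeddings

X = np.random.default_rng(1).normal(size=(1200, 20))  # stand-in for syllable features
maps = online_tsne(X)
print(len(maps), maps[0][1].shape)                    # 8 windows, each embedded as (500, 2)
```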
XL9v5ZZ2qtXB8D1RUG6V
Persistent RNNs: Stashing Weights on Chip
[ "Greg Diamos", "Shubho Sengupta", "Bryan Catanzaro", "Mike Chrzanowski", "Adam Coates", "Erich Elsen", "Jesse Engel", "Awni Hannun", "Sanjeev Satheesh" ]
This paper introduces a framework for mapping Recurrent Neural Network (RNN) architectures efficiently onto parallel processors such as GPUs. Key to our approach is the use of persistent computational kernels that exploit the processor’s memory hierarchy to reuse network weights over multiple timesteps. Using our framework, we show how it is possible to achieve substantially higher computational throughput at lower mini-batch sizes than direct implementations of RNNs based on matrix multiplications. Our initial implementation achieves 2.8 TFLOP/s at a mini-batch size of 4 on an NVIDIA TitanX GPU, which is about 45% of theoretical peak throughput, and is 30X faster than a standard RNN implementation based on optimized GEMM kernels at this batch size. Reducing the batch size from 64 to 4 per processor provides a 16x reduction in activation memory footprint, enables strong scaling to 16x more GPUs using data-parallelism, and allows us to efficiently explore end-to-end speech recognition models with up to 108 residual RNN layers.
[ "weights", "framework", "gpus", "processor", "batch size", "persistent rnns", "chip persistent rnns", "chip", "recurrent neural network", "rnn" ]
https://openreview.net/pdf?id=XL9v5ZZ2qtXB8D1RUG6V
https://openreview.net/forum?id=XL9v5ZZ2qtXB8D1RUG6V
ICLR.cc/2016/workshop
2016
{ "note_id": [ "lx9ryX6Zvt2OVPy8Cvy7", "r8lj136nnF8wknpYt5jv", "E8VYKO1GKI31v0m2iDP4", "WL9xG0mgkf5zMX2Kf2m5", "GvV1w6WPkF1WDOmRiMQX", "81DGX2k1pU6O2Pl0UVMK", "ZY9AKAGZ0I5Pk8ELfEy5" ], "note_type": [ "review", "review", "comment", "comment", "comment", "comment", "review" ], "note_created": [ 1457133427650, 1457648546224, 1457567094379, 1457651511141, 1457567182836, 1457567231516, 1457630306631 ], "note_signatures": [ [ "ICLR.cc/2016/workshop/paper/70/reviewer/11" ], [ "ICLR.cc/2016/workshop/paper/70/reviewer/12" ], [ "~Erich_Elsen1" ], [ "~Erich_Elsen1" ], [ "~Erich_Elsen1" ], [ "~Erich_Elsen1" ], [ "ICLR.cc/2016/workshop/paper/70/reviewer/10" ] ], "structured_content_str": [ "{\"title\": \"GPU kernel trick to speed up computation by caching weights\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"The authors presented a GPU kernel trick to speed up computations for RNNs, key points:\\n1] w/ minibatch size of 4, they claim 30x speedup compared to standard GEMM kernels. This is very impressive, such a simple engineering trick can yield 30x speedup!\\n\\n2] One of the arguments for using smaller minibatch sizes is memory (i.e., only 12G/24G in modern GPUs). For example, the authors demonstrated very deep RNNs in their paper. The memory argument isn't very strong since you can always cache the activations (quite easily, cheaply and transparently) to CPU memory using DMA. Additionally, a lot of groups also run CPU models for RNNs, which can easily have >128G of RAM per CPU (CPUs are not that much slower than GPUs in the RNN GEMM type ops esp w/ AVX512 + FMA).\\n\\n3] If the memory argument doesn't hold, the other reason to use a smaller minibatch is if it gives better convergence properties (whether theoretical or empirical). In theory, smaller minibatches should give better convergence, but in practice that may not necessarily be true due to the high variance of the gradients in RNNs. The authors didn't show whether the smaller minibatches did indeed give better empirical performance; if large minibatches are required to get state-of-the-art performance, then this would invalidate a lot of motivations to use smaller minibatches.\", \"comment_on_the_very_deep_residual_rnns\": \"The authors also claimed that residual RNNs (i.e., skip connections) make optimization of deep RNNs possible (i.e., see Table 1); without the skip connections, the deep (48-layer) RNNs are difficult to converge. However, the authors did not compare their models to GRUs or LSTMs, which are generally regarded as much better than RNNs, and stacking such deep RNNs may not be necessary. For example, many acoustic models and end-to-end speech models use only 3-4 layers for RNNs because deeper networks don't seem to help (see Sak et al. 2014; Bahdanau et al. 2015; and Chan et al. 2015 for their acoustic models / end-to-end attention speech papers). The authors also didn't show that the deep RNNs outperform a \\\"shallow RNN\\\" (i.e., assuming the same dev/train sets are used, the Deep Speech 2 paper showed much lower WERs, suggesting that the deep residual RNNs are not necessary). However, part of their motivation is that w/ this kernel trick, we can explore much deeper RNNs.\", \"side_impl_details\": \"The authors left out much detail on the implementation of their kernel. This would make it hard for open source groups to implement and/or replicate their work.
Would be really nice/cool if the authors were willing to open source their kernel to Theano / Torch / TensorFlow etc...\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Efficient pipelining and GPU synchronization for RNN training for 30X recurrent layer speed up and 10x system level speed up\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"Important work in dealing with bottlenecks in training of the recurrent layers of an RNN, which allows for high GPU usage down to small mini-batch sizes of 4. I'm not familiar with other hand written assembler software pipelines and optimized implementations of a global barrier, making it difficult to gauge baseline comparisons to Nervana and CUBLAS (being the one small con), but the gains are impressive and should be reported. 30X on recurrent layers, 10x at system level and 45% of peak theoretical throughput for NVIDIA TitanX GPU.\\n\\nOne question, the implementation is focused on the recurrent computation of a vanilla RNN, but more and more we see GRUs or LSTMs in end to end system, which have more complicated recurrent dependencies. For something like a GRU where reset gates computed from one recurrent computation is applied as a pointwise multiplication on another recurrent computation, do you forsee any problems in your current pipeline strategy and optimized global barrier?\\n\\nOn page 3 you mention performance is much better at layer sizes 2048 or 2560, suggesting advantages of persistent implement as model become deeper and thinner (do you mean wider? as the comparison is to 1152?).\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Response to Reviewer 11\", \"comment\": \"(Responses broken over multiple comments due length limits)\\n\\n[Response to 2] : While we do appreciate the importance of exploring alternative techniques like activation caching in CPU memory and considering other processors types (like CPUs) for performing the RNN computation, we respectfully disagree with both of these points in the context of training RNNs on commodity GPU platforms for the following reasons.\\n \\n Dense GPU systems like ours don't have enough PCIe and CPU memory bandwidth to stream the activations back to the CPU while running at full speed. Specifically, running at approximately 3 TFLOP/s, a single 1152 recurrent layer finishes processing activations with 700 timesteps and mini-batch of 4 (per GPU) in approximately 2.4 milliseconds. We use 8 GPUs in one system that share memory provided by 2 CPUs. Using careful assignment of CPU memory to local NUMA nodes for individual GPUs, we sustain about 2.5 GB/s of CPU DRAM bandwidth per GPU when all 8 GPUs are copying data simultaneously. This is lower than the peak PCIe bandwidth of a single GPU connected to a single CPU because multiple GPUs are sharing PCIe routers, as well as the CPU memory controllers. So transferring the activations to the CPU would take approximately 5.1 milliseconds, causing about a 2x slowdown for the complete system even if the copies and compute were perfectly overlapped. Additionally, we already use the majority of available PCIe and CPU interconnect bandwidth to perform the all-reduce for data-parallel training, which reduces our available PCIe and CPU memory bandwidth for streaming activations even further. 
\\n \\n Regarding CPU vs GPU performance, Intel's fastest commercially available Xeon processor is currently the E-5 2699v3 processor, which has a peak single precision floating point throughput of approximately 1.3 TFLOP/s. This is approximately 5x lower than a single TitanX GPU's 6.1 TFLOP/s (using base clocks for both the CPU and the GPU) and a 20x difference between the 8 GPUs and 2 CPUs in one server. We would like to note that CPU GEMM implementations are also sensitive to small batch sizes, where the more important metric becomes memory bandwidth. The maximum memory bandwidth of the two Intel processors would be 126 GB/sec vs. 2692 GB/sec for the 8 GPUs. Although the CPUs are less memory capacity constrained, requiring a large batch size per CPU could still limit the maximum amount of data-parallel scaling. So a CPU could also benefit from this technique.\\n \\n Systems that use fewer GPUs or slower RNN implementations may not have these problems, but the motivation of our work is improving the performance of training RNNs, so we do care about factors of 2x and 5x.\\n \\n Finally, this technique reduces the memory required to train an RNN in an absolute sense that is complementary to increasing the memory capacity. For any memory capacity (12GB on a GPU, or 512 GB on a CPU), this technique increases the maximum utterance length or network size that can be trained.\"}", "{\"title\": \"Response to reviewer 12\", \"comment\": \"The main difficulty in implementing GRU and LSTM layers using this technique is larger parameter count (3 or 4x) which requires the layer size be proportionally smaller, which would mean very narrow layers. We think the other technical challenges related to pipelining, barriers and how to map the parameters amongst the SMs for load balancing could be solved. Newer GPUs like Pascal will increase the available amount of registers to store parameters which combined with fp16 storage could increase the GRU and LSTM layer size to something reasonable.\\n\\nThe sentence you are referring to is saying that the performance of traditional gemm libraries (like CUBLAS or Nervana) is much better at layer sizes of 2048 or 2560 compared with 1152, so the advantage of persistent kernels is larger for deep and narrow networks.\"}", "{\"title\": \"Response to Reviewer 11\", \"comment\": \"(Response broken over multiple comments due to length limits)\\n\\n[Response to 3] :\\n One of the main reasons to move to a smaller mini batch size is that it enables greater levels of parallelism, a point which the reviewer seems to have missed. If convergence starts to slow down after a batch size of 1024, then with a mini-batch size of 64 per GPU, this limits data parallel runs to 16 GPUs. With a mini-batch size of 4, it increases the limit to 128 GPUs.\\n \\n We did not have space in the 3-page format to include empirical results showing slower convergence with very large mini-batch sizes. Clearly in the extreme case it must be true that large minibatch sizes converge more slowly than smaller sizes. For example, our models with a mini-batch size of 512 converge in about 20 epochs, and one would not expect a model with a mini-batch size of the entire training set (approximately 6 million samples) to converge in 20 iterations. In practice we have found no difference in epochs to reach the same training accuracy for mini batch sizes between 1 and 1024, but beyond that, we have seen significantly slower convergence. 
See the following empirical results showing that convergence is slower (in terms of epochs to the same level of performance). Note that we searched over the other Nesterov SGD hyperparameters after changing the batch size.\", \"dev_set_cost_at_10_epochs\": \"\", \"32_mini_batch\": \"48.2\", \"64_mini_batch\": \"48.1\", \"128_mini_batch\": \"48.2\", \"256_mini_batch\": \"48.2\", \"512_mini_batch\": \"48.2\", \"1024_mini_batch\": \"51.5\", \"2048_mini_batch\": \"75.3\"}", "{\"title\": \"Response to Reviewer 11\", \"comment\": \"(Response broken over multiple comments due to length limits)\\n\\n[Response to Comment on very deep residual RNNs] : Note that we did include results on \\\"shallower\\\" RNNs with only 8 and 24 layers, showing a clear trend of improved performance with depth. We did not include the results for smaller numbers of layers to save space (since this result has been demonstrated in prior work), but the trend continues down to 1 layer. In our previous work, we showed that batch normalization improved performance with more than 3-4 layers (up to 7-10 layers), and in this work, we provide preliminary results that the combination of residual connections and batch normalization enables better performance up to 40-50 layers. We agree that it would be interesting to study GRUs or LSTMs in future work, as this idea could also be applied to those architectures. We plan to explore this next.\\n \\nNote that the higher WER rates compared to DS2 are a result of the use of a model with only 800ms of future context (as opposed bidirectional models with unlimited context), and a smaller training dataset (500 hours vs 11,000 hours). The focus on models with less future context is done to make our models most relevant for deployment scenarios that are sensitive to latency. The use of a smaller dataset was primarily due to time constraints with the submission. We hoped that this would be acceptable given the emphasis on preliminary results in the CFP for this workshop.\\n\\n [Response to side impl details] : We also wish that we could share more implementation details about this work, but it was not possible to fit everything into the 3-page workshop format. We tried to include the most essential aspects (caching RNN weights in the register file and performing inter-SM communication with a global barrier), but it should be clear that including all of the details about the assembly level optimizations that went into this kernel would not have fit in the 3 page format. We would be open to suggestions about additional details that could be added, but given the page limit, anything that we add will involve removing something else. We expect to follow on this work with a longer paper and blog post explaining the kernel implementation in enough detail for others to replicate this work.\"}", "{\"title\": \"Implementation paper on RNNs that greatly speed-up small mini-batches\", \"rating\": \"7: Good paper, accept\", \"review\": \"This is a good paper, adequate for the ICLR workshop track. Some comments:\\n\\n-Decreasing the batch size whilst maintaining a decent throughput in terms of examples / s is a crucial component of minibatch-style optimization methods. Thus, this work is important and should motivate the community towards that direction.\\n-Do speed ups hold when decreasing the number of RNNs? Or decreasing their size?\\n-In the data parallelism experiments with multiple GPUs, do the authors use synchronous SGD? If so, the effective batch size increases. 
Do they observe a degradation of training speed per epoch of data seen?\\n-Are the authors going to release the source code? Given the amount of code sharing and the number of deep learning platforms, it seems like the major contribution to the community would be to do so.\\n-Please define SM.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
mO9mQWp8Rij1gPZ3Ul5q
Multi-layer Representation Learning for Medical Concepts
[ "Edward Choi", "Mohammad Taha Bahadori", "Jimeng Sun", "Elizabeth Searles", "Catherine Coffey" ]
Learning efficient representations for concepts has been proven to be an important basis for many applications such as machine translation or document classification. Proper representations of medical concepts such as diagnosis, medication, procedure codes and visits will have broad applications in healthcare analytics. However, in Electronic Health Records (EHR) the visit sequences of patients include multiple concepts (diagnosis, procedure, and medication codes) per visit. This structure provides two types of relational information, namely sequential order of visits and co-occurrence of the codes within each visit. In this work, we propose Med2Vec, which not only learns distributed representations for both medical codes and visits from a large EHR dataset with over 3 million visits, but also allows us to interpret the learned representations confirmed positively by clinical experts. In the experiments, Med2Vec displays significant improvement in key medical applications compared to popular baselines such as Skip-gram, GloVe and stacked autoencoder, while providing clinically meaningful interpretation.
[ "visits", "medical concepts", "representation", "diagnosis", "visit", "efficient representations", "concepts", "important basis", "many applications", "machine translation" ]
https://openreview.net/pdf?id=mO9mQWp8Rij1gPZ3Ul5q
https://openreview.net/forum?id=mO9mQWp8Rij1gPZ3Ul5q
ICLR.cc/2016/workshop
2016
{ "note_id": [ "71Bozj5zOiAE8VvKUQRy", "GvV10KXRBi1WDOmRiMk4", "YW9joWo7zsLknpQqIKZ1", "71BozgLgnuAE8VvKUQRG", "L7VjEWkOpTRNGwArs4jX" ], "note_type": [ "comment", "review", "comment", "comment", "review" ], "note_created": [ 1457765441718, 1457647087684, 1457856096308, 1457765128918, 1457647026687 ], "note_signatures": [ [ "~Edward_Choi1" ], [ "ICLR.cc/2016/workshop/paper/161/reviewer/12" ], [ "~Edward_Choi1" ], [ "~Edward_Choi1" ], [ "ICLR.cc/2016/workshop/paper/161/reviewer/10" ] ], "structured_content_str": [ "{\"title\": \"General response to the reviewers' comments\", \"comment\": \"We\\u2019d like to thank the reviewers for the insightful comments.\", \"some_of_the_comments_could_be_addressed_by_the_extended_version_in_arxiv_as_we_mentioned_in_the_paper\": \"http://arxiv.org/abs/1602.05568\\nDue to lack of space, we had to make compromises with details.\"}", "{\"title\": \"Good application paper about a bag-of-words model with temporal dynamics\", \"rating\": \"7: Good paper, accept\", \"review\": \"This paper presents a simple two-layer neural network for modeling bag-of-word representations with temporal dynamics, applicable to sequences of related documents. The end-to-end learning algorithm consists in optimising the sum of two losses, the cross-entropy for co-occurrences of words in one document, and the cross-entropy for predicting words in a document appearing at a different time.\\n\\nInstead of words, the authors use medical codes (with a vocabulary of about 29k codes) in consecutive patient records, and demonstrate the applicability of their model to predicting medical codes records, using a set of 3.3M visits for 550k patients, and demonstrate that their model outperforms non-temporal models such as stacked-autoencoders, skip-grams or Glove vectors (as well as a plain sum of one-hot representations). When predicting the codes of the next visit, they achieve a recall of about 0.76 to 0.77 at 30 codes.\\n\\nThe paper is well written but some areas remain unclear.\\n\\n1) Why does loss function (1) and figure 1 suggest that codes from visit V_t are used interchangeably to predict codes from visists V_{t-w}, V_{t-w+1}, ..., V_{t-1}, V_{t+1}, ..., V_{t+w}? The application is to predict V_{t+1} from V_t.\\n\\n2) What are the demographic data used by the authors (vector d_t)?\\n\\n3) If I am not mistaken, the fact the one-hot vectors perform as well as the other methods seems to suggest that there is a strong stationarity in the consecutive visits, on which the authors should elaborate. Similarly, the performance of skip-grams is surprisingly bad compared to the other methods.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Authors' response to the comments\", \"comment\": \"Thank you for your comments and suggestion.\\n1) This is a good suggestion. We did not see improvement when using a larger context window for training visit representation, and some changes along this line could alleviate this issue. \\n2) We used age, gender and ethnicity for the demographic information, as mentioned in the extended version. Defining the demographic information as an input to f_V is a correct statement. We will revise this part in the final version.\\n3) X-axis of figure 2 was an oversight at the last moment. 
The correct labels are \\u201cSize of the code representation\\u201d, \\u201cNumber of training epochs\\u201d, \\u201cSize of the visit representation\\u201d, \\u201cSize of the context window for training visit representation\\u201d. We will correct this in the final version.\\n4) The interpretation part is included in the extended version, but due to lack of space we could not put it in 3 pages.\"}", "{\"title\": \"Authors' response to the comments\", \"comment\": \"Thank you for your comments and suggestion.\\n1) v_t, the visit representation at time t is trained to predict the codes from neighboring visits V_{t-w}, V_{t-w+1}, \\u2026, V_{t+w}. The codes of V_t themselves are used to generate the visit representation v_t. The application where we use v_t to predict the codes in V_{t+1} is for evaluation purposes. We do not learn visit representations only for that prediction task. In the extended version, we conduct another prediction task for additional evaluation. \\n2) We used age, gender and ethnicity as mentioned in the extended version. \\n3) Stationarity [not to be confused with stationary random processes] is a keen observation. Patients rarely go through drastic changes in a short time window. That was the basis of our assumption that neighboring visits should be predictive of each other. Skip-gram seems to learn slowly compared to GloVe because it can only use local co-occurrence information. Med2Vec alleviates this problem by leveraging neighboring visits.\"}", "{\"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"This paper presents an interesting application of embedding learning algorithms to the Healthcare domain. It introduces a model that can jointly learn embeddings of two healthcare concepts: medical codes and patient visits. The set of all medical codes is analogous to words or the vocabulary of a language, and a patient visit is defined as a subset of all medical codes, i.e. a bag-of-words.\\n\\nThe paper is clear and well written. While the model used in this paper is not original, it is a good start for learning representations in the healthcare domain.\", \"major_issues\": \"1- I find the model used for learning visit representations a bit problematic. The prediction given a specific visit representation v_t is provided with several (in fact 2*w) different (bag-of-word) targets simultaneously, each for a different neighbouring visit. I believe this would cause some kind of \\\"unlearning\\\", where the model is asked to predict different things for the same input. \\nA better model could be to use a different softmax for each neighbouring visit target, or aggregate those targets (e.g. average or AND) and ask the model to provide one single prediction per input.\\n\\n2- There is some important information missing. For example, I cannot find a definition of the demographic information d_t. In fact, using this information should be defined as input to the function f_V.\\n\\n3- Figure 2 has 4 different sub-plots, all with the same x-axis but with different ranges. It is not clear what the authors are trying to show here, and why not combining all of them in one plot.\\n\\n4- Claims about better interpretability of the representations are not supported by experimental result.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
ZY9xQwqKZh5Pk8ELfEzD
Neurogenic Deep Learning
[ "Timothy J. Draelos", "Nadine E. Miner", "Jonathan A. Cox", "Christopher C. Lamb", "Conrad D. James", "James B. Aimone" ]
Deep neural networks (DNNs) have achieved remarkable success on complex data processing tasks. In contrast to biological neural systems, capable of learning continuously, DNNs have a limited ability to incorporate new information in a trained network. Therefore, methods for continuous learning are potentially highly impactful in enabling the application of DNNs to dynamic data sets. Inspired by adult neurogenesis in the hippocampus, we explore the potential for adding new nodes to layers of artificial neural networks to facilitate their acquisition of novel information while preserving previously trained data representations. Our results demonstrate that neurogenesis is well suited for addressing the stability-plasticity dilemma that has long challenged adaptive machine learning algorithms.
[ "dnns", "remarkable success", "contrast", "biological neural systems", "capable", "limited ability", "new information", "trained network" ]
https://openreview.net/pdf?id=ZY9xQwqKZh5Pk8ELfEzD
https://openreview.net/forum?id=ZY9xQwqKZh5Pk8ELfEzD
ICLR.cc/2016/workshop
2016
{ "note_id": [ "VAVwRrNJ2Tx0Wk76TAQ7", "p8jOLkwwYSnQVOGWfpkG", "OM0WY6zK4ip57ZJjtN5Y", "gZ9BMZ87QtAPowrRUAK2", "Qn8YNovLBIkB2l8pUYAl", "ROV4j1QAWCvnM0J1Ip0q" ], "note_type": [ "review", "comment", "review", "comment", "review", "comment" ], "note_created": [ 1457647327945, 1458242237061, 1457699488600, 1458242092308, 1457731497707, 1458242395234 ], "note_signatures": [ [ "ICLR.cc/2016/workshop/paper/129/reviewer/10" ], [ "~Timothy_Draelos1" ], [ "ICLR.cc/2016/workshop/paper/129/reviewer/12" ], [ "~Timothy_Draelos1" ], [ "ICLR.cc/2016/workshop/paper/129/reviewer/11" ], [ "~Timothy_Draelos1" ] ], "structured_content_str": [ "{\"title\": \"An interesting idea for adapting neural network models to new input data. The research topic is particularly relevant in the light of the large neural nets that have been trained recently.\", \"rating\": \"7: Good paper, accept\", \"review\": \"The authors consider the problem of dynamically adapting a neural network to new input data (e.g. new classes), for which the required features might not have been learned. A neurogenesis method is proposed, that progressively introduces new neurons to the model as more data is presented to the network.\\n\\nThe method is motivated from a cost perspective, by explaining that adapting an existing model to new data is cheaper than storing the previous data and training a new model as soon as new data is being observed. The method and motivations are clearly explained and summarized.\\n\\nThe considered MNIST deep autoencoder model with sequentially introduced digit classes is a reasonable choice for testing, although the retraining costs are probably becoming more important when considering more complex models such as large convnets.\\n\\nA technique called \\\"intrinsic replay\\\" is applied in addition to the neurogenesis process. It seeks to generate examples that are similar to those observed previously, and feeding them to the neural network in addition to the newly observed data.\\n\\nThe properties of neural network learning dynamics with time-dependent data distributions is still an open question, that has not been very extensively studied, but a highly relevant one.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Responses to weaknesses.\", \"comment\": \"The authors thank this reviewer for the positive comments and the identification of a couple weakness of our work being the lack of testing of hyperparameters and the lack of another dataset beyond MNIST. It is true that we have not rigorously tested our hyperparameter values, so we do not fully understand the sensitivities to these parameters nor do we know whether we have reached optimal performance. We intend to perform additional experiments to resolve this issue. Regarding additional datasets to use for this work, we are preparing to apply neurogenic deep learning to the CIFAR-10 dataset and hope to present results in the poster presented at ICLR. We will revise the paper to include references related to biological connections of our work.\"}", "{\"title\": \"Interesting heuristic idea\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"Interesting idea about how to increase the representation power of a DNN as we receive new (different) data. The paper is brief and clear which is welcome. 
The weakness of the paper is the different heuristics (hyperparameters) used to decide when to grow the DNN and the \\"lower learning rate\\" used to maintain stability in the old weights. The influence of these parameters has not been tested, though. Moreover, it would be interesting to use a different dataset beyond MNIST, which tends to be used only as a \\"sanity check\\".\\n\\nThere are several sentences like \\"This step relates to the notion of plasticity in biological NG\\" that would need some reference.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Clarifications, responses to 3 concerns\", \"comment\": \"This reviewer raises some concerns about the paper regarding our communication of: 1) transfer learning (TL), 2) intrinsic replay (IR), and 3) control experiments. We will amend the paper in line with the following comments.\\n\\n1) On further reflection, we believe our experiments, as well as the application of our work, are related more to Online Learning (OL) than to TL. Our objective is to adaptively learn to represent all newly presented data instances from old and/or new data classes. Our experiments demonstrate this in batch form by presenting all data instances in a given class, but it is just as applicable to a streaming scenario. We are aware of DNNs' successes in transfer learning, but our focus has been on situations where that isn\\u2019t the case. DNNs have definite advantages in transfer learning problems, with their general-purpose feature detectors at shallow layers of a network, but our goal is to identify inputs that a trained network finds difficult to represent, whether from a different, new class or not. In this regard, we are addressing OL and have revised the paper accordingly.\\n\\n2) The choice of using a multivariate Gaussian distribution to represent top layer codes is an initial conjecture and the subject of a separate research effort, but it worked well in support of our neurogenesis work. Before neurogenesis is performed, data samples from each old data class are generated and used in the neurogenesis process AND in updating the distribution of the top layer codes after neurogenesis. We revised the paper to include information related to the previous sentence.\\n\\n3) Regarding the experimental control networks, there is one control network for the neurogenesis network when using IR (NG+IR) and a different control network for the neurogenesis network when not using IR (NG). We first train a control network using all training samples of digits 1 and 7, starting with random weights. This network is the same size as the one created during neurogenesis. Then, to perform OL, all samples from a single, new digit (first 0, then 2, then 3, then 4, then 5, then 6, then 8, and finally 9) are presented and training occurs for all weights in the autoencoder for a fixed number of epochs. In the paper, the network (NG) trained with neurogenesis, but not with IR, is also considered a control network for the NG+IR network. The paper has been revised to clarify the experiments.\"}", "{\"title\": \"Paper on interesting heuristic idea but with experimental details missing.\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"This paper presents an algorithm for growing the representational capacity of an autoencoder (AE) network in a data-dependent way. The idea is interesting, but the presentation is rough (as might be expected for a workshop submission). 
For example, see the several sections quoted below:\\n\\n> DNNs are \\u2026 [not] well suited for transfer learning (TL)\\n\\nActually, many of the results shown in hundreds of papers over the last few years have been built on the effectiveness of transfer learning using DNNs.\\n\\n> Samples from old classes are generated via hippocampus-inspired \\u201cintrinsic replay\\u201d (IR) by retrieving a high-level representation through sampling from the multivariate Normal and Cholesky decomposition of the top layer of the full encoder network and then leveraging the full decoder to reconstruct new data points from that previously trained class.\\n\\nPlease fill in additional details here. Is a multivariate Gaussian distribution being fit to the top layer code? If so, how is it updated to account for the new units, particularly if the old data is not used? Is there any reason to suspect a Gaussian distribution is remotely a good fit to the layer code?\\n\\n> Control 1 (TL+IR) - an AE trained first on the subset digits 1 and 7 and then retrained with one new single digit at a time with standard TL,\\n\\nWhat is \\u201cstandard TL\\u201d?? There is not a single standard transfer learning setup. More details need to be provided about what is actually happening in the Control experiments; without them the results are hard or impossible to interpret.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Response to review\", \"comment\": \"The authors thank this reviewer for noting the positive contributions of our work. We do intend to explore our basic neurogenic deep learning ideas applied to convolutional neural networks.\"}" ] }
D1VDjyJjXF5jEJ1zfE53
Revise Saturated Activation Functions
[ "Bing Xu", "Ruitong Huang", "Mu Li" ]
In this paper, we revise two commonly used saturated functions, the logistic sigmoid and the hyperbolic tangent (tanh). We point out that, besides the well-known non-zero centered property, slope of the activation function near the origin is another possible reason making training deep networks with the logistic function difficult to train. We demonstrate that, with proper rescaling, the logistic sigmoid achieves comparable results with tanh. Then following the same argument, we improve tahn by penalizing in the negative part. We show that ``penalized tanh'' is comparable and even outperforms the state-of-the-art non-saturated functions including ReLU and leaky ReLU on deep convolution neural networks. Our results contradict to the conclusion of previous works that the saturation property causes the slow convergence. It suggests further investigation is necessary to better understand activation functions in deep architectures.
[ "activation functions", "tanh", "revise", "saturated functions", "logistic sigmoid", "hyperbolic tangent", "centered property", "slope", "activation function", "origin" ]
https://openreview.net/pdf?id=D1VDjyJjXF5jEJ1zfE53
https://openreview.net/forum?id=D1VDjyJjXF5jEJ1zfE53
ICLR.cc/2016/workshop
2016
{ "note_id": [ "p8jJ1gJvOcnQVOGWfpYL", "GvVGgXNrJU1WDOmRiM08", "XL9ZAq22LsXB8D1RUGDZ", "yovVNL4Opur682gwszQj", "MwVLGlnxNCqxwkg1t7Jp" ], "note_type": [ "official_review", "review", "review", "comment", "comment" ], "note_created": [ 1457481105739, 1458070725921, 1457437036269, 1457480146992, 1457483296999 ], "note_signatures": [ [ "~Dmytro_Mishkin2" ], [ "ICLR.cc/2016/workshop/paper/165/reviewer/10" ], [ "ICLR.cc/2016/workshop/paper/165/reviewer/11" ], [ "~Bing_Xu1" ], [ "~Bing_Xu1" ] ], "structured_content_str": [ "{\"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"The authors study weaknesses of the saturated activation functions (TanH and Sigmoid) and propose ways of improving it.\", \"in_first_part_of_paper_authors_propose_to_fix_difficulties_of_training_sigmoid_networks_by_one_of_two_ways\": \"a) Simply apply separate learning rates to the different layers: starting with 1 for linear classifier, multiply each previous lr by 4 and correct the weight matrices the opposite way. This could be seen as analogy of (He et al., 2015) correction of (Glorot et.al, 2010) formula but for Sigmoid networks instead of ReLU.\\nThe idea is simple - to compensate vanishing gradient by proportionally increasing learning rate. As far as I checked on 4-layer network, it works, but seems rather impractical for deep network, i.e. correction coefficient for 11-layer network would be 4^11 = 4194304.\\nHowever, the idea of layer-wise adjustment of learning rate potentially could help is other situations, not necessary with sigmoid. \\n\\nb)rescale and shift sigmoid to so called sigmoid*(x) = 4*sigmoid(x) - 2.\\nThis solution is much simpler, but I believe, such function cannot be called sigmoid, rather tanh. Reason:\\n sigmoid*(x) = 4*sigmoid(x) - 2 = 4/(e^-x +1) - 2 = (4 - 2e^(-x) - 2 ) / (e^-x + 1)= 2 (1 - e^-x)/(e^-x + 1) = 2*tanh(x/2).\\n\\nIn the second part of the paper authors propose \\\"leaky\\\" version of TanH, showing that it could compete with ReLU activation.\", \"it_could_be_interesting_to_see_how_other_relu_family_inspired_variants_of_tanh_would_perform\": \"a)rectified TanH = 0, if x < 0\\nb)Randomized Leaky TanH in RReLU fashion\\nc)Parametric Leaky TanH.\\n\\nI believe that paper with such evaluation will help future research in saturated activation functions.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"This is a very badly written paper and there are some mistakes. The idea presented in the paper are not really suprising. This paper requires a serious and much better write-up.\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"This is a very badly written paper. There are several typos and mistakes. I will not list them, I recommend the authors to proof-read the paper a few more times. On the other hand, those mistakes make the paper difficult to follow, although the ideas presented there are not very complicated. I think this paper requires a better write-up.\\n\\nSection 2 is very confusing and ambiguous. Where does w^{(l)} come from? It is not define in the text. \\nI really can not see how you go from Eqn 3 to Eqn 4.\\n\\nTaylor approximations in Eqns 7,8 and 9 are done around 0. You should explicitly state that in the text. Eqn 7 is wrong.\\nEqn 10 is basically 2*tanh(x).\\nThere is a discontinuity between the ideas presented in Section 2 and Section 3. \\n\\nWhy the results of Sigmoid on Table 1 are N/A. 
If you are not going to put the results there, why did you put this there?\\nIt seems like still leaky-tanh performs slightly worse than leaky-ReLU. Why would one prefer leaky-tanh over leaky-ReLU for feedforward networks? The conclusion provided in the paper is obvious and really not surprising. These ideas could be better analyzed and investigated.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"The authors try to understand why deep networks using sigmoid non-linearity are very hard to train and often never converge. They propose a scaled version of sigmoid based on their reasoning for the failure of sigmoid and experimentaly show that proposed activation function can be used to train deep networks. The authors also show that even saturating activation functions being \\\"Leaky\\\" improves the performance of networks to same level as that of non-saturating activation functions.\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"The authors claim that under certain assumptions on the variance of pre-nonlinear output and the gradient, the initialization method from [Glorot&Bengio,2010] can be recovered when the derivative of the activation equals to one. Via Taylor expansion of the activation functions they show that sigmoid violates this condition.\\n\\nI do not understand the statement \\\"Clearly, when x is around 0 Sigmoid will make gradient vanishing if we use same learning rate in each layer.\\\" I believe the derivative of sigmoid is non-zero especially around zero and close to zero else where. I believe gradient vanishing occurs when the activation function is pushed into saturated regime. \\n\\nThe proposed version of sigmoid (sigmoid*) can be seen as a version of tanh function with larger non-saturated regime and steeper gradient. Similar activations were proposed in past (sorry for not being able to provide a reference at this moment). I encourage authors to look at the hidden responses of the network trained using sigmoid*, they might be operating in non-saturated regime. I also don't see how the proposed activation is equivalent to using different learning rates for different layers?, unless combined with some specific initialization scheme.\\n\\nRectifiedTanh function, which is a specific case of proposed leaky tanh, was shown to perform as good as Relu in http://arxiv.org/pdf/1506.08700v1.pdf. I believe that the leaky nature of proposed activation helps, but the main jump in performance might be due to the rectification nature of the function.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"RE: Reviewer 11\", \"comment\": \"Thanks for reviewing.\\n \\n1> \\u201cI do not understand the statement \\\"Clearly, when x is around 0 Sigmoid will make gradient vanishing if we use same learning rate in each layer.\\\" \\u201c\\n \\nBy doing z-score normalization on input data, which is commonly used in training neural networks, we normalize the data to have mean 0 and variance 1. Assume x is near 0, according to Equ(6), each sigmoid layer will reduce the variance to (1/4)^2 that of the previous upper layer, which is 1/16. For a network with $l$ sigmoid layers, the bottom gradient variance is (1/16)^l, e.g. 
for a network with 33 sigmoid layers in our experiment, if the variance of the top layer gradient is 1, the bottom layer gradient variance will be: 1.8367099231598242e-40 which is very close to 0. \\n\\n2> I believe gradient vanishing occurs when the activation function is pushed into saturated regime. The proposed version of sigmoid (sigmoid*) can be seen as a version of tanh function with larger non-saturated regime and steeper gradient. Similar activations were proposed in past (sorry for not being able to provide a reference at this moment)\\n \\nIt is widely accepted that when the neuron is pushed into a saturated regime, the gradient will suffer vanishing problem. In our paper, we propose another possible reason for this vanishing problem. \\nNote that for all these variance based initialization methods, each neuron is initialized independently around 0. Sigmoid networks fail to converge at the very beginning of the training procedure. Hardly all the neuron can be pushed into their saturated regime in this case. \\n\\nIndeed, compared to tanh, sigmoid* has a larger non-saturated regime, but with an higher order of x^3 which is neglectable when x is near 0. (see the Taylor expansion. It remains interesting how one can construct an better activation function incorporate such property.) The conclusion we try to make in this paper is that under these variance based initialization methods, the gradient when x near 0 being not 1 is the problem causing failure of the sigmoid function. We also test the activation function 2 * sigmoid*, and it also fails to converge. Note that this function has a even larger non-saturated regime, but violate the precondition that gradient equals 1 when x is near 0.\\n\\nWe are looking forward to the references about using activation function with larger non-saturated regime and steeper gradient.\\n\\n\\n3> I encourage authors to look at the hidden responses of the network trained using sigmoid*, they might be operating in non-saturated regime. \\n\\nThe main contribution of sigmoid* is that point to gradient vanishing reason and solution joint in initialization and optimization (learning rate in each layer). We can definitely use sigmoid activation function but change initialization & learning rate according to Equ(4) and Equ(6) as it is equivalent.\\n \\n4> I also don't see how the proposed activation is equivalent to using different learning rates for different layers?, unless combined with some specific initialization scheme.\\n \\nBasically, this comes from the commutative property of multiplication.\", \"forward_pass\": \"Recall Equ(4), for each layer $l$, $Var[y^l] = n_l Var[w^l] diag(f\\u2019(y^{l-1})) Var[y^{(l-1)}] ] diag(f\\u2019(y^{l-1}))$. If we use sigmoid, the top output variance will be $1/16$ of previous layer. In Sigmoid^* activation function, it is:\\n \\n$Var[y^l] = 16 * n_l Var[w^l] diag(f\\u2019(y^{l-1})) Var[y^{(l-1)}] ] diag(f\\u2019(y^{l-1}))$. \\n \\nRecall the Xavier initialization method, it is equivalent to initialize w as sqrt(16 * original term), which is multiply 4 for each weight layer\\u2019s initialization. According to Equ(6), for $l$th sigmoid layer, this activation is equivalent to multiply 4^l to the weight term before sigmoid activation.\", \"backward_pass\": \"Recall Equ(6). Similarly, we can derive the scale 4^l in each gradient term. 
This scale can be communicated to learning rate term, which make learning rate multiply by 4^l.\\n \\n5> RectifiedTanh function, which is a specific case of proposed leaky tanh, was shown to perform as good as Relu in http://arxiv.org/pdf/1506.08700v1.pdf. but the main jump in performance might be due to the rectification nature of the function.\\n\\nI have tested the Rectified Tanh function, the performance is slightly worse than ReLU, and far worse than Leaky ReLU. Rectified Tanh also breaks the symmetry of Tanh. We don\\u2019t make a conclusion that this kind of breaking is the reason, but our empirical experiments show the result. We raise an interesting question in the end of this paper: \\u201cHow does the positive part (on [0, +\\u221e)) and the negative part (on (\\u2212\\u221e, 0]) of the activation function affect the performance of the network?\\u201d We believe this question requires some more theoretical explanation but not empirical conclusion.\"}", "{\"title\": \"RE: Dmytro\", \"comment\": \"Thanks for reviewing.\\na> \\nI have tested 8 layers and there is no problem. Recall IEEE-754 standard (https://en.wikipedia.org/wiki/IEEE_floating_point), large/small number representation is not a problem, but may not be accurate when doing multiplication for a very small number and a very large number.\\n\\nAlso, a heuristic learning rate factor \\\\alpha = ||w||_f / ||g||_f is helpful to train 8 layers sigmoid network. \\n\\nb> \\n\\\"Sigmioid*\\\" is simply a name for convenient scale each layer\\u2019s learning rate and initialization for Sigmoid activation without multiply a verge large number, but multiply 4 layer by layer.\", \"the_key_idea_is\": \"We need to preserve variance of output feature and gradient. The Sigmoid* fix sigmoid\\u2019s feature/gradient vanishing problem by rescaling. Again we can definitely use standard Sigmoid activation function but change initialization & learning rate according to Equ(4) and Equ(6) as it is equivalent. (commutative property of multiplication)\\n\\n\\nc> \\nYes empirically run a lot of experiments are helpful. However I think current experiment is able to demonstrate the problem. \\n\\nI have RRTanh, PTanh\\u2019s result, which is similar to ReLU case in http://arxiv.org/abs/1505.00853.\\n\\nI think we need to focus more on theoretical understanding in the question \\u201cHow does the positive part (on [0, +\\u221e)) and the negative part (on (\\u2212\\u221e, 0]) of the activation function affect the performance of the network?\\u201d\"}" ] }
1WvOZJ0yDTMnPB1oinGN
Learning Retinal Tiling in a Model of Visual Attention
[ "Brian Cheung", "Eric Weiss", "Bruno Olshausen" ]
We describe a neural network model in which the tiling of the input array is learned by performing a joint localization and classification task. After training, the optimal tiling that emerges resembles the eccentricity dependent tiling of the human retina.
[ "retinal tiling", "model", "visual attention", "neural network model", "tiling", "input array", "joint localization", "classification task", "training", "optimal tiling" ]
https://openreview.net/pdf?id=1WvOZJ0yDTMnPB1oinGN
https://openreview.net/forum?id=1WvOZJ0yDTMnPB1oinGN
ICLR.cc/2016/workshop
2016
{ "note_id": [ "P7VnBoOO2cKvjNORtJrl", "p8jEvwpENUnQVOGWfp28", "p8jOnGz9EInQVOGWfpkx", "VAV8W9MZjcx0Wk76TAEE", "K1VgNmm8Di28XMlNCVAW" ], "note_type": [ "review", "review", "comment", "comment", "review" ], "note_created": [ 1457666977623, 1457964420921, 1458256478753, 1458254815933, 1457631832009 ], "note_signatures": [ [ "ICLR.cc/2016/workshop/paper/163/reviewer/11" ], [ "ICLR.cc/2016/workshop/paper/163/reviewer/10" ], [ "~Brian_Cheung1" ], [ "~Brian_Cheung1" ], [ "ICLR.cc/2016/workshop/paper/163/reviewer/12" ] ], "structured_content_str": [ "{\"title\": \"Novelty, but needs basic evaluation metrics\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"The authors propose to learn a grid of filters with learnable mean and\\nvariance. They provide preliminary experiments on the translated cluttered\\nMNIST dataset. They motivate their approach with an analogy to tiling in the\\nhuman retina. Empirically, this approach is motivated by recent successes \\nusing attention in deep learning.\\n\\nWhile the approach described in this paper is sensible, there is a major\\nproblem in that it is missing basic evaluation on their MNIST\\ntask. Even if the authors do not compare to previous methods, they should\\nreport some basic heldout validation/test performance. On the more qualitative\\nside, the example given seems quite compelling, but the reader is left to guess\\nif only a few examples look like this (a basic overlap metric would help). The\\nanalogy to the human retina is certainly interesting, but it is unclear how\\nthis somewhat toyish model could \\\"discover the optimal tiling of retina .. \\nin a data driven manner\\\".\", \"other\": \"It would be sensible to cite the following papers for attention\\n Gregor et al. DRAW: A Recurrent Neural Network For Image Generation, 2015\\n Ba et al. Multiple Object Recognition with Visual Attention, 2015\\n\\nSome of the equations (4)-(10) seem to be missing temporal indicies.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Interesting, but weak evaluation\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"This paper proposes a visual attention model for images. Attention is modeled as a grid of gaussian filters as in the DRAW model. In addition to the gaussian parameters, the authors also propose to learn the grid layout by assigning learnable offset parameters to the filters.\\n\\nWhile the proposed contribution is interesting, its empirical evaluation is rather weak. Authors should report validation/test errors in addition to the training error to see how their approach generalizes to unseen examples. In addition, authors should compare their contribution with a proper baseline such as DRAW (that uses an uniform tiling of the input) in order to assess the benefit of learning the grid layout. They should also compare previous attention approaches such as Recurrent Models of Visual Attention (Mnih et al.) 2014 or Dynamic Capacity Network (Almahairi et al.) 2016. 
Finally, it would be interesting to try the approach on a more realistic dataset such as SVHN.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Response to reviewer ICLR 2016 workshop paper 163 reviewer 10\", \"comment\": \"We thank the reviewer for their helpful comments.\\n\\n> While the proposed contribution is interesting, its empirical evaluation is rather weak. Authors should report validation/test errors in addition to the training error to see how their approach generalizes to unseen examples.\\n\\nHere are the performance metrics for classification error on the test set (choosing best model on a validation set):\", \"translation_and_scaling\": \"7.3%\", \"translation_only\": \"8.8%\\nTranslation Only (without learnable mean and variance of filters): 16.9%\\nTranslation and Scaling (without learnable mean and variance of filters): 6.1%\\n\\n> In addition, authors should compare their contribution with a proper baseline such as DRAW (that uses an uniform tiling of the input) in order to assess the benefit of learning the grid layout.\\n\\nThe main goal of our paper is to demonstrate the emergent tiling properties given limitations on the abilities of the attention window and how a network learns to utilize this window. Rather than demonstrate improved classification performance, our goal with using a learnable tiling is to investigate how a neural network adapts to the imposed constraints on its retina.\\n\\nNote that with scaling and without learnable mean and variance of the filters, our model has an equivalent attention window to that proposed in DRAW. \\n\\n> Finally, it would be interesting to try the approach on a more realistic dataset such as SVHN.\\n\\nWe agree and have begun working on more complex datasets and tasks. In particular, we are investigating corresponding attributes in the dataset and task which lead to certain emergent properties in the attention window.\"}", "{\"title\": \"Response to reviewer ICLR 2016 workshop paper 163 reviewer 11\", \"comment\": \"We thank the reviewer for their helpful comments.\\n\\n> While the approach described in this paper is sensible, there is a major\\nproblem in that it is missing basic evaluation on their MNIST\\ntask.\\n\\nDue to space constraints, we did not include the quantitative evaluation of the performance on the MNIST dataset task. But here are the performance metrics for classification error on the test set (choosing best model on a validation set):\", \"translation_and_scaling\": \"7.3%\", \"translation_only\": \"8.8%\\nTranslation Only (without learnable mean and variance of filters): 16.9%\\nTranslation and Scaling (without learnable mean and variance of filters): 6.1%\\n\\nRather than demonstrate performance on specific toy tasks, the main goal of our paper is to demonstrate the emergent tiling properties given constraints on the abilities of the attention window and how a network learns to utilize this window. \\n\\nBut it is worth nothing that the above results show that the scaling feature consistently improves performance. This enables the glimpse generator to easily output a scale invariant representation to the recurrent network which improves classification performance. We will include these additional in the long version of the paper.\\n\\n> but the reader is left to guess\\nif only a few examples look like this (a basic overlap metric would help). 
The\\nanalogy to the human retina is certainly interesting, but it is unclear how\\nthis somewhat toyish model could \\\"discover the optimal tiling of retina .. \\nin a data driven manner\\\". \\n\\nWe have begun experimenting with more complex tasks to determine whether the high acuity region varies when there is more diversity in the size of the MNIST digit and whether it is dependent on task (classification/tracking/multi-digit search).\\n\\n> It would be sensible to cite the following papers for attention\\n Gregor et al. DRAW: A Recurrent Neural Network For Image Generation, 2015\\n Ba et al. Multiple Object Recognition with Visual Attention, 2015\\n\\nWe have included Ba et. al. 2015 in our citations. Gregor et. al. 2015 was already included in the original version.\\n\\n> Some of the equations (4)-(10) seem to be missing temporal indicies.\\n\\nWe have updated our current draft to include the temporal indices (they were originally left out to reduce clutter).\"}", "{\"title\": \"Interesting result on learning the tiling of units in a retina\", \"rating\": \"7: Good paper, accept\", \"review\": \"This paper proposes a neural network attention model that learns the tiling of the units in the retina. The model is a relatively minor variation on the attention mechanism from the DRAW model of Gregor et al. so the main novelty is in the experiments. The result showing that the model learns a layout with a high resolution fovea and a low resolution peripheral region only if it is not allowed to zoom in and out is very interesting.\", \"minor_comments\": [\"Paragraph 2 on page 1 refers to Figure 2 instead of Figure 1.\", \"Equations 4-10 would be clearer if they showed dependence on both m and n.\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
XL9vPjMAjuXB8D1RUG6L
End to end speech recognition in English and Mandarin
[ "Dario Amodei", "Rishita Anubhai", "Eric Battenberg", "Carl Case", "Jared Casper", "Bryan Catanzaro", "Jingdong Chen", "Mike Chrzanowski", "Adam Coates", "Greg Diamos", "Erich Elsen", "Jesse Engel", "Linxi Fan", "Christopher Fougner", "Tony Han", "Awni Hannun", "Billy Jun", "Patrick LeGresley", "Libby Lin", "Sharan Narang", "Andrew Ng", "Sherjil Ozair", "Ryan Prenger", "Jonathan Raiman", "Sanjeev Satheesh", "David Seetapun", "Shubho Sengupta", "Yi Wang", "Zhiqian Wang", "Chong Wang", "Bo Xiao", "Dani Yogatama", "Jun Zhan", "Zhenyao Zhu" ]
We show that an end-to-end deep learning approach can be used to recognize either English or Mandarin Chinese speech–two vastly different languages. Because it replaces entire pipelines of hand-engineered components with neural networks, end-to-end learning allows us to handle a diverse variety of speech including noisy environments, accents and different languages. Key to our approach is our application of HPC techniques, enabling experiments that previously took weeks to now run in days. This allows us to iterate more quickly to identify superior architectures and algorithms. As a result, in several cases, our system is competitive with the transcription of human workers when benchmarked on standard datasets. Finally, using a technique called Batch Dispatch with GPUs in the data center, we show that our system can be inexpensively deployed in an online setting, delivering low latency when serving users at scale
[ "english", "speech recognition", "different languages", "system", "end", "mandarin end", "mandarin", "deep learning", "mandarin chinese", "entire pipelines" ]
https://openreview.net/pdf?id=XL9vPjMAjuXB8D1RUG6L
https://openreview.net/forum?id=XL9vPjMAjuXB8D1RUG6L
ICLR.cc/2016/workshop
2016
{ "note_id": [ "P7VnXM8DouKvjNORtJrW", "p8j4xP6nEcnQVOGWfpJy", "oVgMjALo6UrlgPMRsBor" ], "note_type": [ "review", "review", "review" ], "note_created": [ 1457691564457, 1457666345605, 1457717911377 ], "note_signatures": [ [ "ICLR.cc/2016/workshop/paper/81/reviewer/10" ], [ "~Tara_N_Sainath1" ], [ "ICLR.cc/2016/workshop/paper/81/reviewer/11" ] ], "structured_content_str": [ "{\"title\": \"Large-scale exploration of new and old tip and tricks for RNN-based speech recognition\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"This paper reviews the authors' experience in building large-scale character-based CTC RNN speech recognition systems for English and Mandarin, combining several existing optimization and robustness tricks and a couple of new variants. Some practitioners are likely to find this paper useful as an addition to the growing understanding of what works and what doesn't. On the other hand, the paper is lacking in comparisons and is unclear at times, and some of the results seem questionable.\", \"quality\": \"It is probably technically correct but I have some reservations -- see below.\", \"clarity\": [\"Could be much clearer. For example:\", \"What is really meant by end-to-end?\", \"The input acoustic features are not totally clear. Is 20ms the size of the analysis window, the frame skip, or both? By \\\"spectrogram\\\" do you really mean \\\"spectrum\\\"? If \\\"spectrogram\\\", then more info is needed on the spectrogram parameters.\", \"Fully define batch normalization. What is the function B(.)?\"], \"significance\": \"Some practitioners are likely to find this paper useful as an addition to the growing understanding of what works and what doesn't. However, since there is little analysis, this limits the usefulness since it is not clear why certain techniques worked better for the authors than for others in prior work, and vice versa.\", \"pros\": [\"It is useful to see how a variety of common techniques are affected by variables such as data set size and noisy vs. clean test data, on a larger scale than is typical in ASR papers.\", \"The speed section provides a useful data point as more groups try to scale up their systems.\"], \"cons\": [\"There are multiple result tables on different data sets that are not comparable to each other. It would be much more helpful to keep the data sets the same across tables, and to include more of the standard benchmarks, in particular the commonly used Switchboard.\", \"The paper needs a clearer presentation of what exactly is novel vs. not (sequence normalization? SortaGrad?), and which conclusions are similar to vs. different from what's been found before (and, when different, why).\", \"The human WERs are surprisingly high; e.g. prior work has reported human WERs of around 1% for WSJ (see Lippmann, Speech Communication 1997). How many total turkers were used? How was their quality ensured? How was the label error measured?\", \"The paper describes what worked and what didn't, but the usefulness of the results is limited without a bit more analysis as to why. For example, the authors report that they did not have success with delaying the output as done in prior work, and it is not clear why.\", \"Citations are missing at times. For example Sec. 3.3 should cite prior work on 2D convolution for speech (e.g. Abdel-hamid et al. 
Interspeech 2013, Toth ICASSP 2014).\"], \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"This paper describes an end-to-end CTC system for English and Mandarin\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"Quality:\\nThis paper is interesting, but novelty is a bit lacking and many of the experiments and claims are vague.\", \"clarity\": \"Paper is unclear to read, particularly the results which are all quoted on different test sets. In addition, the comparison to Human performance seems really biased.\", \"originality\": \"This work is very similar to H. Sak's CTC papers but now predicts characters. Many of the speedup ideas have also been tried in the literature as well. To me, this is more of an engineering paper that puts together different components already explored in the literature individually. In addition, there are some References are missing which i've noted below.\", \"significance_of_work\": \"Interesting approach for CTC with LVCSR task, putting together many different research ideas into one unified paper\", \"pros\": [\"Interesting approach for CTC with LVCSR task, putting together many different research ideas into one unified paper\"], \"cons\": \"* A lot of references in the paper are missing\\n a) for speeding up training with GPUs - cite Frank Seide's Interspeech 2014 paper, Amazon has a paper at Interspeech 2015\\n b) Hasim Sak's ICASSP 2015 paper should be cited on the first page when you say end-to-end since your work is very similar to this\\n c) H. Sak's Interspeech 2015 paper also does data augmentation and should be cited\\n* Many vague points in the paper:\\n a) Notion of end-to-end is unclear? Why is your method end-to-end if you are still using a separate acoustic and language model? To me this paper is just CTC predicting characters\\n b) Given intuition as to why sequence-wise norm is better than regular batch norm\\n c) The tables in the paper cannot be compared because they are all on different train/test sets, making things really confusing. Why are the numbers in Table 1 and 2 on different training sets\\n d) Why was it difficult to introduce a delay in emitting the label your system (like Sak 2015)\\n e) Your Human performance is based on two workers transcribing, this seems extremely biased\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"End-to-end speech recognition in English and Mandarin\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"Significance\\nA useful paper on building end-to-end large scale speech recognition system for English and Mandarin. Describes many techniques and ideas in one place.\\n\\nClarity\\nThe paper assumes significant prior background knowledge on building end-to-end speech recognition system especially with RNNs, CTC training etc.\\n\\nNovelty\\nThere exists a very similar paper on arXiv.org - http://arxiv.org/abs/1512.02595 by the same set of authors. This prior art is not cited but describes almost identical techniques and experiments used in this paper. What is confusing to a reader who has read the prior work are the discrepancies in experimental results - sometimes the new results are better while sometimes they are worse. 
It is not clear what new techniques are being introduced in this paper and why the results differ.\\n\\nPros\\nUseful paper describing several techniques and ideas for end-to-end speech recognition.\\n\\nCons\\nLacks novelty with respect to earlier work of the authors and is missing a lot of citations. Assumes a lot of prior background.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }