Dataset schema (per-note fields, with type and observed length/value ranges):

forum_id         stringlengths    (8 to 20)
forum_title      stringlengths    (4 to 171)
forum_authors    sequencelengths  (0 to 25)
forum_abstract   stringlengths    (4 to 4.27k)
forum_keywords   sequencelengths  (1 to 10)
forum_pdf_url    stringlengths    (38 to 50)
forum_url        stringlengths    (40 to 52)
note_id          stringlengths    (8 to 13)
note_type        stringclasses    (6 values)
note_created     int64            (1,360B to 1,736B)
note_replyto     stringlengths    (8 to 20)
note_readers     sequencelengths  (1 to 5)
note_signatures  sequencelengths  (1 to 1)
venue            stringclasses    (26 values)
year             stringclasses    (11 values)
note_text        stringlengths    (10 to 16.6k)
Fav_FXoOhRFOQ
Do Deep Nets Really Need to be Deep?
[ "Jimmy Lei Ba", "Rich Caurana" ]
Currently, deep neural networks are the state of the art on problems such as speech recognition and computer vision. In this extended abstract, we show that shallow feed-forward networks can learn the complex functions previously learned by deep nets and achieve accuracies previously only achievable with deep models. Moreover, the shallow neural nets can learn these deep functions using a total number of parameters similar to the original deep model. We evaluate our method on TIMIT phoneme recognition task and are able to train shallow fully-connected nets that perform similarly to complex, well-engineered, deep convolutional architectures. Our success in training shallow neural nets to mimic deeper models suggests that there probably exist better algorithms for training shallow feed-forward nets than those currently available.
[ "deep nets", "shallow", "deep", "shallow neural nets", "nets", "deep neural networks", "state", "art", "problems", "speech recognition" ]
https://openreview.net/pdf?id=Fav_FXoOhRFOQ
https://openreview.net/forum?id=Fav_FXoOhRFOQ
AH9vZzWqgrHV-
review
1,388,926,500,000
Fav_FXoOhRFOQ
[ "everyone" ]
[ "David Krueger" ]
ICLR.cc/2014/workshop
2014
review: Interesting paper. My comments: Abstract: 'Moreover, the shallow neural nets can learn these deep functions using a total number of parameters similar to the original deep model.' - this does not appear to be true for the CNN model. 4. The last sentence of the first paragraph is missing a 'to' at the end of the line: 'models TO prevent overfitting'. 6. The 'It is challenging to...' sentence needs work. 7. 'insertion penalty' and 'language model weighting' could use definitions or references. figure 1 -> table 1. 7.1 The first claim (also made in the abstract) is not supported by the table for the SNN mimicking the CNN. It appears that ~15x as many parameters were needed to achieve the same level of performance. The last sentence of the first paragraph seems to acknowledge this... The second paragraph should, I think, be clarified. How are you increasing performance of the deep networks? What experiments did you perform that led to this conclusion? 8. The last sentence does not seem supported to me. Your results as presented only achieve the same level of performance as previous results, and in order to achieve this level of performance, it would be necessary to use their training methods first so that your SNNs have something to mimic, correct?
Fav_FXoOhRFOQ
Do Deep Nets Really Need to be Deep?
[ "Jimmy Lei Ba", "Rich Caurana" ]
Currently, deep neural networks are the state of the art on problems such as speech recognition and computer vision. In this extended abstract, we show that shallow feed-forward networks can learn the complex functions previously learned by deep nets and achieve accuracies previously only achievable with deep models. Moreover, the shallow neural nets can learn these deep functions using a total number of parameters similar to the original deep model. We evaluate our method on TIMIT phoneme recognition task and are able to train shallow fully-connected nets that perform similarly to complex, well-engineered, deep convolutional architectures. Our success in training shallow neural nets to mimic deeper models suggests that there probably exist better algorithms for training shallow feed-forward nets than those currently available.
[ "deep nets", "shallow", "deep", "shallow neural nets", "nets", "deep neural networks", "state", "art", "problems", "speech recognition" ]
https://openreview.net/pdf?id=Fav_FXoOhRFOQ
https://openreview.net/forum?id=Fav_FXoOhRFOQ
v7XEhIcAFPvAa
comment
1,389,354,780,000
XX5Iws7jGn6gb
[ "everyone" ]
[ "Jimmy Ba" ]
ICLR.cc/2014/workshop
2014
reply: Yoshua, thank you for your comments. We believe you may have read an older draft and hope that most or all of the misleading statements were corrected in the Jan 3 draft. Nonetheless, many of your comments still apply to the current paper. We completely agree that generality would be improved with results on additional datasets. We submitted a workshop abstract instead of full paper because we only had results for one data set, and are about to run experiments on two other datasets. With TIMIT we did not use more training data to train the shallow models than was used to train the deep models. We used exactly the same 1.1M training cases used to train the DNN and CNN models to train the SNN mimic model. The only difference is that the mimic SNN does not see the original labels. Instead, it sees the real-valued probabilities predicted by the DNN or CNN it is trying to mimic. In general, model compression works best when a large unlabelled data set is available to be labeled by the “smart” model so that the smaller mimic model can be trained “hard” with less chance of overfitting. But for TIMIT unlabelled data was not available so we used the same data used to train the deep models for compression (mimic) training. We believe that the fact that no extra data --- labeled or unlabelled --- was used to train the SNN models helps drive home the point that it may be possible to train shallow models to be as accurate as deep models. We agree with your comment that “The paper makes it sound as if we could find a better way to train shallow nets in order to get results as good as deep nets, as if it was just an optimization issue.”, except that we view it more perhaps as an issue of regularization than of just optimization. In particular, we agree that depth, when combined with current learning and regularization methods such as dropout, is providing a prior that aids generalization, but are not sure that a similar effect could not be achieved using a different learning algorithm and regularization scheme to train a shallow net on the original data. In some sense we’re making a black-box argument: we already have a procedure that given a training set, yields a shallow net that has accuracy comparable to a deep fully-connected feedforward net trained on the same data. If we hadn’t shown you what the learning algorithm was in our black box would you have been 100% sure that the wizard behind the curtain must have been deep learning? The real question is whether the black box *must* go through the intermediate step of training a deep model to mimic, or whether there exist other learning and regularization procedures that could achieve the same result without going through the deep intermediary. We do not (yet) know the answer to this question, but it is interesting that a shallow model can be trained that is as accurate as a deep model without access to any additional data. We certainly agree that it is difficult to train large, shallow nets on the original targets with the learning procedures currently available. We agree that looking at training errors can be informative, but they might not resolve the issue in this case. 
If model compression has access to a very large unlabelled data set, and if the mimic model has sufficient capacity to represent the deep model, then the shallow model will learn to be a high-fidelity mimic of the deep model and will make the same predictions, and the error of the shallow mimic model and the deep model on train and test data will become identical as the error of the mimic predictions relative to the deep model is driven to zero. This is the ideal case, where we have access to a very large unlabelled data set, which unfortunately we did not have for TIMIT. Exactly what training errors do you want to see: the error of the DNN on the original training data vs. the error of the SNN trained to mimic the DNN on the real-valued targets, but measured on the original labels of the training points, or vs. the error of an SNN trained on the original data and labels? Early stopping was used when training the deep models, but was not used when training the mimic SNN models. In fact we find it very difficult to make the SNN mimic model overfit when trained with L2 loss on continuous targets. Thanks for the pointers to other papers we should have cited. We’re happy to add them to the abstract. And thanks again for the careful read of our abstract. Sorry you had to struggle through the 1st draft.
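To make the mimic-training recipe discussed in this exchange concrete, the sketch below regresses a shallow student network onto the real-valued outputs of an already-trained deep teacher with an L2 loss, never consulting the original labels. This is a minimal illustration only (written in PyTorch for brevity); the layer sizes, the feature/output dimensions, the optimizer, and the `teacher`/`student` stand-ins are assumptions, not the authors' actual models or code.

```python
import torch
import torch.nn as nn

# Stand-ins: a deep teacher (assumed already trained) and a single-hidden-layer student.
teacher = nn.Sequential(nn.Linear(440, 2000), nn.ReLU(),
                        nn.Linear(2000, 2000), nn.ReLU(),
                        nn.Linear(2000, 2000), nn.ReLU(),
                        nn.Linear(2000, 183))          # placeholder for the trained DNN/CNN
student = nn.Sequential(nn.Linear(440, 8000), nn.ReLU(),
                        nn.Linear(8000, 183))          # shallow mimic net (SNN)

opt = torch.optim.SGD(student.parameters(), lr=1e-3, momentum=0.9)
mse = nn.MSELoss()

def mimic_step(x):
    """One compression step: match the teacher's real-valued outputs, not the labels."""
    with torch.no_grad():
        targets = teacher(x)           # continuous scores from the deep model
    loss = mse(student(x), targets)    # L2 regression onto continuous targets
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# usage: x is a minibatch of input feature vectors (labels are never used here)
loss = mimic_step(torch.randn(128, 440))
```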
Fav_FXoOhRFOQ
Do Deep Nets Really Need to be Deep?
[ "Jimmy Lei Ba", "Rich Caurana" ]
Currently, deep neural networks are the state of the art on problems such as speech recognition and computer vision. In this extended abstract, we show that shallow feed-forward networks can learn the complex functions previously learned by deep nets and achieve accuracies previously only achievable with deep models. Moreover, the shallow neural nets can learn these deep functions using a total number of parameters similar to the original deep model. We evaluate our method on TIMIT phoneme recognition task and are able to train shallow fully-connected nets that perform similarly to complex, well-engineered, deep convolutional architectures. Our success in training shallow neural nets to mimic deeper models suggests that there probably exist better algorithms for training shallow feed-forward nets than those currently available.
[ "deep nets", "shallow", "deep", "shallow neural nets", "nets", "deep neural networks", "state", "art", "problems", "speech recognition" ]
https://openreview.net/pdf?id=Fav_FXoOhRFOQ
https://openreview.net/forum?id=Fav_FXoOhRFOQ
XX5Iws7jGn6gb
review
1,389,118,620,000
Fav_FXoOhRFOQ
[ "everyone" ]
[ "Yoshua Bengio" ]
ICLR.cc/2014/workshop
2014
review: This paper asks interesting questions and has interesting experimental results. The generality of the results could be improved by considering more than one dataset, though. You might want to first fix a typo in Rich's name... I concur with David Krueger regarding the somewhat misleading statements in the abstract and introduction etc regarding the matching of depth with width (and a LOT more training examples), which does not apply in the case of a convolutional net. This really needs to be fixed. My take on the results is however quite different from the conclusions given in the paper. The paper makes it sound as if we could find a better way to train shallow nets in order to get results as good as deep nets, as if it was just an optimization issue. My interpretation is quite different. The results seem more consistent with the interpretation that the depth (and convolutions) provide a PRIOR that helps GENERALIZING better. This is consistent with the fact that a much wider network is necessary in the convolutional case, and that in both cases you need to complement the shallow net's training set with the fake/mimic examples (derived from observing the outputs of the deep net on unlabeled examples) in order to match the performance of a deep net. I believe that my hypothesis could be disentangled from the one stated in the paper (which seems to say that it is a training or optimization issue) by looking at training error. According to my hypothesis, the shallow net's training error (without the added fake / mimic examples) should not be significantly worse than that of the deep net (at comparable number of parameters). According to the 'training' hypothesis that the authors seem to state, one would expect training error to be measurably lower for deep nets. In fact, for other reasons I would expect the deep net's training error to be worse (this would be consistent with previous results, starting with my paper with Dumitru Erhan et al in JMLR in 2010). It would be great to report those training errors. Note that to be fair, you have to report training error with no early stopping, continuing training for a fixed and large number of epochs (the same in both cases) with the best learning rate you could find (separately for each type of network). Finally, the fact that even shallow nets (especially wide ones) can be hard to train (see Yann Dauphin's ICLR 2013 workshop-track paper) also weakens the hope that we could get around the difficulty of training deep nets by better training shallow nets. Several more papers need to be cited and discussed. Besides my JMLR 2010 paper with Dumitru Erhan et al (Why Does Unsupervised Pre-training Help Deep Learning), another good datapoint regarding the questions raised here is the paper on Understanding Deep Architectures using a Recursive Convolutional Network, by Eigen, Rolfe & LeCun, submitted to this ICLR 2014 conference. Whereas my JMLR paper is about understanding the advantages of depth as a regularizer, this more recent paper tries to tease apart various architectural factors (including depth) influencing performance, especially for convolutional nets.
Fav_FXoOhRFOQ
Do Deep Nets Really Need to be Deep?
[ "Jimmy Lei Ba", "Rich Caurana" ]
Currently, deep neural networks are the state of the art on problems such as speech recognition and computer vision. In this extended abstract, we show that shallow feed-forward networks can learn the complex functions previously learned by deep nets and achieve accuracies previously only achievable with deep models. Moreover, the shallow neural nets can learn these deep functions using a total number of parameters similar to the original deep model. We evaluate our method on TIMIT phoneme recognition task and are able to train shallow fully-connected nets that perform similarly to complex, well-engineered, deep convolutional architectures. Our success in training shallow neural nets to mimic deeper models suggests that there probably exist better algorithms for training shallow feed-forward nets than those currently available.
[ "deep nets", "shallow", "deep", "shallow neural nets", "nets", "deep neural networks", "state", "art", "problems", "speech recognition" ]
https://openreview.net/pdf?id=Fav_FXoOhRFOQ
https://openreview.net/forum?id=Fav_FXoOhRFOQ
yxJGyrO9Y1LFo
review
1,391,460,360,000
Fav_FXoOhRFOQ
[ "everyone" ]
[ "anonymous reviewer a881" ]
ICLR.cc/2014/workshop
2014
title: review of Do Deep Nets Really Need to be Deep? review: An interesting workshop paper. For such a provocative title, more results are needed to support the conclusions. Part of the resurgent success of neural networks for acoustic modeling is due to making the networks “deeper” with many hidden layers (see F. Seide, G. Li, and D. Yu, 'Conversational Speech Transcription Using Context-Dependent Deep Neural Networks', ICASSP 2011, which shows that shallow networks perform worse than deep ones for the same # of parameters). This paper provides a different data point, in which a shallow network trained with the authors’ “MIMIC” technique performs as well as a deep network baseline on the TIMIT phone recognition task. The MIMIC technique involves using unsupervised soft labels from an ensemble of deep nets of unknown size and quality, including a linear layer of unknown size, and training on the un-normalized log probabilities rather than the softmax output. The impact of each of these aspects on its own is not investigated; perhaps a deep neural network would gain from some or all of these MIMIC training steps as well.
-j1Hj5YWwrj_f
Generic Deep Networks with Wavelet Scattering
[ "Edouard Oyallon", "Stéphane Mallat", "Laurent Sifre" ]
We introduce a two-layer wavelet scattering network, which involves no learning, for object classification. This scattering transform computes a spatial wavelet transform on the first layer and a joint wavelet transform along spatial, angular and scale variables in the second layer. Image classification results are given on Caltech databases.
[ "wavelet", "generic deep networks", "network", "learning", "object classification", "transform", "spatial wavelet transform", "first layer", "joint wavelet transform", "spatial" ]
https://openreview.net/pdf?id=-j1Hj5YWwrj_f
https://openreview.net/forum?id=-j1Hj5YWwrj_f
IIb_NA8kBPNo2
review
1,391,404,140,000
-j1Hj5YWwrj_f
[ "everyone" ]
[ "anonymous reviewer 6006" ]
ICLR.cc/2014/workshop
2014
title: review of Generic Deep Networks with Wavelet Scattering review: * A brief summary of the paper's contributions, in the context of prior work. The paper describes a feature-extraction experiment with wavelets applied to the Caltech datasets. * An assessment of novelty and quality. This is just a single numerical result. It gives some insight, but there is no novelty. pros: It is good that someone has run an experiment showing how powerful wavelets are on a bigger dataset. cons: Very little added value; small amount of content.
-j1Hj5YWwrj_f
Generic Deep Networks with Wavelet Scattering
[ "Edouard Oyallon", "Stéphane Mallat", "Laurent Sifre" ]
We introduce a two-layer wavelet scattering network, which involves no learning, for object classification. This scattering transform computes a spatial wavelet transform on the first layer and a joint wavelet transform along spatial, angular and scale variables in the second layer. Image classification results are given on Caltech databases.
[ "wavelet", "generic deep networks", "network", "learning", "object classification", "transform", "spatial wavelet transform", "first layer", "joint wavelet transform", "spatial" ]
https://openreview.net/pdf?id=-j1Hj5YWwrj_f
https://openreview.net/forum?id=-j1Hj5YWwrj_f
44UMwEGyamZ3Q
review
1,391,907,000,000
-j1Hj5YWwrj_f
[ "everyone" ]
[ "anonymous reviewer 06bb" ]
ICLR.cc/2014/workshop
2014
title: review of Generic Deep Networks with Wavelet Scattering review: I am not very familiar with scattering transform work, so I cannot judge the novelty of using 2 layers of wavelet transforms for classification. However, the results are impressive in that they use no learning and still beat the best ImageNet-pretrained convolutional network on Caltech 101 when using 1 or 2 layers. It does not, however, on Caltech 256, and some insight into why that might be would have been nice. pros: - good results with a small number of layers cons: - no experiment with more layers; does it degrade drastically beyond 2 layers?
-j1Hj5YWwrj_f
Generic Deep Networks with Wavelet Scattering
[ "Edouard Oyallon", "Stéphane Mallat", "Laurent Sifre" ]
We introduce a two-layer wavelet scattering network, which involves no learning, for object classification. This scattering transform computes a spatial wavelet transform on the first layer and a joint wavelet transform along spatial, angular and scale variables in the second layer. Image classification results are given on Caltech databases.
[ "wavelet", "generic deep networks", "network", "learning", "object classification", "transform", "spatial wavelet transform", "first layer", "joint wavelet transform", "spatial" ]
https://openreview.net/pdf?id=-j1Hj5YWwrj_f
https://openreview.net/forum?id=-j1Hj5YWwrj_f
JzqmzHoBmuCj_
comment
1,392,860,400,000
44UMwEGyamZ3Q
[ "everyone" ]
[ "Edouard Oyallon" ]
ICLR.cc/2014/workshop
2014
reply: Dear reviewer, We would like to thank you for your helpful comments. Two-layer wavelet transforms had previously been used for MNIST digit and texture classification, but never on complex databases such as Caltech. Concerning the compared performance on Caltech 101 and Caltech 256, we are currently redoing the experiments on Caltech 256 to understand the loss of performance. It appears to be due to the choice of Haar wavelets along rotations, which seems to have impaired performance. So far, we have observed that using a third layer with wavelet convolutions does not improve results relative to two layers (but it does not degrade them either). We believe that beyond the second layer we will need to learn the deep network filters, but this still needs to be fully checked. These comments have been added to the abstract.
-j1Hj5YWwrj_f
Generic Deep Networks with Wavelet Scattering
[ "Edouard Oyallon", "Stéphane Mallat", "Laurent Sifre" ]
We introduce a two-layer wavelet scattering network, which involves no learning, for object classification. This scattering transform computes a spatial wavelet transform on the first layer and a joint wavelet transform along spatial, angular and scale variables in the second layer. Image classification results are given on Caltech databases.
[ "wavelet", "generic deep networks", "network", "learning", "object classification", "transform", "spatial wavelet transform", "first layer", "joint wavelet transform", "spatial" ]
https://openreview.net/pdf?id=-j1Hj5YWwrj_f
https://openreview.net/forum?id=-j1Hj5YWwrj_f
3fuqf0Yvez3Ry
comment
1,392,824,580,000
IIb_NA8kBPNo2
[ "everyone" ]
[ "Edouard Oyallon" ]
ICLR.cc/2014/workshop
2014
reply: Dear reviewer, We would like to thank you for your review and will try to clarify some points. Deep networks are currently very powerful, but there is a lack of understanding of the type of processing they implement. The novelty of the paper is to show that, to reach 67%, there is no need to learn the deep network weights, which can be taken to be wavelets along spatial, rotation, and scaling variables. It is the first time that such results hold for a relatively complex image database. All filters are separable and thus have a fast implementation. Learning can then be applied to improve these predefined filters or to add more layers to the network. Understanding the processing performed by deep networks and how to simplify it is currently a very important challenge. It was a surprise for us to see that even complex image databases such as Caltech can be tackled with predefined wavelet filters, and matching the performance of the two-layer ImageNet filters on Caltech is not an easy task. We believe that the results open the possibility of further important simplifications of these networks and their filters.
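As a rough picture of the cascade under discussion (a wavelet modulus, a second wavelet modulus, then local averaging), here is a simplified spatial-only sketch. It uses generic Gabor filters, keeps all second-order paths, and omits the joint angular/scale wavelets and the separable fast implementation described above, so it is an approximation of the idea rather than the paper's transform; the filter bank and smoothing scale are illustrative choices.

```python
import numpy as np
from scipy.signal import fftconvolve
from scipy.ndimage import gaussian_filter
from skimage.filters import gabor_kernel

def wavelet_bank(n_orientations=8, frequencies=(0.25, 0.125)):
    """Complex Gabor (Morlet-like) filters over a few orientations and scales."""
    return [gabor_kernel(f, theta=np.pi * k / n_orientations)
            for f in frequencies for k in range(n_orientations)]

def scattering_2layer(image, sigma=4.0):
    """Simplified scattering: |x * psi1|, then ||x * psi1| * psi2|, each low-pass averaged."""
    psis = wavelet_bank()
    feats = [gaussian_filter(image, sigma)]                  # zeroth order: local average
    first = []
    for psi in psis:
        u1 = np.abs(fftconvolve(image, psi, mode='same'))    # first-order modulus
        first.append(u1)
        feats.append(gaussian_filter(u1, sigma))
    for u1 in first:
        for psi in psis:
            u2 = np.abs(fftconvolve(u1, psi, mode='same'))   # second-order modulus
            feats.append(gaussian_filter(u2, sigma))
    return np.stack(feats)    # feature maps that would feed a linear classifier

# usage on a random grayscale image
coeffs = scattering_2layer(np.random.rand(64, 64))
```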
AKIW-22FWrKkR
End-to-End Text Recognition with Hybrid HMM Maxout Models
[ "Ouais Alsharif", "Joelle Pineau" ]
The problem of detecting and recognizing text in natural scenes has proved to be more challenging than its counterpart in documents, with most of the previous work focusing on a single part of the problem. In this work, we propose new solutions to the character and word recognition problems and then show how to combine these solutions in an end-to-end text-recognition system. We do so by leveraging the recently introduced Maxout networks along with hybrid HMM models that have proven useful for voice recognition. Using these elements, we build a tunable and highly accurate recognition system that beats state-of-the-art results on all the sub-problems for both the ICDAR 2003 and SVT benchmark datasets.
[ "text recognition", "hybrid hmm", "models", "problem", "detecting", "text", "natural scenes", "challenging", "counterpart", "documents" ]
https://openreview.net/pdf?id=AKIW-22FWrKkR
https://openreview.net/forum?id=AKIW-22FWrKkR
zfs12lBVTLwEF
review
1,391,539,560,000
AKIW-22FWrKkR
[ "everyone" ]
[ "anonymous reviewer f488" ]
ICLR.cc/2014/workshop
2014
title: review of End-to-End Text Recognition with Hybrid HMM Maxout Models review: The authors present a complete hybrid system for recognizing characters and words from real-world natural scenes. The key idea is to divide the problem — word-to-character segmentation, character classification, and segmentation correction — among three convolutional neural networks with maxout non-linearities. The word-to-character model additionally uses an HMM to better deal with the sequential aspect of character segmentations across words. I think the community can benefit from this work as it shows some interesting experiments and, what is even more important, does so in the context of a complete system. The experimental section is sufficient and explores various sub-problems the potential reader could be interested in, for example, the use and impact of the lexicon and of different language models on the final system accuracy. It is also important that the presented solution gives state-of-the-art accuracy. To summarize, I think the authors put a lot of effort into this work and present some nice experimental results. Suggestions for improvements: - I would appreciate an implicit distinction between likelihoods and probabilities, for example, by p and P respectively. That applies to all paper content starting from equation 1 and including occasional in-text references to certain probabilistic quantities. - (thing to consider) Section 3.4: argmax_Q p(Q|O) - isn't the HMM producing argmax_Q p(O|Q), i.e. the likelihood of the state sequence Q producing the observation sequence O? If you agree with this comment, you also need to fix it in the last paragraph of sec 5.1.
AKIW-22FWrKkR
End-to-End Text Recognition with Hybrid HMM Maxout Models
[ "Ouais Alsharif", "Joelle Pineau" ]
The problem of detecting and recognizing text in natural scenes has proved to be more challenging than its counterpart in documents, with most of the previous work focusing on a single part of the problem. In this work, we propose new solutions to the character and word recognition problems and then show how to combine these solutions in an end-to-end text-recognition system. We do so by leveraging the recently introduced Maxout networks along with hybrid HMM models that have proven useful for voice recognition. Using these elements, we build a tunable and highly accurate recognition system that beats state-of-the-art results on all the sub-problems for both the ICDAR 2003 and SVT benchmark datasets.
[ "text recognition", "hybrid hmm", "models", "problem", "detecting", "text", "natural scenes", "challenging", "counterpart", "documents" ]
https://openreview.net/pdf?id=AKIW-22FWrKkR
https://openreview.net/forum?id=AKIW-22FWrKkR
nn0rTlhW6_n3U
comment
1,391,807,820,000
zfs12lBVTLwEF
[ "everyone" ]
[ "Ouais Alsharif" ]
ICLR.cc/2014/workshop
2014
reply: Thank you for these comments :) - I think the first point is a valid point. We should factor this in. - As for the second point, the HMM actually produces argmax_Q p(Q|O) through the Viterbi algorithm. This is elucidated further in equation 30 of Rabiner's tutorial: http://www.cs.ubc.ca/~murphyk/Bayes/rabiner.pdf Thank you
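For readers following this exchange: with the observation sequence O fixed at decoding time, the two formulations pick out the same state sequence, since Bayes' rule gives

$$\hat{Q} \;=\; \arg\max_{Q}\, p(Q \mid O) \;=\; \arg\max_{Q}\, \frac{p(O \mid Q)\,p(Q)}{p(O)} \;=\; \arg\max_{Q}\, p(O \mid Q)\,p(Q),$$

because $p(O)$ does not depend on $Q$; Viterbi decoding computes this maximising sequence. This is the standard HMM identity (cf. Rabiner's tutorial) rather than anything specific to this paper's notation.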
AKIW-22FWrKkR
End-to-End Text Recognition with Hybrid HMM Maxout Models
[ "Ouais Alsharif", "Joelle Pineau" ]
The problem of detecting and recognizing text in natural scenes has proved to be more challenging than its counterpart in documents, with most of the previous work focusing on a single part of the problem. In this work, we propose new solutions to the character and word recognition problems and then show how to combine these solutions in an end-to-end text-recognition system. We do so by leveraging the recently introduced Maxout networks along with hybrid HMM models that have proven useful for voice recognition. Using these elements, we build a tunable and highly accurate recognition system that beats state-of-the-art results on all the sub-problems for both the ICDAR 2003 and SVT benchmark datasets.
[ "text recognition", "hybrid hmm", "models", "problem", "detecting", "text", "natural scenes", "challenging", "counterpart", "documents" ]
https://openreview.net/pdf?id=AKIW-22FWrKkR
https://openreview.net/forum?id=AKIW-22FWrKkR
9Y8Z9zerZ-9rV
review
1,391,808,600,000
AKIW-22FWrKkR
[ "everyone" ]
[ "Ouais Alsharif" ]
ICLR.cc/2014/workshop
2014
review: Thank you for the comments. :) Your comment regarding beam search is correct; this is (almost) a standard beam search. However, we wanted to make it clear that the search is on the cascade. Regarding the Q_i: you are correct, we do not define it clearly and, as you inferred, it is just a priority queue. s_i and v_i are the same thing; we will change this so that they are a single variable. The 55.6% figure is on the ICDAR dataset. Using a language model and a lexicon does not improve the results beyond using a language model alone, because the language model biases the results of the beam search heavily. When the minimal edit distance is reached for more than one word, we broke ties lexicographically. I think a small interesting area would be to investigate how other kinds of edit distance would help in correcting misspelled words, as this particular point seems to be able to lift test accuracy by 2-3 percentage points. Could you please clarify the last comment? I'm not sure how using a hash table would improve the accuracy in such a scenario. Thank you for these comments :) You've been most helpful.
AKIW-22FWrKkR
End-to-End Text Recognition with Hybrid HMM Maxout Models
[ "Ouais Alsharif", "Joelle Pineau" ]
The problem of detecting and recognizing text in natural scenes has proved to be more challenging than its counterpart in documents, with most of the previous work focusing on a single part of the problem. In this work, we propose new solutions to the character and word recognition problems and then show how to combine these solutions in an end-to-end text-recognition system. We do so by leveraging the recently introduced Maxout networks along with hybrid HMM models that have proven useful for voice recognition. Using these elements, we build a tunable and highly accurate recognition system that beats state-of-the-art results on all the sub-problems for both the ICDAR 2003 and SVT benchmark datasets.
[ "text recognition", "hybrid hmm", "models", "problem", "detecting", "text", "natural scenes", "challenging", "counterpart", "documents" ]
https://openreview.net/pdf?id=AKIW-22FWrKkR
https://openreview.net/forum?id=AKIW-22FWrKkR
mi6H4jocJNDxd
review
1,391,663,100,000
AKIW-22FWrKkR
[ "everyone" ]
[ "anonymous reviewer de8c" ]
ICLR.cc/2014/workshop
2014
title: review of End-to-End Text Recognition with Hybrid HMM Maxout Models review: This paper presents a system for text recognition from natural images that leverages recent advances in deep learning. Contrary to previous methods that often focus on a single aspect, this work addresses all the simpler sub-problems and incorporates classifiers into the different sub-modules of the whole system. The resulting method achieves impressive results and seems computationally efficient. The paper is very well written, comprehensive, and does a good job of condensing a lot of information into 9 pages. A minor weakness concerns the novelty aspect: the paper mostly reuses existing algorithms, such as convolutional maxout networks, hybrid HMMs, beam search, and MSER. What the authors call 'Cascade Beam Search' is in fact ordinary beam search. However, the pipeline is novel and produces good results and insightful discussion, especially about the trade-offs involved. - Section 5.3 (especially Algorithm 1): I found the notation confusing. Can you define Q_i in words? Are those implemented with priority queues? What is the difference between intervals s_i and v_i? - Section 5.4: The 55.6% figure is obtained for which dataset? Why not use both a lexicon and a language model? What happens when the minimal edit distance is reached for more than one word? - The authors' main motivation for a language model is to achieve 'constant time in lexicon size per query', while the edit distance technique is slower but apparently has better accuracy. Would it be possible to use hash tables to improve accuracy whenever the minimal edit distance is zero?
AKIW-22FWrKkR
End-to-End Text Recognition with Hybrid HMM Maxout Models
[ "Ouais Alsharif", "Joelle Pineau" ]
The problem of detecting and recognizing text in natural scenes has proved to be more challenging than its counterpart in documents, with most of the previous work focusing on a single part of the problem. In this work, we propose new solutions to the character and word recognition problems and then show how to combine these solutions in an end-to-end text-recognition system. We do so by leveraging the recently introduced Maxout networks along with hybrid HMM models that have proven useful for voice recognition. Using these elements, we build a tunable and highly accurate recognition system that beats state-of-the-art results on all the sub-problems for both the ICDAR 2003 and SVT benchmark datasets.
[ "text recognition", "hybrid hmm", "models", "problem", "detecting", "text", "natural scenes", "challenging", "counterpart", "documents" ]
https://openreview.net/pdf?id=AKIW-22FWrKkR
https://openreview.net/forum?id=AKIW-22FWrKkR
oR0WwODNrPSba
comment
1,393,982,100,000
9Y8Z9zerZ-9rV
[ "everyone" ]
[ "anonymous reviewer de8c" ]
ICLR.cc/2014/workshop
2014
reply: It would make more sense to me to select the most likely word that reaches the minimal edit distance, instead of breaking ties lexicographically. Concerning the last comment, you could use a combination of the two modes by first searching for the most likely word present in the lexicon, and reverting to the language model method in case none of the words were matched. This would allow you to stay within the constant time in lexicon size constraint because merely checking if a word is present in the lexicon is very fast using a hash table. Overall, those are minor points and will not necessarily improve accuracy, but I was curious as to why those strategies were not used.
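A minimal sketch of the combined strategy the reviewer describes: try an exact lexicon lookup first (a hash-table membership test, i.e. the zero-edit-distance case), and fall back to the language-model decoding otherwise. The `lm_decode` callable and the toy lexicon are placeholders for illustration, not code or components from the paper.

```python
def recognize_word(candidate, lexicon, lm_decode):
    """candidate: raw word hypothesis from the character/HMM cascade.
    lexicon: a set (hash table), so membership checks are O(1) on average.
    lm_decode: fallback decoder that rescores the hypothesis with a character
    language model (placeholder for the paper's language-model method)."""
    if candidate in lexicon:          # exact match: minimal edit distance is zero
        return candidate
    return lm_decode(candidate)       # otherwise revert to the language-model method

# usage with toy stand-ins
lexicon = {"street", "view", "text"}
print(recognize_word("street", lexicon, lm_decode=lambda w: w.lower()))
```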
Bg3GB1suG0qx6
Learning Information Spread in Content Networks
[ "Cédric Lagnier", "Ludovic Denoyer", "Sylvain Lamprier", "Simon Bourigault", "patrick gallinari" ]
We introduce a model for predicting the diffusion of content information on social media. While propagation is usually modeled on discrete graph structures, we introduce here a continuous diffusion model, where nodes in a diffusion cascade are projected onto a latent space with the property that their proximity in this space reflects the temporal diffusion process. We focus on the task of predicting contaminated users for an initial information source and provide preliminary results on different datasets.
[ "information spread", "content networks", "model", "diffusion", "content information", "social media", "propagation", "discrete graph structures", "continuous diffusion model", "nodes" ]
https://openreview.net/pdf?id=Bg3GB1suG0qx6
https://openreview.net/forum?id=Bg3GB1suG0qx6
bG6SG57HSkEHC
review
1,391,090,100,000
Bg3GB1suG0qx6
[ "everyone" ]
[ "anonymous reviewer 9e53" ]
ICLR.cc/2014/workshop
2014
title: review of Learning Information Spread in Content Networks review: Learning Information Spread. The manuscript considers the interesting question of diffusion and information spreading in content networks. The modeling is performed by a diffusion kernel with ranking and classification constraints; having both is an innovation. While the paper is generally clear, I miss details on how exactly this is done and how it is optimized. The optimization problem seems nontrivial, so it would be nice to know the computational effort. While the first experiments show encouraging results, it is unclear whether, in a simple toy problem, the proposed estimation algorithm would find the ground truth consistently. Furthermore, no comparison to other models that consider information spread is given, neither in speed/scaling nor in accuracy/severity of errors. Concluding, the paper is interesting but somewhat preliminary; it is a small increment over Bourigault et al. 2014 and lacks many details.
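At a sketch level, the "diffusion kernel with ranking and classification constraints" can be read as follows; the exact objective is in the paper, and the particular kernel and hinge losses below are illustrative assumptions, not the authors' formulation. Each user $u$ is embedded at a point $z_u$, a source $s$ scores users by proximity, and training asks that users contaminated earlier rank above later or uncontaminated ones while contaminated users also clear a classification threshold:

$$f_s(u) = \exp\!\big(-\lVert z_u - z_s\rVert^2\big), \qquad \mathcal{L} = \sum_{c}\Big[\sum_{u \prec_c v} \max\big(0,\, 1 - f_{s_c}(u) + f_{s_c}(v)\big) \;+\; \lambda \sum_{u} \max\big(0,\, 1 - y_{c,u}\,(f_{s_c}(u) - b)\big)\Big],$$

where $u \prec_c v$ means $u$ was contaminated before $v$ in cascade $c$ with source $s_c$, $y_{c,u} \in \{\pm 1\}$ marks contamination, and the embeddings $z$ and threshold $b$ are learned by SGD, per the authors' reply.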
Bg3GB1suG0qx6
Learning Information Spread in Content Networks
[ "Cédric Lagnier", "Ludovic Denoyer", "Sylvain Lamprier", "Simon Bourigault", "patrick gallinari" ]
We introduce a model for predicting the diffusion of content information on social media. While propagation is usually modeled on discrete graph structures, we introduce here a continuous diffusion model, where nodes in a diffusion cascade are projected onto a latent space with the property that their proximity in this space reflects the temporal diffusion process. We focus on the task of predicting contaminated users for an initial information source and provide preliminary results on different datasets.
[ "information spread", "content networks", "model", "diffusion", "content information", "social media", "propagation", "discrete graph structures", "continuous diffusion model", "nodes" ]
https://openreview.net/pdf?id=Bg3GB1suG0qx6
https://openreview.net/forum?id=Bg3GB1suG0qx6
F0CYlZ-5YhF54
review
1,392,193,800,000
Bg3GB1suG0qx6
[ "everyone" ]
[ "anonymous reviewer 5bc5" ]
ICLR.cc/2014/workshop
2014
title: review of Learning Information Spread in Content Networks review: This work proposes an extension of the content diffusion kernel model that adds a classification constraint. My main concern with the current version of this work is whether the classification constraint is appropriate for this task (modeling the spread of information in social networks): information spread may depend not only on the proximity of users but also on the content of the information. In other words, a specific user might not be contaminated by the source in one cascade but might be in other cascades. If that is the case, the classification constraints will not be satisfied and the learning cannot converge. This classification constraint could also undermine the ranking performance, as is indeed shown in Table 1. Pros -- well written and organized Cons -- the proposed classification constraint might not be suitable for modeling the spread of information in social networks.
Bg3GB1suG0qx6
Learning Information Spread in Content Networks
[ "Cédric Lagnier", "Ludovic Denoyer", "Sylvain Lamprier", "Simon Bourigault", "patrick gallinari" ]
We introduce a model for predicting the diffusion of content information on social media. While propagation is usually modeled on discrete graph structures, we introduce here a continuous diffusion model, where nodes in a diffusion cascade are projected onto a latent space with the property that their proximity in this space reflects the temporal diffusion process. We focus on the task of predicting contaminated users for an initial information source and provide preliminary results on different datasets.
[ "information spread", "content networks", "model", "diffusion", "content information", "social media", "propagation", "discrete graph structures", "continuous diffusion model", "nodes" ]
https://openreview.net/pdf?id=Bg3GB1suG0qx6
https://openreview.net/forum?id=Bg3GB1suG0qx6
PkdaYJEeb49CF
review
1,392,249,960,000
Bg3GB1suG0qx6
[ "everyone" ]
[ "Ludovic Denoyer" ]
ICLR.cc/2014/workshop
2014
review: Dear reviewer, Thank you for the time spent on this review. Ignoring the content when modeling information propagation is a classical assumption in the literature. Moreover, some datasets do not contain content information, or this information is so noisy that it cannot be used for prediction. But you are right, content is important. Actually, this is a short paper proposing an extension of a previously published model which is able to take content information into account. Due to the limited size of short papers, we decided to focus on the 'without content' version of our previous model, which obtains reasonably good performance in comparison to the 'with content' version. But using content information in the proposed approach is not so complicated. Following your remark, we will add a small paragraph to the current paper explaining how content information can be taken into account. Thank you
Bg3GB1suG0qx6
Learning Information Spread in Content Networks
[ "Cédric Lagnier", "Ludovic Denoyer", "Sylvain Lamprier", "Simon Bourigault", "patrick gallinari" ]
We introduce a model for predicting the diffusion of content information on social media. While propagation is usually modeled on discrete graph structures, we introduce here a continuous diffusion model, where nodes in a diffusion cascade are projected onto a latent space with the property that their proximity in this space reflects the temporal diffusion process. We focus on the task of predicting contaminated users for an initial information source and provide preliminary results on different datasets.
[ "information spread", "content networks", "model", "diffusion", "content information", "social media", "propagation", "discrete graph structures", "continuous diffusion model", "nodes" ]
https://openreview.net/pdf?id=Bg3GB1suG0qx6
https://openreview.net/forum?id=Bg3GB1suG0qx6
FQHhO9DehTO2w
comment
1,391,526,360,000
bG6SG57HSkEHC
[ "everyone" ]
[ "Ludovic Denoyer" ]
ICLR.cc/2014/workshop
2014
reply: Dear reviewer, Thank you for your comments. The lack of details was mainly due to the official paper size limitation (3 pages), but submitting longer papers seems to be possible. We have posted a new version of the paper on arXiv that contains: * the pseudo-code of the SGD algorithm * a paragraph concerning the learning and inference complexity of the model, showing its efficiency w.r.t. existing discrete approaches * a comparison with state-of-the-art baselines in the experimental section Ludovic
wJuDwQ-d3XJMj
Unsupervised feature learning by augmenting single images
[ "Alexey Dosovitskiy", "Jost Tobias Springenberg", "Thomas Brox" ]
When deep learning is applied to visual object recognition, data augmentation is often used to generate additional training data without extra labeling cost. It helps to reduce overfitting and increase the performance of the algorithm. In this paper we investigate if it is possible to use data augmentation as the main component of an unsupervised feature learning architecture. To that end we sample a set of random image patches and declare each of them to be a separate single-image surrogate class. We then extend these trivial one-element classes by applying a variety of transformations to the initial 'seed' patches. Finally we train a convolutional neural network to discriminate between these surrogate classes. The feature representation learned by the network can then be used in various vision tasks. We find that this simple feature learning algorithm is surprisingly successful, achieving competitive classification results on several popular vision datasets (STL-10, CIFAR-10, Caltech-101).
[ "data augmentation", "algorithm", "unsupervised feature learning", "single images", "deep learning", "visual object recognition", "additional training data", "extra labeling cost", "performance" ]
https://openreview.net/pdf?id=wJuDwQ-d3XJMj
https://openreview.net/forum?id=wJuDwQ-d3XJMj
fiPAUv7VOTUR5
review
1,392,727,800,000
wJuDwQ-d3XJMj
[ "everyone" ]
[ "Alexey Dosovitskiy" ]
ICLR.cc/2014/workshop
2014
review: An updated version of the paper is now available on arXiv. Main changes are: - extended related work, including brief discussion of connection to metric learning - an experiment on classification with random filters: plot in fig. 4, description in the beginning of section 3.2
wJuDwQ-d3XJMj
Unsupervised feature learning by augmenting single images
[ "Alexey Dosovitskiy", "Jost Tobias Springenberg", "Thomas Brox" ]
When deep learning is applied to visual object recognition, data augmentation is often used to generate additional training data without extra labeling cost. It helps to reduce overfitting and increase the performance of the algorithm. In this paper we investigate if it is possible to use data augmentation as the main component of an unsupervised feature learning architecture. To that end we sample a set of random image patches and declare each of them to be a separate single-image surrogate class. We then extend these trivial one-element classes by applying a variety of transformations to the initial 'seed' patches. Finally we train a convolutional neural network to discriminate between these surrogate classes. The feature representation learned by the network can then be used in various vision tasks. We find that this simple feature learning algorithm is surprisingly successful, achieving competitive classification results on several popular vision datasets (STL-10, CIFAR-10, Caltech-101).
[ "data augmentation", "algorithm", "unsupervised feature learning", "single images", "deep learning", "visual object recognition", "additional training data", "extra labeling cost", "performance" ]
https://openreview.net/pdf?id=wJuDwQ-d3XJMj
https://openreview.net/forum?id=wJuDwQ-d3XJMj
TN0nh4UfnshfF
review
1,392,237,300,000
wJuDwQ-d3XJMj
[ "everyone" ]
[ "Alexey Dosovitskiy" ]
ICLR.cc/2014/workshop
2014
review: We thank the reviewers for the positive feedback and useful comments. It is certainly true that the paper could include more experiments and comparisons (as both reviewers point out), which is why we submitted it as a short workshop paper. We will include more experiments in follow-up versions of the paper. Reviewer 1 (Anonymous 672b) points out that we do not discuss complexity issues and details of the algorithm such as the transformations used and the effect of dropout. The complexity is the same as for training convolutional neural networks: in our experiments training usually takes 0.5 to 3 days, depending on the size of the network. We discuss details of the applied transformations in section 2.1. Using dropout is nowadays a standard practice for training deep convolutional neural networks and studies have been published by others, hence we do not analyze its effect in detail (however, preliminary experiments show that the benefits are quite large). Reviewer 2 (Anonymous 536d) points out that we do not discuss the connection to metric learning approaches and that our approach is very similar to those. We thank the reviewer for pointing out this connection and will include a corresponding remark in the paper. However, we do not agree that our approach is very similar to the one proposed in [1]. First of all, their algorithm uses label information, while ours does not. Secondly, even if we applied the algorithm from [1] to our surrogate clusters, our discriminative objective is different from the 'spring system' objective in [1]. Our objective yields features which perform well in classification without the need to specify parameters such as functions that represent 'attractive' and 'repulsive' forces, as in [1]. Finally, the quality of the features learned by our algorithm is demonstrated by good classification results. On the other hand, the paper [1] does not show any. We will upload a newer version of the paper, modified according to some of the remarks, later this week. Best regards, Alexey Dosovitskiy, Jost Tobias Springenberg, Thomas Brox ----------- References: [1] Raia Hadsell, Sumit Chopra and Yann LeCun: Dimensionality Reduction by Learning an Invariant Mapping, Proc. Computer Vision and Pattern Recognition Conference (CVPR'06), IEEE Press, 2006
wJuDwQ-d3XJMj
Unsupervised feature learning by augmenting single images
[ "Alexey Dosovitskiy", "Jost Tobias Springenberg", "Thomas Brox" ]
When deep learning is applied to visual object recognition, data augmentation is often used to generate additional training data without extra labeling cost. It helps to reduce overfitting and increase the performance of the algorithm. In this paper we investigate if it is possible to use data augmentation as the main component of an unsupervised feature learning architecture. To that end we sample a set of random image patches and declare each of them to be a separate single-image surrogate class. We then extend these trivial one-element classes by applying a variety of transformations to the initial 'seed' patches. Finally we train a convolutional neural network to discriminate between these surrogate classes. The feature representation learned by the network can then be used in various vision tasks. We find that this simple feature learning algorithm is surprisingly successful, achieving competitive classification results on several popular vision datasets (STL-10, CIFAR-10, Caltech-101).
[ "data augmentation", "algorithm", "unsupervised feature learning", "single images", "deep learning", "visual object recognition", "additional training data", "extra labeling cost", "performance" ]
https://openreview.net/pdf?id=wJuDwQ-d3XJMj
https://openreview.net/forum?id=wJuDwQ-d3XJMj
JJNs1ddmaWfyT
review
1,391,729,340,000
wJuDwQ-d3XJMj
[ "everyone" ]
[ "anonymous reviewer 536d" ]
ICLR.cc/2014/workshop
2014
title: review of Unsupervised feature learning by augmenting single images review: This paper proposes to reduce the unsupervised feature learning problem to a classification problem by: a) sampling patches at random from (unlabeled) images (on the order of several thousand) and b) creating surrogate classification tasks by considering each patch as a class and generating several other samples by applying transformations (e.g., translation, rotation, scaling, etc.). The features trained in this manner are used as patch descriptors to classify images in the Caltech 101, CIFAR and STL-10 datasets. The method compares well with other feature learning methods. The paper reads well and has a clear narrative. The reduction from unsupervised to supervised learning is presented in an intriguing way. On the other hand, this method seems closely related to work in metric learning and it would be nice to have an explicit discussion about this. Pros + clearly written + simple idea + empirical analysis demonstrates good results Cons - some baseline experiments are missing, namely - compare to random filters (i.e., what’s the role played by the architecture used) - it would be nice to see a comparison of the accuracy after fine-tuning the whole system - prior work in metric learning (neighborhood component analysis, DrLIM style) is not mentioned. One can cast a similar learning problem: making the features of patches belonging to the same “class” similar, and making the features of patches belonging to different “classes” as far apart as possible. I believe that using a ranking loss on such triplets would yield similar results. Under this view, the paper would become very similar to: Raia Hadsell, Sumit Chopra and Yann LeCun: Dimensionality Reduction by Learning an Invariant Mapping, Proc. Computer Vision and Pattern Recognition Conference (CVPR'06), IEEE Press, 2006, except that the generation of similar and different patches is produced by transformations known in advance. One advantage of these metric learning approaches is that they naturally scale to an “infinite” number of “classes”. Minor details: - the schedule on the number of classes seems rather hacky - the overfitting hypothesis in sec. 3.2 could easily be tested.
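To illustrate the alternative formulation the reviewer sketches (a DrLIM-style contrastive objective over pairs generated by the same transformations), here is a minimal loss in PyTorch. The embedding network `f`, the margin, and the pair construction are illustrative assumptions, not components of either paper.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(f, x1, x2, same_class, margin=1.0):
    """DrLIM-style objective: pull embeddings of patches from the same surrogate
    class together, push different-class embeddings at least `margin` apart.
    f: embedding network; x1, x2: batches of (transformed) patches;
    same_class: 1.0 where the pair comes from the same seed patch, else 0.0."""
    d = F.pairwise_distance(f(x1), f(x2))                 # Euclidean distance in feature space
    pull = same_class * d.pow(2)                          # attractive term for positive pairs
    push = (1 - same_class) * F.relu(margin - d).pow(2)   # repulsive term for negatives
    return (pull + push).mean()
```

In use, x1 and x2 could be two random transformations of seed patches, with `same_class` derived from the surrogate labels, which is how the "known in advance" transformations would supply the similar/dissimilar pairs.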
wJuDwQ-d3XJMj
Unsupervised feature learning by augmenting single images
[ "Alexey Dosovitskiy", "Jost Tobias Springenberg", "Thomas Brox" ]
When deep learning is applied to visual object recognition, data augmentation is often used to generate additional training data without extra labeling cost. It helps to reduce overfitting and increase the performance of the algorithm. In this paper we investigate if it is possible to use data augmentation as the main component of an unsupervised feature learning architecture. To that end we sample a set of random image patches and declare each of them to be a separate single-image surrogate class. We then extend these trivial one-element classes by applying a variety of transformations to the initial 'seed' patches. Finally we train a convolutional neural network to discriminate between these surrogate classes. The feature representation learned by the network can then be used in various vision tasks. We find that this simple feature learning algorithm is surprisingly successful, achieving competitive classification results on several popular vision datasets (STL-10, CIFAR-10, Caltech-101).
[ "data augmentation", "algorithm", "unsupervised feature learning", "single images", "deep learning", "visual object recognition", "additional training data", "extra labeling cost", "performance" ]
https://openreview.net/pdf?id=wJuDwQ-d3XJMj
https://openreview.net/forum?id=wJuDwQ-d3XJMj
stMdtDjJlgWU8
review
1,391,695,860,000
wJuDwQ-d3XJMj
[ "everyone" ]
[ "anonymous reviewer 672b" ]
ICLR.cc/2014/workshop
2014
title: review of Unsupervised feature learning by augmenting single images review: The paper presents an approach for learning the filters of a convolutional NN for an image classification task without making use of target labels. The algorithm proceeds in two steps: learning a transformation of the original image and then learning a classifier using this new representation. For the first step, patches are sampled from an image collection; each patch then corresponds to a surrogate class, and a classifier is trained to associate transformed versions of the patches with the corresponding class labels using a convolutional net. In a second step, this net is replicated on whole images, leading to a transformed representation of the original image. A linear classifier is then trained using this representation as input and the target labels relative to the image collection. Experiments are performed on different image collections and a comparison with several baselines is provided. This paper introduces a simple idea for feature learning which seems to work relatively well. The paper could easily be improved or extended in several ways. A natural extension would be to tune the learned filters using the target labels, which would allow a comparison with state-of-the-art supervised techniques. This method might be less expensive to train than some of the alternatives, but the complexity issues are not discussed at all. The choices made for the convolutional net produce very dense codes. This could be discussed and a comparison with alternatives, e.g. larger filter sizes, could be provided. Also, there could be more practical details, such as which combinations of transformations are used for the patches, what improvement is provided by dropout, etc.
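As a concrete picture of the surrogate-class construction summarized in this review, the sketch below builds a toy dataset: random seed patches, each assigned its own label, expanded by random transformations. The particular transformations, ranges, and sizes are placeholders; the paper's exact augmentation parameters are given in its Section 2.1.

```python
import numpy as np
from scipy.ndimage import rotate, shift, zoom

def random_transform(patch, rng):
    """Apply a random rotation, translation and scaling to one seed patch."""
    out = rotate(patch, angle=rng.uniform(-20, 20), reshape=False, mode='nearest')
    out = shift(out, shift=rng.uniform(-4, 4, size=2), mode='nearest')
    out = zoom(out, rng.uniform(0.8, 1.2), mode='nearest')
    h, w = patch.shape
    # crop or pad back to the original size so all samples share one shape
    if out.shape[0] >= h:
        out = out[:h, :w]
    else:
        out = np.pad(out, ((0, h - out.shape[0]), (0, w - out.shape[1])), mode='edge')
    return out

def make_surrogate_dataset(images, n_classes=2000, n_per_class=100, size=32, seed=0):
    """images: list of 2-D grayscale arrays larger than `size`.
    Each randomly sampled patch becomes its own class; transformed copies are its examples."""
    rng = np.random.default_rng(seed)
    X, y = [], []
    for c in range(n_classes):
        img = images[rng.integers(len(images))]
        i = rng.integers(0, img.shape[0] - size)
        j = rng.integers(0, img.shape[1] - size)
        seed_patch = img[i:i + size, j:j + size]
        for _ in range(n_per_class):
            X.append(random_transform(seed_patch, rng))
            y.append(c)                       # the surrogate label is just the seed index
    return np.stack(X), np.array(y)           # a convnet is then trained to discriminate the classes
```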
cO4ycnpqxKcS9
Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps
[ "Karen Simonyan", "Andrea Vedaldi", "Andrew Zisserman" ]
This paper addresses the visualisation of image classification models, learnt using deep Convolutional Networks (ConvNets). We consider two visualisation techniques, based on computing the gradient of the class score with respect to the input image. The first one generates an image, which maximises the class score [Erhan et al., 2009], thus visualising the notion of the class, captured by a ConvNet. The second technique computes a class saliency map, specific to a given image and class. We show that such maps can be employed for weakly supervised object segmentation using classification ConvNets. Finally, we establish the connection between the gradient-based ConvNet visualisation methods and deconvolutional networks [Zeiler et al., 2013].
[ "image classification models", "convolutional networks", "class score", "image", "class", "deep", "saliency maps deep", "saliency maps", "visualisation", "learnt" ]
https://openreview.net/pdf?id=cO4ycnpqxKcS9
https://openreview.net/forum?id=cO4ycnpqxKcS9
jrhLjNJlLRr45
review
1,392,699,660,000
cO4ycnpqxKcS9
[ "everyone" ]
[ "Karen Simonyan" ]
ICLR.cc/2014/workshop
2014
review: We thank the reviewers for their positive feedback. R1: Not impressed by the per-class canonical image generation ... given the work of Erhan et al. We agree that our per class canonical image visualisation is based on that of Erhan [5], and thus not the most original part of the paper. However, it is worth including for two reasons: 1) As noted by R2, we are the first to apply it to ImageNet classification ConvNets, and furthermore we visualise the activities in the final fully-connected layer (rather than the soft-max layer, which leads to less prominent visualisations). 2) We establish a close connection between gradient-based visualisation techniques (such as canonical image generation and class saliency maps) and Deconvolutional networks of Zeiler and Fergus. R1: I think the saliency map method is quite interesting. In particular, the fact that it can be leveraged to obtain a decent object localizer, which is only partially supervised, seems impressive. R2: I found the weakly supervised object localization application the most impressive part of the paper. We also feel that this is one of the most interesting aspects of our contribution. In particular, we were also impressed by the ability of the network to learn object segmentation in a *weakly supervised* setting (and produce object localisations, competitive with strongly supervised conventional object detectors). We believe that our image specific class saliency maps can be used in applications beyond GraphCut segmentation initialisation, and we plan to address them in future work.
cO4ycnpqxKcS9
Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps
[ "Karen Simonyan", "Andrea Vedaldi", "Andrew Zisserman" ]
This paper addresses the visualisation of image classification models, learnt using deep Convolutional Networks (ConvNets). We consider two visualisation techniques, based on computing the gradient of the class score with respect to the input image. The first one generates an image, which maximises the class score [Erhan et al., 2009], thus visualising the notion of the class, captured by a ConvNet. The second technique computes a class saliency map, specific to a given image and class. We show that such maps can be employed for weakly supervised object segmentation using classification ConvNets. Finally, we establish the connection between the gradient-based ConvNet visualisation methods and deconvolutional networks [Zeiler et al., 2013].
[ "image classification models", "convolutional networks", "class score", "image", "class", "deep", "saliency maps deep", "saliency maps", "visualisation", "learnt" ]
https://openreview.net/pdf?id=cO4ycnpqxKcS9
https://openreview.net/forum?id=cO4ycnpqxKcS9
jKcYnfEAY_jL-
review
1,391,887,080,000
cO4ycnpqxKcS9
[ "everyone" ]
[ "anonymous reviewer 9e94" ]
ICLR.cc/2014/workshop
2014
title: review of Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps review: Deep convolutional neural networks (convnets) have achieved tremendous success lately in large-scale visual recognition, and their popularity has exploded after winning recent high-profile competitions. As more research groups begin to experiment with convnets, there is increasing interest in what is happening *inside* the convnet. A few papers have been published within the last year providing ways of visualizing what units inside the convnet represent, and also visualizing the spatial support of a particular class. This paper presents two methods for visualization of convnets: one, based on an approach by Erhan, simply backpropagates the gradient of the class score with respect to the image pixels to generate class appearance models. The other method, class saliency maps, visualizes spatial support for a class and is also used to perform weakly supervised object localization. The authors are very clear about their contributions, mainly in producing understandable visualizations. In terms of novelty, the method by which one obtains the class appearance models has been used in the unsupervised learning context by Erhan et al., but this paper is the first to apply the technique to convnets. The method by which one obtains the class saliency maps is intuitive and produces reasonable visualizations. I found the weakly supervised object localization application the most impressive part of the paper. Although it does not perform nearly as well as methods that consider localization part of training, it's promising to see how localization can be learned without bounding boxes. Pros: * Clear, simple * Provides a useful, practical tool for convnet practitioners * The discussion re: Zeiler and Fergus' Deconvolutional net method clears up any misunderstanding between the two methods, which do seem pretty similar * Evaluated on large-scale data (ILSVRC-2013) Cons: * Though the similarity to the Deconvolutional net method is acknowledged, the technical contribution of this work is not a massive departure from the other work (though it is suitable for the workshop track) Overall, I think this is a good workshop paper. The methodology does not depart far from previous work, but the weakly supervised localization is interesting and will generate interest at the conference. Comments ======== Further insight/discussion on the implications of the change in treatment between the Deconvolutional net method and the proposed method is suggested.
cO4ycnpqxKcS9
Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps
[ "Karen Simonyan", "Andrea Vedaldi", "Andrew Zisserman" ]
This paper addresses the visualisation of image classification models, learnt using deep Convolutional Networks (ConvNets). We consider two visualisation techniques, based on computing the gradient of the class score with respect to the input image. The first one generates an image, which maximises the class score [Erhan et al., 2009], thus visualising the notion of the class, captured by a ConvNet. The second technique computes a class saliency map, specific to a given image and class. We show that such maps can be employed for weakly supervised object segmentation using classification ConvNets. Finally, we establish the connection between the gradient-based ConvNet visualisation methods and deconvolutional networks [Zeiler et al., 2013].
[ "image classification models", "convolutional networks", "class score", "image", "class", "deep", "saliency maps deep", "saliency maps", "visualisation", "learnt" ]
https://openreview.net/pdf?id=cO4ycnpqxKcS9
https://openreview.net/forum?id=cO4ycnpqxKcS9
XDxVTYb9VtT9N
review
1,390,788,960,000
cO4ycnpqxKcS9
[ "everyone" ]
[ "anonymous reviewer e565" ]
ICLR.cc/2014/workshop
2014
title: review of Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps review: This paper presents methods for visualizing the behaviour of an object recognition convolutional neural network. The first method generates a 'canonical image' for a given class that the network can recognize. The second generates a saliency map for a given input image and specified class, illustrating the parts of the image (pixels) that most influence the given class's output probability. This can be used to seed a graphcut segmentation and localize objects of that class in the input image. Finally, a connection between the saliency map method and the work of Zeiler and Fergus on using deconvolutions to visualize deep networks is established. While I'm not impressed by the per-class canonical image generation (which isn't very original anyway, given the work of Erhan et al.), I think the saliency map method is quite interesting. In particular, the fact that it can be leveraged to obtain a decent object localizer, which is only partially supervised, seems impressive. This is probably the most interesting part of the paper. As for the connection with deconvolution, I think it is also a nice observation. As for the cons of this paper, they are those you would expect from a workshop paper, i.e. the experimental work could be stronger. Specifically, I feel there is a lack of quantitative comparisons. I wonder whether other alternatives to the graphcut initialization could have served as baselines with which to compare quantitatively (but this isn't my expertise, so perhaps there aren't any...). The fact that one of their previous systems (which was fully supervised for localization) actually performs worse than this partially supervised system is certainly impressive, however!
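For readers unfamiliar with the technique discussed in the two reviews above, the core of the saliency-map method is a single backward pass: the gradient of the (unnormalised) class score with respect to the input pixels. A minimal sketch using a modern autograd framework is shown below; this illustrates the general idea only, is not the authors' original implementation, and assumes a classifier `model` and its preprocessing.

```python
import torch

def class_saliency(model, image, class_idx):
    # image: preprocessed tensor of shape (1, 3, H, W) expected by `model`
    x = image.clone().requires_grad_(True)
    score = model(x)[0, class_idx]        # class score for the chosen class
    score.backward()                      # d(score)/d(pixels) via one backward pass
    # collapse to a single-channel map: max of |gradient| over colour channels
    return x.grad.abs().max(dim=1).values.squeeze(0)
```

Thresholding such a map, or using its strongest responses as foreground seeds, is what enables the GraphCut-based weakly supervised localisation described in the reviews.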
0yNguO_G2aycf
Competitive Learning with Feedforward Supervisory Signal for Pre-trained Multilayered Networks
[ "Takashi Shinozaki", "Yasushi Naruse" ]
We propose a novel learning method for multilayered neural networks which uses a feedforward supervisory signal and associates the classification of a new input with that of a pre-trained input. The proposed method effectively uses the rich input information in the earlier layers and enables robust and simultaneous learning on multilayer neural networks.
[ "feedforward supervisory signal", "competitive learning", "multilayered networks competitive", "multilayered networks", "novel", "multilayered neural networks", "supervisory signal", "classification", "new input", "input" ]
https://openreview.net/pdf?id=0yNguO_G2aycf
https://openreview.net/forum?id=0yNguO_G2aycf
Zufc7LsNO-Z3U
review
1,391,915,280,000
0yNguO_G2aycf
[ "everyone" ]
[ "anonymous reviewer fd8d" ]
ICLR.cc/2014/workshop
2014
title: review of Competitive Learning with Feedforward Supervisory Signal for Pre-trained Multilayered Networks review: The paper proposes a modified learning rule for competitive learning in multilayer neural networks. The proposed learning algorithm is not clearly explained, in particular the meaning of the variables x_adv and x_target. The variable x_adv is defined to be the 'advance input', but I do not understand what the advance input exactly is. Apart from these clarity issues, the method seems relatively elaborate, with a pre-training stage and an architecture consisting of a hierarchy of self-organizing maps. In the conclusion, the authors claim that the method improves the accuracy of the classification task. However, it is unclear what improvement is meant, as the reported accuracy of approximately 90% on MNIST is lower than previously published results. No further details are given on the exact setting of the experiment.
0yNguO_G2aycf
Competitive Learning with Feedforward Supervisory Signal for Pre-trained Multilayered Networks
[ "Takashi Shinozaki", "Yasushi Naruse" ]
We propose a novel learning method for multilayered neural networks which uses a feedforward supervisory signal and associates the classification of a new input with that of a pre-trained input. The proposed method effectively uses the rich input information in the earlier layers and enables robust and simultaneous learning on multilayer neural networks.
[ "feedforward supervisory signal", "competitive learning", "multilayered networks competitive", "multilayered networks", "novel", "multilayered neural networks", "supervisory signal", "classification", "new input", "input" ]
https://openreview.net/pdf?id=0yNguO_G2aycf
https://openreview.net/forum?id=0yNguO_G2aycf
VN79AqkrcIprJ
comment
1,392,677,700,000
Q-z4DUn8LpDam
[ "everyone" ]
[ "Shinozaki Takashi" ]
ICLR.cc/2014/workshop
2014
reply: Dear reviewer, Thank you for your comments. We really apologize for the insufficient description of the learning method. The proposed learning method does not use the label data directly; instead, it uses an input which generates the required label as the supervisory signal. So, both x_target (the input signal) and x_adv (the supervisory signal) are input vectors (for example, 28x28 grayscale image data in the first layer). The learning rule described by Eqs. 1 & 2 is mainly based on the SOM & LVQ algorithms, extended with the advance supervisory signal x_adv. If x_adv is considered as the input which represents the idealized answer, (x_adv - x_target) represents the correction of the weight update direction. Thus, the overall weight update direction with a learning coefficient eta is described as x_target + eta (x_adv - x_target) = eta x_adv + (1 - eta) x_target. This term appears at the center of Eq. 2 (Eq. 4 in the revised version) and corresponds to the gradient of the weight in the backpropagation learning rule: the proposed learning rule extracts gradient-like learning information from the feedforward signal. We extensively rewrote the learning method section in the revised version of the manuscript. The motivation of the proposed learning method is to develop a new supervised learning method which relies on a more feedforward-oriented mechanism. A feedforward network condenses the input information through the feedforward process, leaving less information in later layers; however, the backpropagation learning algorithm uses the error information in the last layer for the learning of the whole network. The proposed learning method instead uses the rich input information in the early layers as the supervisory signal at each layer. We speculate that the proposed method could be extended to multiple advance inputs (for example, using 'red' and 'round shape' to learn 'apple'). We added a description of this motivation in the 'Conclusion' section. We also added a comparison with some previous reports in the revised version for clarity. Unfortunately, the proposed method is still at a primitive level and does not yet perform well enough to compare with many previous reports with strong results. We are currently trying to improve the performance of the proposed learning method. We added a result with more training set iterations in Fig. 1(c), with a 6.9% error rate after 20 training set iterations. Since this is a rough result, we will update it later. Moreover, we reordered the structure of the manuscript as you suggested: the 'Network structure' and 'Pre-training' sections are now located just after the introduction, following the 'Learning method' section. We have uploaded the revised version of the manuscript on arXiv.
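To make the update rule sketched in the reply above easier to follow, the algebraic identity and the resulting SOM/LVQ-style weight update can be written out explicitly. This is a reconstruction from the prose only; the exact notation of Eq. 4 in the revised manuscript may differ, and the learning rate alpha below is an assumed extra symbol.

```latex
% x_target : current input, x_adv : advance (supervisory) input, eta : mixing coefficient
\[
  x_{\mathrm{target}} + \eta\,(x_{\mathrm{adv}} - x_{\mathrm{target}})
  \;=\; \eta\,x_{\mathrm{adv}} + (1-\eta)\,x_{\mathrm{target}}
\]
% A competitive-learning update would then move the winning weight vector w
% toward this mixed target with learning rate alpha:
\[
  w \;\leftarrow\; w + \alpha\,\bigl(\eta\,x_{\mathrm{adv}} + (1-\eta)\,x_{\mathrm{target}} - w\bigr)
\]
```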
0yNguO_G2aycf
Competitive Learning with Feedforward Supervisory Signal for Pre-trained Multilayered Networks
[ "Takashi Shinozaki", "Yasushi Naruse" ]
We propose a novel learning method for multilayered neural networks which uses a feedforward supervisory signal and associates the classification of a new input with that of a pre-trained input. The proposed method effectively uses the rich input information in the earlier layers and enables robust and simultaneous learning on multilayer neural networks.
[ "feedforward supervisory signal", "competitive learning", "multilayered networks competitive", "multilayered networks", "novel", "multilayered neural networks", "supervisory signal", "classification", "new input", "input" ]
https://openreview.net/pdf?id=0yNguO_G2aycf
https://openreview.net/forum?id=0yNguO_G2aycf
lxPke1W3RDxQ6
review
1,392,677,820,000
0yNguO_G2aycf
[ "everyone" ]
[ "Shinozaki Takashi" ]
ICLR.cc/2014/workshop
2014
review: Dear reviewer, Thank you for your comments. The “advance input” x_adv is the feedforward supervisory input, and is processed in the same way as, but just before, the target input. The advance input produces the required label output and leaves the processed values as an aftereffect in the network. The “target input” x_target is then processed with the decayed aftereffect. The key part of Eq. 2 (Eq. 4 in the revised version) is (eta x_adv + (1 - eta) x_target), which applies the traditional competitive learning algorithm to the weighted sum of the two inputs (x_adv and x_target) with a proportion coefficient eta. Therefore, the proposed method does not use the label directly, but uses a typical input for that label as the supervisory signal. We extensively rewrote the learning method section. As you mentioned, our results show no clear improvement over many previous reports. We used the word “improvement” for the reduction of the error rate from the pre-training result. We removed the misleading sentence and rewrote the first paragraph of the 'Conclusion' section. Moreover, we added a slightly improved result (up to a 6.9% error rate) for the proposed method, although this is based on just one sample. We have also obtained a better result (3.8% error rate) with different parameters of the network structure, and will update those results later. We have uploaded a revised version of the manuscript on arXiv.
0yNguO_G2aycf
Competitive Learning with Feedforward Supervisory Signal for Pre-trained Multilayered Networks
[ "Takashi Shinozaki", "Yasushi Naruse" ]
We propose a novel learning method for multilayered neural networks which uses a feedforward supervisory signal and associates the classification of a new input with that of a pre-trained input. The proposed method effectively uses the rich input information in the earlier layers and enables robust and simultaneous learning on multilayer neural networks.
[ "feedforward supervisory signal", "competitive learning", "multilayered networks competitive", "multilayered networks", "novel", "multilayered neural networks", "supervisory signal", "classification", "new input", "input" ]
https://openreview.net/pdf?id=0yNguO_G2aycf
https://openreview.net/forum?id=0yNguO_G2aycf
mmC3yAGS3zZrd
review
1,392,969,780,000
0yNguO_G2aycf
[ "everyone" ]
[ "Shinozaki Takashi" ]
ICLR.cc/2014/workshop
2014
review: We have uploaded a revised version (unfortunately, the replacement requires a little more time to appear). The network parameters were changed, and the error rate is now slightly improved over the previous version. We really apologize for the delayed update.
0yNguO_G2aycf
Competitive Learning with Feedforward Supervisory Signal for Pre-trained Multilayered Networks
[ "Takashi Shinozaki", "Yasushi Naruse" ]
We propose a novel learning method for multilayered neural networks which uses a feedforward supervisory signal and associates the classification of a new input with that of a pre-trained input. The proposed method effectively uses the rich input information in the earlier layers and enables robust and simultaneous learning on multilayer neural networks.
[ "feedforward supervisory signal", "competitive learning", "multilayered networks competitive", "multilayered networks", "novel", "multilayered neural networks", "supervisory signal", "classification", "new input", "input" ]
https://openreview.net/pdf?id=0yNguO_G2aycf
https://openreview.net/forum?id=0yNguO_G2aycf
Q-z4DUn8LpDam
review
1,391,842,740,000
0yNguO_G2aycf
[ "everyone" ]
[ "anonymous reviewer 9d3c" ]
ICLR.cc/2014/workshop
2014
title: review of Competitive Learning with Feedforward Supervisory Signal for Pre-trained Multilayered Networks review: This paper proposes a new learning algorithm for feedforward neural networks that is motivated by Self-Organizing Maps (SOM) and Learning Vector Quantization (LVQ). The paper proposes a way of unsupervised pre-training of the network weights followed by supervised fine-tuning. The paper lacks a discussion of the motivation behind the work, i.e. the shortcomings of existing training methods and how the proposed method overcomes them. The description of the proposed method still needs a lot of work; more details are needed to clarify the proposed system. For example, in equation (1), d and sigma are not defined. For x_adv and x_target, which one is the input and which one is the label? What are the motivations for the update rules in equations (1) and (2)? Pre-training and SOM are described for the first time in the experiments section, while they should appear in earlier sections of the paper. The authors only experiment with MNIST and don't compare their system to other good baseline systems on MNIST.
kSHxSr1TPt8XB
GPU Asynchronous Stochastic Gradient Descent to Speed Up Neural Network Training
[ "Hailin Jin", "Thomas Huang", "Zhe Lin", "Jianchao Yang", "Thomas Paine" ]
The ability to train large-scale neural networks has resulted in state-of-the-art performance in many areas of computer vision. These results have largely come from computational breakthroughs of two forms: model parallelism, e.g. GPU-accelerated training, which has seen quick adoption in computer vision circles, and data parallelism, e.g. A-SGD, which has been used at large scale mostly in industry. We report early experiments with a system that makes use of both model parallelism and data parallelism, which we call GPU A-SGD. We show that using GPU A-SGD it is possible to speed up training of large convolutional neural networks useful for computer vision. We believe GPU A-SGD will make it possible to train larger networks on larger training sets in a reasonable amount of time.
[ "gpu", "neural network", "computer vision", "training", "data parallelism", "possible", "ability", "neural networks" ]
https://openreview.net/pdf?id=kSHxSr1TPt8XB
https://openreview.net/forum?id=kSHxSr1TPt8XB
JJbRoZKlW6fQt
review
1,390,037,040,000
kSHxSr1TPt8XB
[ "everyone" ]
[ "Daniel Povey" ]
ICLR.cc/2014/workshop
2014
review: It would be helpful if you clarified the meaning of the x-axis 'minibatches' in your plots. It's not clear whether, in experiments with N GPUs, you are processing N times as many data points per minibatch. In earlier graphs, I assumed not, but in later graphs it looked like the other way around.
kSHxSr1TPt8XB
GPU Asynchronous Stochastic Gradient Descent to Speed Up Neural Network Training
[ "Hailin Jin", "Thomas Huang", "Zhe Lin", "Jianchao Yang", "Thomas Paine" ]
The ability to train large-scale neural networks has resulted in state-of-the-art performance in many areas of computer vision. These results have largely come from computational breakthroughs of two forms: model parallelism, e.g. GPU-accelerated training, which has seen quick adoption in computer vision circles, and data parallelism, e.g. A-SGD, which has been used at large scale mostly in industry. We report early experiments with a system that makes use of both model parallelism and data parallelism, which we call GPU A-SGD. We show that using GPU A-SGD it is possible to speed up training of large convolutional neural networks useful for computer vision. We believe GPU A-SGD will make it possible to train larger networks on larger training sets in a reasonable amount of time.
[ "gpu", "neural network", "computer vision", "training", "data parallelism", "possible", "ability", "neural networks" ]
https://openreview.net/pdf?id=kSHxSr1TPt8XB
https://openreview.net/forum?id=kSHxSr1TPt8XB
m4KEFoJGxcmDU
review
1,390,867,200,000
kSHxSr1TPt8XB
[ "everyone" ]
[ "Marc'Aurelio Ranzato" ]
ICLR.cc/2014/workshop
2014
review: In general, I think it would make more sense to report test and training errors (y-axis) versus time (x-axis). This is what we are interested in when we try to speed up convergence, not how many weight updates or samples we process. Since all your experiments use the same kind of GPU, the comparison is fair. Questions: a) have you tried to synchronize even more frequently (n_sync=1/10/50)? b) is every node on a different server? If not, do you leverage the fact that communication can be less costly when boards are on the same server? Thank you.
kSHxSr1TPt8XB
GPU Asynchronous Stochastic Gradient Descent to Speed Up Neural Network Training
[ "Hailin Jin", "Thomas Huang", "Zhe Lin", "Jianchao Yang", "Thomas Paine" ]
The ability to train large-scale neural networks has resulted in state-of-the-art performance in many areas of computer vision. These results have largely come from computational breakthroughs of two forms: model parallelism, e.g. GPU-accelerated training, which has seen quick adoption in computer vision circles, and data parallelism, e.g. A-SGD, which has been used at large scale mostly in industry. We report early experiments with a system that makes use of both model parallelism and data parallelism, which we call GPU A-SGD. We show that using GPU A-SGD it is possible to speed up training of large convolutional neural networks useful for computer vision. We believe GPU A-SGD will make it possible to train larger networks on larger training sets in a reasonable amount of time.
[ "gpu", "neural network", "computer vision", "training", "data parallelism", "possible", "ability", "neural networks" ]
https://openreview.net/pdf?id=kSHxSr1TPt8XB
https://openreview.net/forum?id=kSHxSr1TPt8XB
vQUSuhGp0Mvja
review
1,390,283,940,000
kSHxSr1TPt8XB
[ "everyone" ]
[ "Thomas Paine" ]
ICLR.cc/2014/workshop
2014
review: Hello Daniel, Yes. We did not state this explicitly, but in our plot we are plotting the training error for one client in our A-SGD system, and on average the overall A-SGD system sees N times as many data points per minibatch. We plotted our error vs. minibatches instead of time because time is very dependent on the GPU used to perform training, e.g. using Titan cards instead of Tesla K20X cards can significantly shorten training time.
kSHxSr1TPt8XB
GPU Asynchronous Stochastic Gradient Descent to Speed Up Neural Network Training
[ "Hailin Jin", "Thomas Huang", "Zhe Lin", "Jianchao Yang", "Thomas Paine" ]
The ability to train large-scale neural networks has resulted in state-of-the-art performance in many areas of computer vision. These results have largely come from computational breakthroughs of two forms: model parallelism, e.g. GPU-accelerated training, which has seen quick adoption in computer vision circles, and data parallelism, e.g. A-SGD, which has been used at large scale mostly in industry. We report early experiments with a system that makes use of both model parallelism and data parallelism, which we call GPU A-SGD. We show that using GPU A-SGD it is possible to speed up training of large convolutional neural networks useful for computer vision. We believe GPU A-SGD will make it possible to train larger networks on larger training sets in a reasonable amount of time.
[ "gpu", "neural network", "computer vision", "training", "data parallelism", "possible", "ability", "neural networks" ]
https://openreview.net/pdf?id=kSHxSr1TPt8XB
https://openreview.net/forum?id=kSHxSr1TPt8XB
cBv9AZMeK_O3x
review
1,390,867,200,000
kSHxSr1TPt8XB
[ "everyone" ]
[ "Marc'Aurelio Ranzato" ]
ICLR.cc/2014/workshop
2014
review: In general, I think it would make more sense to report test and training errors (y-axis) versus time (x-axis). This is what we are interested in when we try to speed up convergence, not how many weight updates or samples we process. Since all your experiments use the same kind of GPU, the comparison is fair. Questions: a) have you tried to synchronize even more frequently (n_sync=1/10/50)? b) is every node on a different server? If not, do you leverage the fact that communication can be less costly when boards are on the same server? Thank you.
kSHxSr1TPt8XB
GPU Asynchronous Stochastic Gradient Descent to Speed Up Neural Network Training
[ "Hailin Jin", "Thomas Huang", "Zhe Lin", "Jianchao Yang", "Thomas Paine" ]
The ability to train large-scale neural networks has resulted in state-of-the-art performance in many areas of computer vision. These results have largely come from computational breakthroughs of two forms: model parallelism, e.g. GPU-accelerated training, which has seen quick adoption in computer vision circles, and data parallelism, e.g. A-SGD, which has been used at large scale mostly in industry. We report early experiments with a system that makes use of both model parallelism and data parallelism, which we call GPU A-SGD. We show that using GPU A-SGD it is possible to speed up training of large convolutional neural networks useful for computer vision. We believe GPU A-SGD will make it possible to train larger networks on larger training sets in a reasonable amount of time.
[ "gpu", "neural network", "computer vision", "training", "data parallelism", "possible", "ability", "neural networks" ]
https://openreview.net/pdf?id=kSHxSr1TPt8XB
https://openreview.net/forum?id=kSHxSr1TPt8XB
JxHIWtr0U5xjb
comment
1,391,228,280,000
m4KEFoJGxcmDU
[ "everyone" ]
[ "Thomas Paine" ]
ICLR.cc/2014/workshop
2014
reply: Hi Marc, Thanks for reading the paper. Yes, I think the plots you suggest make sense. We thought our minibatch measure was sensible, but we want our plots to be as clear and useful as possible so we will change them for the final version of the paper. a) At the time of publication we didn't try more frequent updates, but we have been trying those recently. b) On Bluewaters, every server has 1 GPU node. So we weren't able to leverage same board communication, but that would be great.
kSHxSr1TPt8XB
GPU Asynchronous Stochastic Gradient Descent to Speed Up Neural Network Training
[ "Hailin Jin", "Thomas Huang", "Zhe Lin", "Jianchao Yang", "Thomas Paine" ]
The ability to train large-scale neural networks has resulted in state-of-the-art performance in many areas of computer vision. These results have largely come from computational breakthroughs of two forms: model parallelism, e.g. GPU-accelerated training, which has seen quick adoption in computer vision circles, and data parallelism, e.g. A-SGD, which has been used at large scale mostly in industry. We report early experiments with a system that makes use of both model parallelism and data parallelism, which we call GPU A-SGD. We show that using GPU A-SGD it is possible to speed up training of large convolutional neural networks useful for computer vision. We believe GPU A-SGD will make it possible to train larger networks on larger training sets in a reasonable amount of time.
[ "gpu", "neural network", "computer vision", "training", "data parallelism", "possible", "ability", "neural networks" ]
https://openreview.net/pdf?id=kSHxSr1TPt8XB
https://openreview.net/forum?id=kSHxSr1TPt8XB
80DMXtygJV8Tm
comment
1,391,657,880,000
OOTVKwQ9I7HkZ
[ "everyone" ]
[ "Thomas Paine" ]
ICLR.cc/2014/workshop
2014
reply: Hi Liangliang, Thanks for reading. And yes, during submission the fields auto-populated in the wrong order. Prof. Huang is the last author; I am the first. Sorry for the confusion. All the figures plot the error on training minibatches of 128 images. The plots show the error for minibatches on one client; plots are comparable across clients. Due to time constraints we didn't change learning rates in these later experiments, but instead focused on initial training speed increases. We also measured performance on a validation set, and found that for these settings we see similar gains in validation set performance.
kSHxSr1TPt8XB
GPU Asynchronous Stochastic Gradient Descent to Speed Up Neural Network Training
[ "Hailin Jin", "Thomas Huang", "Zhe Lin", "Jianchao Yang", "Thomas Paine" ]
The ability to train large-scale neural networks has resulted in state-of-the-art performance in many areas of computer vision. These results have largely come from computational breakthroughs of two forms: model parallelism, e.g. GPU-accelerated training, which has seen quick adoption in computer vision circles, and data parallelism, e.g. A-SGD, which has been used at large scale mostly in industry. We report early experiments with a system that makes use of both model parallelism and data parallelism, which we call GPU A-SGD. We show that using GPU A-SGD it is possible to speed up training of large convolutional neural networks useful for computer vision. We believe GPU A-SGD will make it possible to train larger networks on larger training sets in a reasonable amount of time.
[ "gpu", "neural network", "computer vision", "training", "data parallelism", "possible", "ability", "neural networks" ]
https://openreview.net/pdf?id=kSHxSr1TPt8XB
https://openreview.net/forum?id=kSHxSr1TPt8XB
OOTVKwQ9I7HkZ
review
1,391,646,480,000
kSHxSr1TPt8XB
[ "everyone" ]
[ "Liangliang Cao" ]
ICLR.cc/2014/workshop
2014
review: Interesting work. And it is also amusing to see the author list on this page. There may be a typo, but from my understanding of the authors I believe the first author (Prof. Huang) did all the GPU programming and reported to the last author (Thomas Paine). One thing that confuses me is how you measured the training error in Figures 2-4. Are these numbers from the whole training set (1.2M) or a batch? Did you change the learning rate? Or measure on the validation set? Another confusion, which is totally my fault: at the beginning I thought A-SGD stood for Averaged SGD!
kSHxSr1TPt8XB
GPU Asynchronous Stochastic Gradient Descent to Speed Up Neural Network Training
[ "Hailin Jin", "Thomas Huang", "Zhe Lin", "Jianchao Yang", "Thomas Paine" ]
The ability to train large-scale neural networks has resulted in state-of-the-art performance in many areas of computer vision. These results have largely come from computational breakthroughs of two forms: model parallelism, e.g. GPU-accelerated training, which has seen quick adoption in computer vision circles, and data parallelism, e.g. A-SGD, which has been used at large scale mostly in industry. We report early experiments with a system that makes use of both model parallelism and data parallelism, which we call GPU A-SGD. We show that using GPU A-SGD it is possible to speed up training of large convolutional neural networks useful for computer vision. We believe GPU A-SGD will make it possible to train larger networks on larger training sets in a reasonable amount of time.
[ "gpu", "neural network", "computer vision", "training", "data parallelism", "possible", "ability", "neural networks" ]
https://openreview.net/pdf?id=kSHxSr1TPt8XB
https://openreview.net/forum?id=kSHxSr1TPt8XB
CC39DxlVuyfI0
review
1,392,065,340,000
kSHxSr1TPt8XB
[ "everyone" ]
[ "anonymous reviewer 6693" ]
ICLR.cc/2014/workshop
2014
title: review of GPU Asynchronous Stochastic Gradient Descent to Speed Up Neural Network Training review: Multi-computer GPU training of large networks is an important current topic for representation learning in industry-scale convnets. This paper describes ongoing efforts to combine model parallelism and data parallelism to reduce training time on the ILSVRC 2012 data set. Pro: - they achieve several-fold reductions in runtime over Krizhevsky's landmark implementation Con: - I'm not sure that there is significant novelty in their approach, relative to DistBelief and existing work on asynchronous SGD.
kSHxSr1TPt8XB
GPU Asynchronous Stochastic Gradient Descent to Speed Up Neural Network Training
[ "Hailin Jin", "Thomas Huang", "Zhe Lin", "Jianchao Yang", "Thomas Paine" ]
The ability to train large-scale neural networks has resulted in state-of-the-art performance in many areas of computer vision. These results have largely come from computational breakthroughs of two forms: model parallelism, e.g. GPU-accelerated training, which has seen quick adoption in computer vision circles, and data parallelism, e.g. A-SGD, which has been used at large scale mostly in industry. We report early experiments with a system that makes use of both model parallelism and data parallelism, which we call GPU A-SGD. We show that using GPU A-SGD it is possible to speed up training of large convolutional neural networks useful for computer vision. We believe GPU A-SGD will make it possible to train larger networks on larger training sets in a reasonable amount of time.
[ "gpu", "neural network", "computer vision", "training", "data parallelism", "possible", "ability", "neural networks" ]
https://openreview.net/pdf?id=kSHxSr1TPt8XB
https://openreview.net/forum?id=kSHxSr1TPt8XB
-xjY-GMQtwuQN
review
1,391,821,200,000
kSHxSr1TPt8XB
[ "everyone" ]
[ "anonymous reviewer 4f82" ]
ICLR.cc/2014/workshop
2014
title: review of GPU Asynchronous Stochastic Gradient Descent to Speed Up Neural Network Training review: Summary ------------ The paper explores running A-SGD as an approach for speeding up learning. Overall I think these are very interesting and informative results. Specifically, for a workshop paper I believe the paper contains enough novelty and empirical exploration. Comments: -------------- It would be interesting to try to quantify how much the size of the model influences these results. In particular, I'm wondering how the performance drops with the size of the gradients that need to be sent over the network. Another interesting plot would be to look at the size of the minibatch and how that influences convergence. I hypothesize that distributed algorithms where the parallelism is over the data (rather than the model), as is done here at the node level, will benefit a lot more from complicated optimization techniques than from SGD (even in its asynchronous version). It feels to me that with large models there is a high price to pay for sending the gradients over the network (case in point, n_sync is usually set to something higher than 1). We want to use an algorithm for which each step is itself expensive (and hence we have to send fewer gradients over the network) but that needs far fewer steps to converge. You can make each minibatch SGD step arbitrarily expensive by increasing the minibatch size, though SGD is fairly inefficient at utilizing these large minibatches. I believe that distributing computation over data for deep models makes a lot more sense with algorithms such as second-order methods or variants of natural gradient.
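To make the role of n_sync in this discussion concrete, here is a toy, single-process sketch of one data-parallel worker that keeps taking local SGD steps and only exchanges its accumulated gradient with a parameter server every n_sync minibatches. A linear least-squares model stands in for the convnet, and all names (ParamServer, push_and_pull) are illustrative placeholders rather than the authors' implementation.

```python
import numpy as np

class ParamServer:
    """Toy stand-in for the central parameter store in A-SGD."""
    def __init__(self, dim):
        self.params = np.zeros(dim)

    def push_and_pull(self, grad, lr):
        self.params -= lr * grad        # apply the worker's accumulated gradient
        return self.params.copy()       # hand back fresh global parameters

def worker(server, data, labels, lr=0.01, n_sync=100, batch=128, steps=1000):
    w = server.params.copy()            # local copy of the parameters
    acc = np.zeros_like(w)
    for step in range(1, steps + 1):
        idx = np.random.choice(len(data), batch)
        x, y = data[idx], labels[idx]
        grad = x.T @ (x @ w - y) / batch  # gradient of the mean squared-error loss
        w -= lr * grad                    # keep making local SGD progress
        acc += grad
        if step % n_sync == 0:            # communicate only every n_sync minibatches
            w = server.push_and_pull(acc, lr)
            acc[:] = 0.0
    return w
```

Larger n_sync values cut communication cost but let each worker's parameters drift further from the global copy, which is exactly the staleness/traffic trade-off the reviewer is pointing at.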
kSHxSr1TPt8XB
GPU Asynchronous Stochastic Gradient Descent to Speed Up Neural Network Training
[ "Hailin Jin", "Thomas Huang", "Zhe Lin", "Jianchao Yang", "Thomas Paine" ]
The ability to train large-scale neural networks has resulted in state-of-the-art performance in many areas of computer vision. These results have largely come from computational breakthroughs of two forms: model parallelism, e.g. GPU-accelerated training, which has seen quick adoption in computer vision circles, and data parallelism, e.g. A-SGD, which has been used at large scale mostly in industry. We report early experiments with a system that makes use of both model parallelism and data parallelism, which we call GPU A-SGD. We show that using GPU A-SGD it is possible to speed up training of large convolutional neural networks useful for computer vision. We believe GPU A-SGD will make it possible to train larger networks on larger training sets in a reasonable amount of time.
[ "gpu", "neural network", "computer vision", "training", "data parallelism", "possible", "ability", "neural networks" ]
https://openreview.net/pdf?id=kSHxSr1TPt8XB
https://openreview.net/forum?id=kSHxSr1TPt8XB
kk4_Fauz_DkzE
review
1,390,287,000,000
kSHxSr1TPt8XB
[ "everyone" ]
[ "Thomas Paine" ]
ICLR.cc/2014/workshop
2014
review: Hello reviewers, We would like to bring your attention to a similar paper submitted to this ICLR workshop track: Title: Multi-GPU Training of ConvNets Link: http://openreview.net/document/bbc93764-4f15-4ba5-b092-86dc80b727c7#bbc93764-4f15-4ba5-b092-86dc80b727c7 Both papers explore using many GPUs for training convnets using an A-SGD framework. In theirs, they try using 2 GPUs on one machine for model parallelization (similar to Alex Krizhevsky's NIPS 2012 paper), as well as 2 and 4 nodes for data parallelization (A-SGD). In ours, a single GPU is used for model parallelization, but many nodes are used for data parallelization (A-SGD). The A-SGD methods are similar, and our method is compatible with the model parallelization they use. Our work has additional experiments that explore how to tune A-SGD to get the best performance with GPUs, and how this scales to as many as 32 GPUs. We bring this up because one of their reviewers has recommended their paper for the Conference track, though they submitted to the workshop track. Since the papers have a lot of overlap, we think it would be best to compare them on the same footing. Best, Tom
kSHxSr1TPt8XB
GPU Asynchronous Stochastic Gradient Descent to Speed Up Neural Network Training
[ "Hailin Jin", "Thomas Huang", "Zhe Lin", "Jianchao Yang", "Thomas Paine" ]
The ability to train large-scale neural networks has resulted in state-of-the-art performance in many areas of computer vision. These results have largely come from computational breakthroughs of two forms: model parallelism, e.g. GPU-accelerated training, which has seen quick adoption in computer vision circles, and data parallelism, e.g. A-SGD, which has been used at large scale mostly in industry. We report early experiments with a system that makes use of both model parallelism and data parallelism, which we call GPU A-SGD. We show that using GPU A-SGD it is possible to speed up training of large convolutional neural networks useful for computer vision. We believe GPU A-SGD will make it possible to train larger networks on larger training sets in a reasonable amount of time.
[ "gpu", "neural network", "computer vision", "training", "data parallelism", "possible", "ability", "neural networks" ]
https://openreview.net/pdf?id=kSHxSr1TPt8XB
https://openreview.net/forum?id=kSHxSr1TPt8XB
R1DX12tB5P1bg
review
1,390,286,940,000
kSHxSr1TPt8XB
[ "everyone" ]
[ "Thomas Paine" ]
ICLR.cc/2014/workshop
2014
review: Hello reviewers, We would like to bring your attention to a similar paper submitted to this ICLR workshop track: Title: Multi-GPU Training of ConvNets Link: http://openreview.net/document/bbc93764-4f15-4ba5-b092-86dc80b727c7#bbc93764-4f15-4ba5-b092-86dc80b727c7 Both papers explore using many GPUs for training convnets using an A-SGD framework. In theirs, they try using 2 GPUs on one machine for model parallelization (similar to Alex Krizhevsky's NIPS 2012 paper), as well as 2 and 4 nodes for data parallelization (A-SGD). In ours, a single GPU is used for model parallelization, but many nodes are used for data parallelization (A-SGD). The A-SGD methods are similar, and our method is compatible with the model parallelization they use. Our work has additional experiments that explore how to tune A-SGD to get the best performance with GPUs, and how this scales to as many as 32 GPUs. We bring this up because one of their reviewers has recommended their paper for the Conference track, though they submitted to the workshop track. Since the papers have a lot of overlap, we think it would be best to compare them on the same footing. Best, Tom
kSHxSr1TPt8XB
GPU Asynchronous Stochastic Gradient Descent to Speed Up Neural Network Training
[ "Hailin Jin", "Thomas Huang", "Zhe Lin", "Jianchao Yang", "Thomas Paine" ]
The ability to train large-scale neural networks has resulted in state-of-the-art performance in many areas of computer vision. These results have largely come from computational breakthroughs of two forms: model parallelism, e.g. GPU-accelerated training, which has seen quick adoption in computer vision circles, and data parallelism, e.g. A-SGD, which has been used at large scale mostly in industry. We report early experiments with a system that makes use of both model parallelism and data parallelism, which we call GPU A-SGD. We show that using GPU A-SGD it is possible to speed up training of large convolutional neural networks useful for computer vision. We believe GPU A-SGD will make it possible to train larger networks on larger training sets in a reasonable amount of time.
[ "gpu", "neural network", "computer vision", "training", "data parallelism", "possible", "ability", "neural networks" ]
https://openreview.net/pdf?id=kSHxSr1TPt8XB
https://openreview.net/forum?id=kSHxSr1TPt8XB
BUh4cSvQWDBQi
review
1,390,867,200,000
kSHxSr1TPt8XB
[ "everyone" ]
[ "Marc'Aurelio Ranzato" ]
ICLR.cc/2014/workshop
2014
review: In general, I think it would make more sense to report test and training errors (y-axis) versus time (x-axis). This is what we are interested in when we try to speed up convergence, not how many weight updates or samples we process. Since all your experiments use the same kind of GPU, the comparison is fair. Questions: a) have you tried to synchronize even more frequently (n_sync=1/10/50)? b) is every node on a different server? If not, do you leverage the fact that communication can be less costly when boards are on the same server? Thank you.
kSHxSr1TPt8XB
GPU Asynchronous Stochastic Gradient Descent to Speed Up Neural Network Training
[ "Hailin Jin", "Thomas Huang", "Zhe Lin", "Jianchao Yang", "Thomas Paine" ]
The ability to train large-scale neural networks has resulted in state-of-the-art performance in many areas of computer vision. These results have largely come from computational breakthroughs of two forms: model parallelism, e.g. GPU-accelerated training, which has seen quick adoption in computer vision circles, and data parallelism, e.g. A-SGD, which has been used at large scale mostly in industry. We report early experiments with a system that makes use of both model parallelism and data parallelism, which we call GPU A-SGD. We show that using GPU A-SGD it is possible to speed up training of large convolutional neural networks useful for computer vision. We believe GPU A-SGD will make it possible to train larger networks on larger training sets in a reasonable amount of time.
[ "gpu", "neural network", "computer vision", "training", "data parallelism", "possible", "ability", "neural networks" ]
https://openreview.net/pdf?id=kSHxSr1TPt8XB
https://openreview.net/forum?id=kSHxSr1TPt8XB
EnLfESm5kXnRD
review
1,392,928,680,000
kSHxSr1TPt8XB
[ "everyone" ]
[ "Thomas Paine" ]
ICLR.cc/2014/workshop
2014
review: We would like to thank the reviewers for their comments. To Anonymous 4f82: Thank you for the comments. All your points are good ones. Exploring the effect of model size, and of minibatch size on performance, is important, and we will look into this in future work. We also agree that second-order methods could be a great help here. To Anonymous 6693: We agree that our work builds directly on recent developments in high-performance neural network training. We would like to emphasize that our contribution is exploring the benefits of combining these approaches and making the results available to the community. To date, no group has published results combining GPUs and distributed computing with neural networks of this scale, and we think overall this is a very promising direction. Thank you.
X4tT4azdE1XkU
Unsupervised Feature Learning by Deep Sparse Coding
[ "Yunlong He", "Arthur Szlam", "Yanjun Qi", "Yun Wang", "Koray Kavukcuoglu" ]
In this paper, we propose a new unsupervised feature learning framework, namely Deep Sparse Coding (DeepSC), that extends sparse coding to a multi-layer architecture for visual object recognition tasks. The main innovation of the framework is that it connects the sparse-encoders from different layers by a sparse-to-dense module. The sparse-to-dense module is a composition of a local spatial pooling step and a low-dimensional embedding process, which takes advantage of the spatial smoothness information in the image. As a result, the new method is able to learn several levels of sparse representation of the image which capture features at a variety of abstraction levels and simultaneously preserve the spatial smoothness between the neighboring image patches. Combining the feature representations from multiple layers, DeepSC achieves the state-of-the-art performance on multiple object recognition tasks.
[ "unsupervised feature learning", "deep sparse coding", "framework", "deepsc", "module", "image", "deep sparse", "new unsupervised feature", "sparse", "architecture" ]
https://openreview.net/pdf?id=X4tT4azdE1XkU
https://openreview.net/forum?id=X4tT4azdE1XkU
ttigSsrO9_tD7
review
1,390,861,140,000
X4tT4azdE1XkU
[ "everyone" ]
[ "anonymous reviewer 6331" ]
ICLR.cc/2014/workshop
2014
title: review of Unsupervised Feature Learning by Deep Sparse Coding review: The paper presents a cascaded architecture that successively transforms an input representation into a sparse code, and then transforms the sparse code into a compact dense representation, making sure adjacent patches get similar dense representations. The resulting representations are passed through a spatial pyramid pooling mechanism and concatenated, and then fed to a linear SVM to learn to classify images. This architecture achieves results better than or similar to other sparse coding approaches on 3 small image datasets. The paper reads quite well (the first introductory sections are a really nice summary of past work in the sparse coding and dimensionality reduction domains). I like the idea of making sure that two sparse codes encoding overlapping regions should have a similar dense code. That said, I wonder why we need to use the intermediate representations as input to the final SVM, and not just the last layer. I found section 3.3 less interesting as it was an obvious result (to me at least), and section 3.4 daunting as it meant two more hyper-parameters to tune in our lives... Overall, I liked the paper and would have liked to see results on one larger image dataset: the Caltech-xxx datasets are quite outdated, and I'm worried the results would not scale to larger and more recent datasets.
X4tT4azdE1XkU
Unsupervised Feature Learning by Deep Sparse Coding
[ "Yunlong He", "Arthur Szlam", "Yanjun Qi", "Yun Wang", "Koray Kavukcuoglu" ]
In this paper, we propose a new unsupervised feature learning framework, namely Deep Sparse Coding (DeepSC), that extends sparse coding to a multi-layer architecture for visual object recognition tasks. The main innovation of the framework is that it connects the sparse-encoders from different layers by a sparse-to-dense module. The sparse-to-dense module is a composition of a local spatial pooling step and a low-dimensional embedding process, which takes advantage of the spatial smoothness information in the image. As a result, the new method is able to learn several levels of sparse representation of the image which capture features at a variety of abstraction levels and simultaneously preserve the spatial smoothness between the neighboring image patches. Combining the feature representations from multiple layers, DeepSC achieves the state-of-the-art performance on multiple object recognition tasks.
[ "unsupervised feature learning", "deep sparse coding", "framework", "deepsc", "module", "image", "deep sparse", "new unsupervised feature", "sparse", "architecture" ]
https://openreview.net/pdf?id=X4tT4azdE1XkU
https://openreview.net/forum?id=X4tT4azdE1XkU
lcBTcK8gLGc6X
review
1,391,824,620,000
X4tT4azdE1XkU
[ "everyone" ]
[ "anonymous reviewer 1704" ]
ICLR.cc/2014/workshop
2014
title: review of Unsupervised Feature Learning by Deep Sparse Coding review: This paper alternates sparse-to-dense (dimensionality reduction by learning an invariant mapping: DRLIM) and dense-to-sparse (standard sparse coding) modules to produce a multi-layer representation of images. Compared to earlier deep image recognition architectures using sparse modules and pooling modules, the proposed system is more general and displays better performance on image recognition benchmarks. The paper itself is clear and well-motivated. While none of the building blocks is new (DRLIM blocks, sparse coding blocks, etc.), the combination is novel and works well. Experiments on Caltech 101, Caltech 256, and Scenes 15 show that the new architecture performs better than earlier versions without this sparse-to-dense mapping. These are a bit limited now that better datasets are widely used (e.g., ImageNet), but the experiments are interesting and show that the new system allows for deeper training without increasing the dimension of the dictionary used for sparse coding. I think other researchers would be interested in these results. There are a few writing problems (e.g. 'it is important to emphasis' instead of 'emphasize'), so please spell check, but this does not hinder understanding. Also, it would be simpler to write 'First' rather than 'first of all', as is done in several places. Overall this paper presents a more principled variant of a deep image recognition architecture. The datasets used are limited but show that this system has promise.
X4tT4azdE1XkU
Unsupervised Feature Learning by Deep Sparse Coding
[ "Yunlong He", "Arthur Szlam", "Yanjun Qi", "Yun Wang", "Koray Kavukcuoglu" ]
In this paper, we propose a new unsupervised feature learning framework, namely Deep Sparse Coding (DeepSC), that extends sparse coding to a multi-layer architecture for visual object recognition tasks. The main innovation of the framework is that it connects the sparse-encoders from different layers by a sparse-to-dense module. The sparse-to-dense module is a composition of a local spatial pooling step and a low-dimensional embedding process, which takes advantage of the spatial smoothness information in the image. As a result, the new method is able to learn several levels of sparse representation of the image which capture features at a variety of abstraction levels and simultaneously preserve the spatial smoothness between the neighboring image patches. Combining the feature representations from multiple layers, DeepSC achieves the state-of-the-art performance on multiple object recognition tasks.
[ "unsupervised feature learning", "deep sparse coding", "framework", "deepsc", "module", "image", "deep sparse", "new unsupervised feature", "sparse", "architecture" ]
https://openreview.net/pdf?id=X4tT4azdE1XkU
https://openreview.net/forum?id=X4tT4azdE1XkU
PP0zWn2N3DPOl
review
1,391,824,440,000
X4tT4azdE1XkU
[ "everyone" ]
[ "anonymous reviewer 1704" ]
ICLR.cc/2014/workshop
2014
title: review of Unsupervised Feature Learning by Deep Sparse Coding review: This paper alternates sparse-to-dense (dimensionality reduction by learning an invariant mapping: DRLIM) and dense-to-sparse (standard sparse coding) modules to produce a multi-layer representation of images. Compared to earlier deep image recognition architectures using sparse modules and pooling modules, the proposed system is more general and displays better performance on image recognition benchmarks. The paper itself is clear and well-motivated. While none of the building blocks is new (DRLIM blocks, sparse coding blocks, etc.), the combination is novel and works well. Experiments on Caltech 101, Caltech 256, and Scenes 15 show that the new architecture performs better than earlier versions without this sparse-to-dense mapping. These are a bit limited now that better datasets are widely used (e.g., ImageNet), but the experiments are interesting and show that the new system allows for deeper training without increasing the dimension of the dictionary used for sparse coding. I think other researchers would be interested in these results. There are a few writing problems (e.g. 'it is important to emphasis' instead of 'emphasize'), so please spell check, but this does not hinder understanding. Also, it would be simpler to write 'First' rather than 'first of all', as is done in several places. Overall this paper presents a more principled variant of a deep image recognition architecture. The datasets used are limited but show that this system has promise.
X4tT4azdE1XkU
Unsupervised Feature Learning by Deep Sparse Coding
[ "Yunlong He", "Arthur Szlam", "Yanjun Qi", "Yun Wang", "Koray Kavukcuoglu" ]
In this paper, we propose a new unsupervised feature learning framework, namely Deep Sparse Coding (DeepSC), that extends sparse coding to a multi-layer architecture for visual object recognition tasks. The main innovation of the framework is that it connects the sparse-encoders from different layers by a sparse-to-dense module. The sparse-to-dense module is a composition of a local spatial pooling step and a low-dimensional embedding process, which takes advantage of the spatial smoothness information in the image. As a result, the new method is able to learn several levels of sparse representation of the image which capture features at a variety of abstraction levels and simultaneously preserve the spatial smoothness between the neighboring image patches. Combining the feature representations from multiple layers, DeepSC achieves the state-of-the-art performance on multiple object recognition tasks.
[ "unsupervised feature learning", "deep sparse coding", "framework", "deepsc", "module", "image", "deep sparse", "new unsupervised feature", "sparse", "architecture" ]
https://openreview.net/pdf?id=X4tT4azdE1XkU
https://openreview.net/forum?id=X4tT4azdE1XkU
kP1cPmbAG4825
review
1,391,824,500,000
X4tT4azdE1XkU
[ "everyone" ]
[ "anonymous reviewer 1704" ]
ICLR.cc/2014/workshop
2014
title: review of Unsupervised Feature Learning by Deep Sparse Coding review: This paper alternates sparse-to-dense (dimensionality reduction by learning an invariant mapping: DRLIM) and dense-to-sparse (standard sparse coding) modules to produce a multi-layer representation of images. Compared to earlier deep image recognition architectures using sparse modules and pooling modules, the proposed system is more general and displays better performance on image recognition benchmarks. The paper itself is clear and well-motivated. While none of the building blocks is new (DRLIM blocks, sparse coding blocks, etc.), the combination is novel and works well. Experiments on Caltech 101, Caltech 256, and Scenes 15 show that the new architecture performs better than earlier versions without this sparse-to-dense mapping. These are a bit limited now that better datasets are widely used (e.g., ImageNet), but the experiments are interesting and show that the new system allows for deeper training without increasing the dimension of the dictionary used for sparse coding. I think other researchers would be interested in these results. There are a few writing problems (e.g. 'it is important to emphasis' instead of 'emphasize'), so please spell check, but this does not hinder understanding. Also, it would be simpler to write 'First' rather than 'first of all', as is done in several places. Overall this paper presents a more principled variant of a deep image recognition architecture. The datasets used are limited but show that this system has promise.
X4tT4azdE1XkU
Unsupervised Feature Learning by Deep Sparse Coding
[ "Yunlong He", "Arthur Szlam", "Yanjun Qi", "Yun Wang", "Koray Kavukcuoglu" ]
In this paper, we propose a new unsupervised feature learning framework, namely Deep Sparse Coding (DeepSC), that extends sparse coding to a multi-layer architecture for visual object recognition tasks. The main innovation of the framework is that it connects the sparse-encoders from different layers by a sparse-to-dense module. The sparse-to-dense module is a composition of a local spatial pooling step and a low-dimensional embedding process, which takes advantage of the spatial smoothness information in the image. As a result, the new method is able to learn several levels of sparse representation of the image which capture features at a variety of abstraction levels and simultaneously preserve the spatial smoothness between the neighboring image patches. Combining the feature representations from multiple layers, DeepSC achieves the state-of-the-art performance on multiple object recognition tasks.
[ "unsupervised feature learning", "deep sparse coding", "framework", "deepsc", "module", "image", "deep sparse", "new unsupervised feature", "sparse", "architecture" ]
https://openreview.net/pdf?id=X4tT4azdE1XkU
https://openreview.net/forum?id=X4tT4azdE1XkU
LSHWxogcsnSVZ
review
1,391,824,620,000
X4tT4azdE1XkU
[ "everyone" ]
[ "anonymous reviewer 1704" ]
ICLR.cc/2014/workshop
2014
title: review of Unsupervised Feature Learning by Deep Sparse Coding review: This paper alternates sparse-to-dense (dimensionality reduction by learning an invariant mapping: DRLIM) and dense-to-sparse (standard sparse coding) modules to produce a multi-layer representation of images. Compared to earlier deep image recognition architectures using sparse modules and pooling modules, the proposed system is more general and displays better performance on image recognition benchmarks. The paper itself is clear and well-motivated. While none of the building blocks is new (DRLIM blocks, sparse coding blocks, etc), the combination is novel and works well. Experiments on Caltech 101, Caltech 256, and Scenes 15 show that the new architecture performs better than earlier versions without this sparse-to-dense mapping. These are a bit limited now that better datasets are widely used (e.g., ImageNet), but the experiments are interesting and show that the new system allows for deeper training without increasing the dimension of the dictionary used for sparse coding. I think other researchers would be interested in these results. There are a few writing problems (e.g. 'it is important to emphasis' instead of 'emphasize') so please spell check, but this does not hinder understanding. Also, it would be simpler to write 'First' rather than 'first of all' as is done in several places. Overall this paper presents a more principled variant for a deep image recognition architecture. The datasets used are limited but show that this system has promise.
Hy_7-edzrEHx9
Relaxations for inference in restricted Boltzmann machines
[ "Sida I. Wang", "Roy Frostig", "Percy Liang", "Christopher D. Manning" ]
We propose a relaxation-based approximate inference algorithm that samples near-MAP configurations of a binary pairwise Markov random field. We experiment on MAP inference tasks in several restricted Boltzmann machines. We also use our underlying sampler to estimate the log-partition function of restricted Boltzmann machines and compare against other sampling-based methods.
[ "inference", "restricted boltzmann machines", "relaxations", "approximate inference algorithm", "configurations", "map inference tasks", "sampler" ]
https://openreview.net/pdf?id=Hy_7-edzrEHx9
https://openreview.net/forum?id=Hy_7-edzrEHx9
mL6rEPWYehLYQ
review
1,392,869,340,000
Hy_7-edzrEHx9
[ "everyone" ]
[ "Sida Wang" ]
ICLR.cc/2014/workshop
2014
review: My reviewer response. We thank the reviewers for the comments and questions. Issues on using rrr to estimate the partition function: we would agree with both reviewers that rrr is not necessarily good at estimating partition functions, unless the partition function is dominated by the MAP states. In the bipartite RBM case, this requirement is less restrictive since the partition function only needs to be dominated by a MAP visible state summed over the hidden units, or a MAP hidden state summed over the visible units. We thought it was appealing to try rrr for this task since rrr gives us a distribution as well. However, we would only recommend its use for partition function estimation in these specific cases. This should be clarified further in a revision. ***Reviewer 1 questions*** > Figure 1... Is there any particular reason for this? Also, what would the result be if the Gibbs sampling were performed at different temperatures? One hypothesis is that the RBM with learned weights has negative weights in expectation. So any random deviation from the optimum tends to have worse likelihood, causing the more spread-out distribution. In the random case, the mean weight is 0, and random deviations only cause variance. However, we do not understand the rounding distribution very well. It could be a hard problem since complexity theory rules out strong generic lower bounds on the variance of the rounding distribution. At lower temperatures, Gibbs tends to sample more near the MAP, provided that it still mixes. This is why we would compare to annealed Gibbs in our MAP-finding exercise. > Since the optimization problem itself is non-convex, it would seem that the results would also depend on the quality of the solution to the optimization problem. Is there much variance between optimization runs? What about the effect of the rank of X? We'd like to note that the full SDP is convex, and can be solved for problems of this size (we have tried that as well, with no performance difference relative to the local low-rank solution). There is much literature with theoretical and empirical evidence supporting that low-rank solutions here are suitable and stable. Empirically there is very little variance between runs, and using a higher rank quickly leads to diminished returns. > Compare to Gurobi on small instances, graph-cut cases rrr usually solves small instances exactly as well. This comparison is a helpful one which we neglected here. Comparing to exact methods in the sub-modular case can also be very helpful, which we also neglected. Thanks for the suggestion. ***Reviewer 2 questions*** > It would be very interesting to try and estimate the partition function of an RBM with many more modes, and to compare it with other methods (such as AIS). In some later work, we tried a DBM trained on MNIST. Can the reviewer point us to the example with many more modes? We compared to AIS ourselves; the result is that with enough time budget, AIS does better, and there exist time budgets under which rrr does better. However, rrr fundamentally does not give an unbiased estimate of the log-partition function, and this comparison was omitted. > The approximation of (14) can be quite bad if p_X is very different from the RBM's distribution: indeed; and unlike actual, asymptotically unbiased methods of estimating the partition function, rrr is fundamentally a MAP-finding method that can only heuristically estimate the partition function. As figure 1 shows, samples from p_X can indeed be very different. > other energy based models, future works?
In current work, we have tried rrr for MRF inference on other models, among them DBNs, and we give some theoretical analysis. Thanks for the contrastive backprop suggestion.
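The point above about the bipartite RBM relies on the fact that, for a binary RBM, summing over the hidden units given a visible configuration has a closed form (the negative free energy). A small numpy sketch of that standard identity, checked by brute force on a tiny model, is below; it is not the authors' rrr code, and the model sizes are arbitrary.

```python
import numpy as np
from itertools import product

def rbm_free_energy(v, W, b_vis, b_hid):
    """Free energy F(v) = -log sum_h exp(-E(v, h)) for a binary RBM with
    energy E(v, h) = -b_vis.v - b_hid.h - v.W.h (hidden units summed out analytically)."""
    return -(v @ b_vis) - np.sum(np.logaddexp(0.0, b_hid + v @ W))

rng = np.random.default_rng(0)
n_vis, n_hid = 6, 4                                   # tiny RBM so brute force is feasible
W = 0.5 * rng.standard_normal((n_vis, n_hid))
b_vis = 0.1 * rng.standard_normal(n_vis)
b_hid = 0.1 * rng.standard_normal(n_hid)

# Exact log Z by enumerating visible states: log Z = logsumexp_v [-F(v)].
log_unnorm = np.array([-rbm_free_energy(np.array(v, dtype=float), W, b_vis, b_hid)
                       for v in product([0, 1], repeat=n_vis)])
m = log_unnorm.max()
log_Z = m + np.log(np.exp(log_unnorm - m).sum())
# In the regime discussed above (partition function dominated by a near-MAP visible
# state), -F(v_MAP) = log_unnorm.max() is already close to log Z from below.
print(log_Z, log_unnorm.max())
```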
Hy_7-edzrEHx9
Relaxations for inference in restricted Boltzmann machines
[ "Sida I. Wang", "Roy Frostig", "Percy Liang", "Christopher D. Manning" ]
We propose a relaxation-based approximate inference algorithm that samples near-MAP configurations of a binary pairwise Markov random field. We experiment on MAP inference tasks in several restricted Boltzmann machines. We also use our underlying sampler to estimate the log-partition function of restricted Boltzmann machines and compare against other sampling-based methods.
[ "inference", "restricted boltzmann machines", "relaxations", "approximate inference algorithm", "configurations", "map inference tasks", "sampler" ]
https://openreview.net/pdf?id=Hy_7-edzrEHx9
https://openreview.net/forum?id=Hy_7-edzrEHx9
eeNW4HiDfE4DT
review
1,391,848,560,000
Hy_7-edzrEHx9
[ "everyone" ]
[ "anonymous reviewer caba" ]
ICLR.cc/2014/workshop
2014
title: review of Relaxations for inference in restricted Boltzmann machines review: This paper introduces an approach to finding near-MAP solutions in binary Markov random fields. The proposed technique is based on an SDP relaxation that is re-parameterized and solved using constrained gradient-based methods. The final step involves projecting the solution using a random unit-length vector and then rounding the resulting entries to the vertices of a hypercube. This stochastic process defines a sampler that empirically produces lower-energy configurations than Gibbs sampling. The method is simple and seems to perform well for approximate MAP estimation, although it is not clear whether this approach will be useful for estimating the partition function. I liked the result in Figure 1, although the entropy of the rrr-MAP method is much higher in the learned RBM than the one with random weights. Is there any particular reason for this? Also, what would the result be if the Gibbs sampling were performed at different temperatures? Since the optimization problem itself is non-convex, it would seem that the results would also depend on the quality of the solution to the optimization problem. Is there much variance between optimization runs? What about the effect of the rank of X? I think that the results should also be compared on a small RBM where the exact MAP solution can be found in a reasonable amount of time by Gurobi. Perhaps a good test would be on RBMs (or general binary MRFs) with non-negative edge weights. These are submodular and can therefore be globally optimized efficiently. This would serve as a good basis for comparison to other local-search methods.
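For readers unfamiliar with the final step described here, a hedged sketch of random-projection rounding is given below, assuming a low-rank factor X with unit-norm rows has already been obtained from the relaxation (the constrained optimization itself is omitted). The energy parametrization and variable names are mine, not the paper's.

```python
import numpy as np

def round_relaxation(X, J, h, n_samples=100, rng=None):
    """Randomized rounding of a low-rank relaxation: project the rows of X onto a random
    direction, take signs, and keep the +/-1 configuration with the lowest energy
    E(s) = -s.J.s - h.s (a generic binary pairwise MRF in +/-1 variables).
    Each rounding is one 'sample'; in practice the linear term is usually folded into J
    via an auxiliary variable, which is omitted here for brevity."""
    if rng is None:
        rng = np.random.default_rng()
    best_s, best_e = None, np.inf
    for _ in range(n_samples):
        g = rng.standard_normal(X.shape[1])   # random direction (its length does not affect the signs)
        s = np.sign(X @ g)
        s[s == 0] = 1.0
        e = -s @ J @ s - h @ s
        if e < best_e:
            best_s, best_e = s, e
    return best_s, best_e

rng = np.random.default_rng(0)
n, k = 20, 5                                   # 20 binary variables, rank-5 factor
J = rng.standard_normal((n, n)); J = (J + J.T) / 2; np.fill_diagonal(J, 0.0)
h = rng.standard_normal(n)
X = rng.standard_normal((n, k))
X /= np.linalg.norm(X, axis=1, keepdims=True)  # pretend this came from the solved relaxation
s, e = round_relaxation(X, J, h, rng=rng)
print(e, s)
```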
Hy_7-edzrEHx9
Relaxations for inference in restricted Boltzmann machines
[ "Sida I. Wang", "Roy Frostig", "Percy Liang", "Christopher D. Manning" ]
We propose a relaxation-based approximate inference algorithm that samples near-MAP configurations of a binary pairwise Markov random field. We experiment on MAP inference tasks in several restricted Boltzmann machines. We also use our underlying sampler to estimate the log-partition function of restricted Boltzmann machines and compare against other sampling-based methods.
[ "inference", "restricted boltzmann machines", "relaxations", "approximate inference algorithm", "configurations", "map inference tasks", "sampler" ]
https://openreview.net/pdf?id=Hy_7-edzrEHx9
https://openreview.net/forum?id=Hy_7-edzrEHx9
JM28f-ItaBfNF
review
1,391,904,120,000
Hy_7-edzrEHx9
[ "everyone" ]
[ "anonymous reviewer e306" ]
ICLR.cc/2014/workshop
2014
title: review of Relaxations for inference in restricted Boltzmann machines review: The paper introduces a gradient procedure for map estimation in MRFs and RBMs, which can also be used to draw approximate samples and therefore estimate partition functions. Pros: the method is very novel in the context of RBMs, and it seems to work quite well, beating Gibbs-sampling almost every time. It is also useful for estimating partition functions. Cons: the method was able to correctly estimate the partition function of an MNIST RBM well. But MNIST has few modes. It would be very interesting to try and estimate the partition function of an RBM with many more modes, and to compare it with other methods (such as AIS). The approximation of (14) can be quite bad if p_X is very different from the RBM's distribution. I wonder if this method can be applied to general energy-based models, like the ones used in contrastive backpropagation.
zze5zJIRq7lRt
Multi-GPU Training of ConvNets
[ "Omry Yadan", "Keith Adams", "Yaniv Taigman", "Marc'Aurelio Ranzato" ]
In this work we evaluate different approaches to parallelize computation of convolutional neural networks across several GPUs.
[ "training", "convnets", "work", "different approaches", "computation", "convolutional neural networks", "several gpus" ]
https://openreview.net/pdf?id=zze5zJIRq7lRt
https://openreview.net/forum?id=zze5zJIRq7lRt
PPkvPCYirqPUb
review
1,391,030,520,000
zze5zJIRq7lRt
[ "everyone" ]
[ "Marc'Aurelio Ranzato" ]
ICLR.cc/2014/workshop
2014
review: Thank you, Tom. The main difference between this work and yours is that our data parallelism framework is synchronous (i.e., we use SGD not A-SGD). Also, all our experiments refer to a set up where all the GPU boards reside in the same server. In the future, we will extend this work to multiple servers and A-SGD.
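To illustrate the distinction being drawn (synchronous SGD with gradients averaged across GPU shards, rather than asynchronous updates), here is a toy numpy sketch that simulates four 'GPUs' on a logistic-regression problem. It is a simplification for exposition, not the authors' multi-GPU ConvNet code.

```python
import numpy as np

def grad_logreg(w, X, y):
    """Gradient of the average logistic loss for a linear model; stands in for a ConvNet's backward pass."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return X.T @ (p - y) / len(y)

def synchronous_sgd_step(w, shards, lr=0.1):
    """Synchronous data parallelism: each 'GPU' computes the gradient on its shard of the
    mini-batch, the gradients are averaged (an all-reduce), and one SGD step is applied.
    With equal-sized shards this is identical to single-device SGD on the full mini-batch."""
    grads = [grad_logreg(w, Xs, ys) for Xs, ys in shards]   # would run in parallel, one per GPU
    return w - lr * np.mean(grads, axis=0)

rng = np.random.default_rng(0)
X = rng.standard_normal((256, 10))
w_true = rng.standard_normal(10)
y = (X @ w_true > 0).astype(float)

w = np.zeros(10)
for _ in range(200):
    idx = rng.permutation(256)[:64]                              # one mini-batch of 64 examples ...
    shards = [(X[idx[i::4]], y[idx[i::4]]) for i in range(4)]    # ... split across 4 simulated GPUs
    w = synchronous_sgd_step(w, shards)
print("training accuracy:", np.mean((X @ w > 0) == (y > 0.5)))
```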
zze5zJIRq7lRt
Multi-GPU Training of ConvNets
[ "Omry Yadan", "Keith Adams", "Yaniv Taigman", "Marc'Aurelio Ranzato" ]
In this work we evaluate different approaches to parallelize computation of convolutional neural networks across several GPUs.
[ "training", "convnets", "work", "different approaches", "computation", "convolutional neural networks", "several gpus" ]
https://openreview.net/pdf?id=zze5zJIRq7lRt
https://openreview.net/forum?id=zze5zJIRq7lRt
22eX5RjNOqpg1
review
1,389,837,600,000
zze5zJIRq7lRt
[ "everyone" ]
[ "anonymous reviewer 3960" ]
ICLR.cc/2014/workshop
2014
title: review of Multi-GPU Training of ConvNets review: The paper is about various ways of training convolutional neural networks (CNNs) using multiple GPUs attached to the same machine. I think it is sufficiently interesting for the conference track. The authors may not be aware of all relevant prior work, but they can fix this easily. I think the paper should definitely be accepted because Facebook is growing in this area right now, and conference-goers will be wanting to talk to the presenters about what's going on there and what opportunities there are. There are a couple of papers I think the authors should be aware of; the titles are 'Asynchronous stochastic gradient descent for DNN training' 'Pipelined Back-Propagation for Context-Dependent Deep Neural Networks' Also I know that Andrew Ng's group was doing some work on model parallelism for CNNs. Andrew Maas (Andrew Maas <[email protected]>) would be able to tell you who it was and forward any relevant presentations.
zze5zJIRq7lRt
Multi-GPU Training of ConvNets
[ "Omry Yadan", "Keith Adams", "Yaniv Taigman", "Marc'Aurelio Ranzato" ]
In this work we evaluate different approaches to parallelize computation of convolutional neural networks across several GPUs.
[ "training", "convnets", "work", "different approaches", "computation", "convolutional neural networks", "several gpus" ]
https://openreview.net/pdf?id=zze5zJIRq7lRt
https://openreview.net/forum?id=zze5zJIRq7lRt
n_6j_UpOmw_Od
review
1,390,287,120,000
zze5zJIRq7lRt
[ "everyone" ]
[ "Thomas Paine" ]
ICLR.cc/2014/workshop
2014
review: Hello reviewers, We would like to bring to your attention a similar paper my colleagues and I submitted to this ICLR workshop track: Title: GPU Asynchronous Stochastic Gradient Descent to Speed Up Neural Network Training Link: http://openreview.net/document/a4a87af0-ce63-450d-9d4b-41cfb0390667#a4a87af0-ce63-450d-9d4b-41cfb0390667 Both papers explore using many GPUs for training convnets using an ASGD framework. In this paper, they try using 2 GPUs on one machine for model parallelization (similar to Alex Krizhevsky's NIPS 2012 paper), as well as 2 and 4 nodes for data parallelization (ASGD). In ours, a single GPU is used for model parallelization, but many nodes are used for data parallelization (ASGD). The ASGD methods are similar and our method is compatible with the model parallelization they use. Our work has additional experiments that explore how to tune ASGD to get the best performance with GPUs, and how this scales to as many as 32 GPUs. We bring this up because a reviewer has recommended their paper for the Conference track, though they submitted to the workshop track. Since the papers have a lot of overlap, we think it would be best to compare them on the same footing. Best, Tom
zze5zJIRq7lRt
Multi-GPU Training of ConvNets
[ "Omry Yadan", "Keith Adams", "Yaniv Taigman", "Marc'Aurelio Ranzato" ]
In this work we evaluate different approaches to parallelize computation of convolutional neural networks across several GPUs.
[ "training", "convnets", "work", "different approaches", "computation", "convolutional neural networks", "several gpus" ]
https://openreview.net/pdf?id=zze5zJIRq7lRt
https://openreview.net/forum?id=zze5zJIRq7lRt
FgxqTOu1qBF1I
review
1,391,638,500,000
zze5zJIRq7lRt
[ "everyone" ]
[ "anonymous reviewer 95e3" ]
ICLR.cc/2014/workshop
2014
title: review of Multi-GPU Training of ConvNets review: Problem is clearly important, but paper is light on details, data sets, which gpu's, etc. All such things matter when judging the speed-up. For example, if you used an older gpu, it's easier to get a speed-up because the trade-off between the gain of multiple gpu's vs. the communication overhead is clearly different. CNN's for audio processing was done in 2012 by Abdel-Hamid. I would recommend to include this reference: Abdel-Hamid, Ossama, et al. 'Applying convolutional neural networks concepts to hybrid NN-HMM model for speech recognition.' Acoustics, Speech and Signal Processing (ICASSP), 2012 IEEE International Conference on. IEEE, 2012. Multi-GPU architectures for non-convolutional networks were discussed in: Xie Chen, Adam Eversole, Gang Li, Dong Yu, and Frank Seide, Pipelined Back-Propagation for Context-Dependent Deep Neural Networks, in Interspeech, ISCA, September 2012 I don't really see things that are new. Model and Data parallelization was tried in Chen'2012, and the extension for CNN's are obvious. Also, which layer to parallelize depends really on the network structure. For example, if you have a very large output layer with 128k nodes, you might be better off parallelizing the output layer.
zze5zJIRq7lRt
Multi-GPU Training of ConvNets
[ "Omry Yadan", "Keith Adams", "Yaniv Taigman", "Marc'Aurelio Ranzato" ]
In this work we evaluate different approaches to parallelize computation of convolutional neural networks across several GPUs.
[ "training", "convnets", "work", "different approaches", "computation", "convolutional neural networks", "several gpus" ]
https://openreview.net/pdf?id=zze5zJIRq7lRt
https://openreview.net/forum?id=zze5zJIRq7lRt
oV4tZMH-QOols
review
1,390,287,360,000
zze5zJIRq7lRt
[ "everyone" ]
[ "Thomas Paine" ]
ICLR.cc/2014/workshop
2014
review: Hello everyone, We would like to bring to your attention a similar paper my colleagues and I submitted to this ICLR workshop track: Title: GPU Asynchronous Stochastic Gradient Descent to Speed Up Neural Network Training Link: http://openreview.net/document/a4a87af0-ce63-450d-9d4b-41cfb0390667#a4a87af0-ce63-450d-9d4b-41cfb0390667 Both papers explore using many GPUs for training convnets using an ASGD framework. In this paper, they use 2 GPUs on one machine for model parallelization (similar to Alex Krizhevsky's NIPS 2012 paper), as well as 2 and 4 nodes for data parallelization (ASGD). In ours, a single GPU is used for model parallelization, but many nodes are used for data parallelization (ASGD). The ASGD methods are similar and our method is compatible with the model parallelization they use. Our work has additional experiments that explore how to tune ASGD to get the best performance with GPUs, and how this scales to as many as 32 GPUs. Best, Tom
zze5zJIRq7lRt
Multi-GPU Training of ConvNets
[ "Omry Yadan", "Keith Adams", "Yaniv Taigman", "Marc'Aurelio Ranzato" ]
In this work we evaluate different approaches to parallelize computation of convolutional neural networks across several GPUs.
[ "training", "convnets", "work", "different approaches", "computation", "convolutional neural networks", "several gpus" ]
https://openreview.net/pdf?id=zze5zJIRq7lRt
https://openreview.net/forum?id=zze5zJIRq7lRt
A50KACiEo4AE6
review
1,391,647,980,000
zze5zJIRq7lRt
[ "everyone" ]
[ "Liangliang Cao" ]
ICLR.cc/2014/workshop
2014
review: This is a light but interesting paper. I guess we are seeing a 'baby' version of Facebook's deep learning infrastructure. First, an easy-to-fix point: I didn't find an explicit statement of how many layers there are in the deep NN, or which dataset is used. I guess the answers are 7 layers and ImageNet'12? Currently the results are very reasonable: the 2-GPU version is 1.6 times faster than 1 GPU. But I guess the audience is more interested in the performance with more GPUs. Could 20 GPUs be 16 times faster than 1 GPU? What about 50 or 100 GPUs? Scalability may also bring interesting insights into model design. With model parallelism, I wonder whether we can build a larger CNN with more neurons. It may have more convolutional filters in each layer, and could process larger images, such as 1024 * 1024 * 3. I wonder whether ensemble learning as well as sparse models will be useful in such a big neural network. Hope to see more updates following the current submission.
zze5zJIRq7lRt
Multi-GPU Training of ConvNets
[ "Omry Yadan", "Keith Adams", "Yaniv Taigman", "Marc'Aurelio Ranzato" ]
In this work we evaluate different approaches to parallelize computation of convolutional neural networks across several GPUs.
[ "training", "convnets", "work", "different approaches", "computation", "convolutional neural networks", "several gpus" ]
https://openreview.net/pdf?id=zze5zJIRq7lRt
https://openreview.net/forum?id=zze5zJIRq7lRt
YlXrYpUzr3h7n
review
1,390,287,240,000
zze5zJIRq7lRt
[ "everyone" ]
[ "Thomas Paine" ]
ICLR.cc/2014/workshop
2014
review: Hello authors, I would like to bring to your attention a similar paper my colleagues and I submitted to this ICLR workshop track: Title: GPU Asynchronous Stochastic Gradient Descent to Speed Up Neural Network Training Link: http://openreview.net/document/a4a87af0-ce63-450d-9d4b-41cfb0390667#a4a87af0-ce63-450d-9d4b-41cfb0390667 Both papers explore using many GPUs for training convnets using an ASGD framework. In your paper, you use 2 GPUs on one machine for model parallelization (similar to Alex Krizhevsky's NIPS 2012 paper), as well as 2 and 4 nodes for data parallelization (ASGD). In ours, a single GPU is used for model parallelization, but many nodes are used for data parallelization (ASGD). The ASGD methods are similar and our method is compatible with the model parallelization you use. Our work has additional experiments that explore how to tune ASGD to get the best performance with GPUs, and how this scales to as many as 32 GPUs. Best, Tom
zze5zJIRq7lRt
Multi-GPU Training of ConvNets
[ "Omry Yadan", "Keith Adams", "Yaniv Taigman", "Marc'Aurelio Ranzato" ]
In this work we evaluate different approaches to parallelize computation of convolutional neural networks across several GPUs.
[ "training", "convnets", "work", "different approaches", "computation", "convolutional neural networks", "several gpus" ]
https://openreview.net/pdf?id=zze5zJIRq7lRt
https://openreview.net/forum?id=zze5zJIRq7lRt
guIXuVCQXMuQh
review
1,392,782,340,000
zze5zJIRq7lRt
[ "everyone" ]
[ "Marc'Aurelio Ranzato" ]
ICLR.cc/2014/workshop
2014
review: We thank the reviewers for their comments and suggestions. In this abstract, we limit the investigation to: + the use of multiple GPUs all residing in the same server + the architecture and the task as defined in Krizhevsky et al. NIPS 2012 + the use of regular synchronous stochastic gradient descent. We clarified this in the revised version of the paper. We have also added references to prior work as recommended. However, notice the following major differences: — Krizhevsky et al. and Coates et al. only considered model parallelism — Chen et al., Dean et al. and Zhang et al. used different variants of asynchronous SGD The objective of this study is to determine the speed-up of a popular model using the most straightforward parallelization techniques, without changing the optimization method. This should serve as a baseline comparison for any more advanced parallelization method. The study of asynchronous approaches using multiple servers will be an avenue of future work. We have updated the draft accordingly (the new version should appear shortly). Thank you very much.
iiu7beeAJGnAl
Deep Learning Embeddings for Discontinuous Linguistic Units
[ "Wenpeng Yin", "Hinrich Schütze" ]
Deep learning embeddings have been successfully used for many natural language processing (NLP) problems. Embeddings are mostly computed for word forms although a number of recent papers have extended this to other linguistic units like morphemes and phrases. In this paper, we argue that learning embeddings for discontinuous linguistic units should also be considered. In an experimental evaluation on coreference resolution, we show that such embeddings perform better than word form embeddings.
[ "embeddings", "discontinuous linguistic units", "deep learning embeddings", "learning embeddings", "nlp", "problems", "word forms", "number", "recent papers" ]
https://openreview.net/pdf?id=iiu7beeAJGnAl
https://openreview.net/forum?id=iiu7beeAJGnAl
GzFZGSMOZpzOk
review
1,391,716,200,000
iiu7beeAJGnAl
[ "everyone" ]
[ "anonymous reviewer e7ba" ]
ICLR.cc/2014/workshop
2014
title: review of Deep Learning Embeddings for Discontinuous Linguistic Units review: This paper explores simple ways to embed linguistic units composed of discontiguous words such as 'HELP TO' in the sentence 'Paul HELPS me TO write my paper'. The frequency of occurrence of such discontiguous units is very language dependent (high in German, lower in English). The authors propose a method that essentially amounts to rewriting the sentence in a manner that treats such units as a single word and using Mikolov's word2vec code. Experiments show that such embeddings perform better on a simple task, namely classifying whether entities are animate or inanimate. In my opinion this is very preliminary work at this stage. Neither the claim nor the results are very surprising.
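As the review notes, the method amounts to rewriting the training text so that a discontiguous unit becomes a single token before running word2vec. A hedged illustration of that preprocessing step is below; the pair list, the gap limit, and the 'took*off' token convention are assumptions of mine, not details from the paper.

```python
# Toy preprocessing: replace a known discontiguous pair with a single merged token,
# so an off-the-shelf skip-gram trainer sees it as one vocabulary item.
DISCONTIGUOUS_PAIRS = {("took", "off"), ("helps", "to")}   # would come from a dictionary or parser

def merge_discontiguous(tokens, max_gap=4):
    out, skip = [], set()
    for i, tok in enumerate(tokens):
        if i in skip:
            continue
        merged = False
        for j in range(i + 2, min(i + 2 + max_gap, len(tokens))):   # require at least one intervening word
            if (tok, tokens[j]) in DISCONTIGUOUS_PAIRS:
                out.append(f"{tok}*{tokens[j]}")                    # e.g. 'took*off'
                skip.add(j)
                merged = True
                break
        if not merged:
            out.append(tok)
    return out

print(merge_discontiguous("paul helps me to write my paper".split()))
print(merge_discontiguous("i took the month off".split()))
```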
iiu7beeAJGnAl
Deep Learning Embeddings for Discontinuous Linguistic Units
[ "Wenpeng Yin", "Hinrich Schütze" ]
Deep learning embeddings have been successfully used for many natural language processing (NLP) problems. Embeddings are mostly computed for word forms although a number of recent papers have extended this to other linguistic units like morphemes and phrases. In this paper, we argue that learning embeddings for discontinuous linguistic units should also be considered. In an experimental evaluation on coreference resolution, we show that such embeddings perform better than word form embeddings.
[ "embeddings", "discontinuous linguistic units", "deep learning embeddings", "learning embeddings", "nlp", "problems", "word forms", "number", "recent papers" ]
https://openreview.net/pdf?id=iiu7beeAJGnAl
https://openreview.net/forum?id=iiu7beeAJGnAl
AKHVlzzUO5Ao2
review
1,392,649,080,000
iiu7beeAJGnAl
[ "everyone" ]
[ "Wenpeng Yin" ]
ICLR.cc/2014/workshop
2014
review: We were happy to hear that the reviewer thinks that our idea of inducing representations for disjoint linguistic units is novel and holds potential! 1)'strange to use word2vec' Our motivation for word2vec was to use the best currently available method for distributed representations. Embeddings perform better than other distributed representations on several tasks and embeddings induced by word2vec have been particularly successful. We would appreciate further thoughts on why we should use word-document matrix factorization as opposed to a stronger method like word2vec for learning representations. 2)'skip-gram learning algorithm doesn’t make sense' We would appreciate if the reviewer could expand on this point. Is the reason that representations of verbs should, in the reviewer's view, be induced using a 'sequence-sensitive' learning algorithm (since bag-of-words is often viewed as more appropriate for nouns)? This is a good point. However, the skip-gram model seems to be successful to some extent in learning sequence-dependent information, possibly because the sampling is position dependent, giving a preference to close words. For example, singular and plural forms are systematically related, which one would not expect from a true bag-of-words model. 3) 'why this task is chosen is unclear' As we discuss in the paper, the task of predicting (human) animacy from context is useful for coreference resolution (because pronouns like 'him' and 'she' can only refer to human animate entities). We agree though that we should also present results for a standard task. We are currently running experiments on paraphrase identification: http://www.aclweb.org/aclwiki/index.php?title=Paraphrase_Identification_%28State_of_the_art%29 and will present results at the workshop if the paper gets accepted. 4) 'paper is short' We were trying to comply with the length restrictions of the ICLR 2014 workshop track. If the reviewer could be more specific as to which parts of the description of the evaluation task and of the conclusion need to be expanded, we would be very glad to fix these problems and submit a revised version to arxiv. 5) 'No visualization or control experiments to understand the learned representations' Our control experiment was supposed to be the single-word baseline. A visualization would certainly improve the paper. Again, we were trying to comply with the length restrictions.
iiu7beeAJGnAl
Deep Learning Embeddings for Discontinuous Linguistic Units
[ "Wenpeng Yin", "Hinrich Schütze" ]
Deep learning embeddings have been successfully used for many natural language processing (NLP) problems. Embeddings are mostly computed for word forms although a number of recent papers have extended this to other linguistic units like morphemes and phrases. In this paper, we argue that learning embeddings for discontinuous linguistic units should also be considered. In an experimental evaluation on coreference resolution, we show that such embeddings perform better than word form embeddings.
[ "embeddings", "discontinuous linguistic units", "deep learning embeddings", "learning embeddings", "nlp", "problems", "word forms", "number", "recent papers" ]
https://openreview.net/pdf?id=iiu7beeAJGnAl
https://openreview.net/forum?id=iiu7beeAJGnAl
fC0gfomYgEfhk
review
1,391,850,600,000
iiu7beeAJGnAl
[ "everyone" ]
[ "anonymous reviewer 6104" ]
ICLR.cc/2014/workshop
2014
title: review of Deep Learning Embeddings for Discontinuous Linguistic Units review: Summary This paper proposes learning representations for discontinuous pairs of words in a sentence. Representations for such linguistic units such as “helped*to” are potentially more useful than bigrams or other units for particular NLP tasks. Rather than introducing a new algorithm to induce such representations, they alter a text corpus and use a skip-gram training algorithm. Representations are compared against previous word representation approaches on a task of classifying markables. Review Generally the idea of inducing representations for disjoint linguistic units is novel, and seems to hold good potential. It seems strange to use word2vec which is a skip-gram algorithm to induce such representations. The process of creating fake ‘sentences’ with disjoint units to induce skip grams seems hacky. I would prefer to see a more straightforward approach, such as one based on token-document matrix factorization, to induce representations for the disjoint tokens. The evaluation task is obscure, and why this task is chosen is unclear. The authors should include experimental evaluation, visualization, or controlled experiments on at least one more standard task. Generally, there is a kernel of an interesting idea in this paper but the work needs a more thorough investigation into the representation learning algorithm used and evaluation. Key points + Interesting linguistic idea + Use of pre-existing word vector learning package makes experiments seemingly easy to reproduce - Using a skip-gram learning algorithm doesn’t make sense. A matrix factorization or other similar approach seems more natural - Non-standard and somewhat difficult to understand evaluation - No visualization or control experiments to understand the learned representations - Paper is short to the point of lacking sufficient descriptions of the evaluation task and conclusions
iiu7beeAJGnAl
Deep Learning Embeddings for Discontinuous Linguistic Units
[ "Wenpeng Yin", "Hinrich Schütze" ]
Deep learning embeddings have been successfully used for many natural language processing (NLP) problems. Embeddings are mostly computed for word forms although a number of recent papers have extended this to other linguistic units like morphemes and phrases. In this paper, we argue that learning embeddings for discontinuous linguistic units should also be considered. In an experimental evaluation on coreference resolution, we show that such embeddings perform better than word form embeddings.
[ "embeddings", "discontinuous linguistic units", "deep learning embeddings", "learning embeddings", "nlp", "problems", "word forms", "number", "recent papers" ]
https://openreview.net/pdf?id=iiu7beeAJGnAl
https://openreview.net/forum?id=iiu7beeAJGnAl
azLZaQQbZvaIp
comment
1,392,648,720,000
GzFZGSMOZpzOk
[ "everyone" ]
[ "Wenpeng Yin" ]
ICLR.cc/2014/workshop
2014
reply: 1)'frequency of ... discontiguous units' It may be the case that discontiguous units are more frequent in other languages. However, phrasal verbs are one of the most important verb groups in English, verbs like 'keep up' and 'take off'. Without discontiguous units, the 'vacation' meaning of 'I took the month off' is difficult to infer from the vectors of 'took' and 'off'. Our approach learns a vector for 'took ... off', thus facilitating correct inference. This shows that having appropriate representations for phrasal verbs is important. 2) 'Neither the claim nor the results are very surprising.' We certainly agree that our results are not earth-shattering. However, for all NLP work that represents linguistic input as embeddings or other distributed representations, the question of how the linguistic input should be parsed into units (which then are represented as vectors) must be addressed. We cite recent work on morphology and on phrases in this vein. Within this line of work, research on discontinuous units (as a third type of possible unit, in addition to stems/affixes and continuous phrases) is well motivated. Perhaps positive results for this new type of unit are to be expected. Still, we feel it is a contribution to confirm the hypothesis that they are useful.
IgLmlBsymQnyP
Distributional Models and Deep Learning Embeddings: Combining the Best of Both Worlds
[ "Irina Sergienya", "Hinrich Schütze" ]
There are two main approaches to the distributed representation of words: low-dimensional deep learning embeddings and high-dimensional distributional models, in which each dimension corresponds to a context word. In this paper, we combine these two approaches by learning embeddings based on distributional-model vectors - as opposed to one-hot vectors as is standardly done in deep learning. We show that the combined approach has better performance on a word relatedness judgment task.
[ "deep learning embeddings", "distributional models", "best", "vectors", "worlds distributional models", "worlds", "main approaches", "distributed representation", "words", "dimension" ]
https://openreview.net/pdf?id=IgLmlBsymQnyP
https://openreview.net/forum?id=IgLmlBsymQnyP
ggC0Jddw0PgjR
comment
1,392,658,680,000
O3dl_qqbXKYbU
[ "everyone" ]
[ "Irina Sergienya" ]
ICLR.cc/2014/workshop
2014
reply: Dear reviewer, Thank you for your review and comments! Your idea to initialize the embeddings with zero is interesting, but in the word2vec setup we are using, embeddings initialized to zero remain zero vectors during training. We confirmed this in an experiment in which we initialized the vectors as zero vectors. We would like to ask whether this is the setting you were suggesting, or did you have something else in mind? Regards, Irina
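The observation in this reply follows from the form of the skip-gram-with-negative-sampling gradients: the update to an input vector is proportional to the corresponding output vector and vice versa, so if both start at zero they stay at zero. The toy numpy check below makes this explicit; it is a simplification of word2vec's actual training loop, not its source code.

```python
import numpy as np

def sgns_step(v_in, u_out, label, lr=0.025):
    """One skip-gram-with-negative-sampling update for a (center, context) pair,
    label = 1 for an observed pair, 0 for a negative sample.
    dL/dv_in = (sigmoid(v.u) - label) * u_out ;  dL/du_out = (sigmoid(v.u) - label) * v_in."""
    err = 1.0 / (1.0 + np.exp(-v_in @ u_out)) - label
    return v_in - lr * err * u_out, u_out - lr * err * v_in

v = np.zeros(5)   # input ('syn0') vector initialized to zero
u = np.zeros(5)   # output ('syn1neg') vector, which word2vec initializes to zero anyway
for label in (1, 0, 1, 1, 0):
    v, u = sgns_step(v, u, label)
print(v, u)       # both remain exactly zero: each gradient is proportional to the other (zero) vector
```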
IgLmlBsymQnyP
Distributional Models and Deep Learning Embeddings: Combining the Best of Both Worlds
[ "Irina Sergienya", "Hinrich Schütze" ]
There are two main approaches to the distributed representation of words: low-dimensional deep learning embeddings and high-dimensional distributional models, in which each dimension corresponds to a context word. In this paper, we combine these two approaches by learning embeddings based on distributional-model vectors - as opposed to one-hot vectors as is standardly done in deep learning. We show that the combined approach has better performance on a word relatedness judgment task.
[ "deep learning embeddings", "distributional models", "best", "vectors", "worlds distributional models", "worlds", "main approaches", "distributed representation", "words", "dimension" ]
https://openreview.net/pdf?id=IgLmlBsymQnyP
https://openreview.net/forum?id=IgLmlBsymQnyP
wvfSwIGySBvdV
review
1,392,824,460,000
IgLmlBsymQnyP
[ "everyone" ]
[ "Irina Sergienya" ]
ICLR.cc/2014/workshop
2014
review: We uploaded a new version of the paper with a Related Work section, in which the relevant work of Hai Son Le et al. (EMNLP 2010) is discussed. Thanks to the reviewer who pointed out this paper. Regards, Irina
IgLmlBsymQnyP
Distributional Models and Deep Learning Embeddings: Combining the Best of Both Worlds
[ "Irina Sergienya", "Hinrich Schütze" ]
There are two main approaches to the distributed representation of words: low-dimensional deep learning embeddings and high-dimensional distributional models, in which each dimension corresponds to a context word. In this paper, we combine these two approaches by learning embeddings based on distributional-model vectors - as opposed to one-hot vectors as is standardly done in deep learning. We show that the combined approach has better performance on a word relatedness judgment task.
[ "deep learning embeddings", "distributional models", "best", "vectors", "worlds distributional models", "worlds", "main approaches", "distributed representation", "words", "dimension" ]
https://openreview.net/pdf?id=IgLmlBsymQnyP
https://openreview.net/forum?id=IgLmlBsymQnyP
O3dl_qqbXKYbU
review
1,390,789,980,000
IgLmlBsymQnyP
[ "everyone" ]
[ "anonymous reviewer 4d8c" ]
ICLR.cc/2014/workshop
2014
title: review of Distributional Models and Deep Learning Embeddings: Combining the Best of Both Worlds review: This paper investigates the use of so-called distributional models of words to improve the quality of learned word embeddings. These models are essentially high-dimensional vector representations of words, which indicate which other words in the vocabulary a word has co-occurred with (within some context). Since word embeddings can be understood as a linear lower-dimensional projection of the basic one-hot representations of words, these distributional model representations can be exploited by trainable word embedding algorithms by simply concatenating them to the basic 'one-hot' representation or by replacing the one-hot representation with the distributional one for infrequent words. This paper shows that this approach can improve the quality of the word embeddings, as measured by an average correlation with human judgement of word similarity. Overall, I think this is a good workshop paper. Learning good word embeddings is an important topic of research, and this paper describes a nice, fairly simple trick to improve word embeddings. One motivation for using it seems to be that infrequent words will not be able to move far enough away from their random initialization, so I wonder whether initializing all these word embeddings to 0 instead might have been enough to solve this problem. I think this would be a good baseline to compare with. I would also have liked to see whether any gains are obtained in a real NLP task, i.e. confirm that better correlation with human judgement is actually giving us something in practice. But these are overall minor problems, for a workshop paper.
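A hedged sketch of the input construction summarized in this review is given below: a high-dimensional co-occurrence (distributional) vector is built for each word, and rare words use it in place of the one-hot input that a standard embedding layer would see. The frequency threshold, normalization, and toy corpus are assumptions, not the authors' settings.

```python
import numpy as np
from collections import Counter, defaultdict

corpus = [s.split() for s in [
    "the cat sat on the mat", "the dog sat on the rug",
    "a cat and a dog", "the mat was on the floor"]]
window = 2
vocab = sorted({w for sent in corpus for w in sent})
idx = {w: i for i, w in enumerate(vocab)}
freq = Counter(w for sent in corpus for w in sent)

# High-dimensional distributional vectors: co-occurrence counts within a +/- 2 word window.
cooc = defaultdict(Counter)
for sent in corpus:
    for i, w in enumerate(sent):
        for j in range(max(0, i - window), min(len(sent), i + window + 1)):
            if j != i:
                cooc[w][sent[j]] += 1

def input_vector(word, min_count=2):
    """One-hot input for frequent words; L1-normalized distributional vector for rare words."""
    vec = np.zeros(len(vocab))
    if freq[word] >= min_count:
        vec[idx[word]] = 1.0
    else:
        for ctx, c in cooc[word].items():
            vec[idx[ctx]] = c
        vec /= vec.sum()
    return vec

emb_dim = 8
W = np.random.default_rng(0).standard_normal((len(vocab), emb_dim)) * 0.1  # embedding table
print(np.round(input_vector("floor") @ W, 3))   # a rare word starts from a mixture of context rows of W
```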
IgLmlBsymQnyP
Distributional Models and Deep Learning Embeddings: Combining the Best of Both Worlds
[ "Irina Sergienya", "Hinrich Schütze" ]
There are two main approaches to the distributed representation of words: low-dimensional deep learning embeddings and high-dimensional distributional models, in which each dimension corresponds to a context word. In this paper, we combine these two approaches by learning embeddings based on distributional-model vectors - as opposed to one-hot vectors as is standardly done in deep learning. We show that the combined approach has better performance on a word relatedness judgment task.
[ "deep learning embeddings", "distributional models", "best", "vectors", "worlds distributional models", "worlds", "main approaches", "distributed representation", "words", "dimension" ]
https://openreview.net/pdf?id=IgLmlBsymQnyP
https://openreview.net/forum?id=IgLmlBsymQnyP
pZNfKh1T62ZEM
comment
1,393,554,060,000
ggC0Jddw0PgjR
[ "everyone" ]
[ "anonymous reviewer 4d8c" ]
ICLR.cc/2014/workshop
2014
reply: This is indeed what I had in mind... And indeed, in the word2vec setup, they'll stay at 0 (because all gradients will be zero, in the word2vec parametrization, which isn't necessarily the case in, say, a neural network language model)... I had not realized this.
IgLmlBsymQnyP
Distributional Models and Deep Learning Embeddings: Combining the Best of Both Worlds
[ "Irina Sergienya", "Hinrich Schütze" ]
There are two main approaches to the distributed representation of words: low-dimensional deep learning embeddings and high-dimensional distributional models, in which each dimension corresponds to a context word. In this paper, we combine these two approaches by learning embeddings based on distributional-model vectors - as opposed to one-hot vectors as is standardly done in deep learning. We show that the combined approach has better performance on a word relatedness judgment task.
[ "deep learning embeddings", "distributional models", "best", "vectors", "worlds distributional models", "worlds", "main approaches", "distributed representation", "words", "dimension" ]
https://openreview.net/pdf?id=IgLmlBsymQnyP
https://openreview.net/forum?id=IgLmlBsymQnyP
FdstFZk6tuIGQ
comment
1,392,674,520,000
K9CR9nnAg6-4W
[ "everyone" ]
[ "Irina Sergienya" ]
ICLR.cc/2014/workshop
2014
reply: Dear reviewer, Please find our reply to your comments below.
IgLmlBsymQnyP
Distributional Models and Deep Learning Embeddings: Combining the Best of Both Worlds
[ "Irina Sergienya", "Hinrich Schütze" ]
There are two main approaches to the distributed representation of words: low-dimensional deep learning embeddings and high-dimensional distributional models, in which each dimension corresponds to a context word. In this paper, we combine these two approaches by learning embeddings based on distributional-model vectors - as opposed to one-hot vectors as is standardly done in deep learning. We show that the combined approach has better performance on a word relatedness judgment task.
[ "deep learning embeddings", "distributional models", "best", "vectors", "worlds distributional models", "worlds", "main approaches", "distributed representation", "words", "dimension" ]
https://openreview.net/pdf?id=IgLmlBsymQnyP
https://openreview.net/forum?id=IgLmlBsymQnyP
f4NY4EleY1FeR
review
1,392,674,280,000
IgLmlBsymQnyP
[ "everyone" ]
[ "Irina Sergienya" ]
ICLR.cc/2014/workshop
2014
review: Dear reviewer, Thank you for your review and comments! Indeed, there are systems with performance on MEN and WordSim higher than our numbers. Right now we are running experiments on a bigger training corpus. The preliminary numbers are much closer to the state-of-the-art performance. We will present results at the conference if the paper gets accepted. Thank you for pointing out the paper of Hai Son Le et al, which is very relevant. They propose three initialization schemes. Two of them, re-initialization and iterative re-initialization, use vectors from prediction space to initialize the context space during training. This approach is both more complex and less efficient than ours. The third initialization scheme, one vector initialization, initializes all word embeddings with the same random vector: this helps to keep rare words close to each other as an outcome of rare updates. However, this approach is also less efficient than ours since the initial embedding is much denser than in our approach. We are planning to run experiments with this approach and should be able to present results at the conference if the paper is accepted. Regards, Irina
IgLmlBsymQnyP
Distributional Models and Deep Learning Embeddings: Combining the Best of Both Worlds
[ "Irina Sergienya", "Hinrich Schütze" ]
There are two main approaches to the distributed representation of words: low-dimensional deep learning embeddings and high-dimensional distributional models, in which each dimension corresponds to a context word. In this paper, we combine these two approaches by learning embeddings based on distributional-model vectors - as opposed to one-hot vectors as is standardly done in deep learning. We show that the combined approach has better performance on a word relatedness judgment task.
[ "deep learning embeddings", "distributional models", "best", "vectors", "worlds distributional models", "worlds", "main approaches", "distributed representation", "words", "dimension" ]
https://openreview.net/pdf?id=IgLmlBsymQnyP
https://openreview.net/forum?id=IgLmlBsymQnyP
K9CR9nnAg6-4W
review
1,391,787,900,000
IgLmlBsymQnyP
[ "everyone" ]
[ "anonymous reviewer 4683" ]
ICLR.cc/2014/workshop
2014
title: review of Distributional Models and Deep Learning Embeddings: Combining the Best of Both Worlds review: This paper proposes to derive distributional representations for words that can be used to improve word embeddings. Distributional vectors can be presented to the neural network that learns embeddings, instead of one-hot vectors. One motivation is that distributional representations could make the learning task easier for rare words. The authors apply this approach only to rare words, since word embeddings for frequent words are frequently updated and can therefore be considered satisfactory. The idea is nice. However, my main concern is about the experimental part. I don't understand the results. For the 'WordSim' task, the paper of E. Huang (ACL2012) exhibits Spearman correlation above 50. So either the results are far below the baseline systems used in 2012 (and thus, what can we conclude from this paper, since it is straightforward to improve a very poor system), or this needs clarification. In any case, baselines exist and should be mentioned. A minor comment about the last paragraph of the introduction: the paper (Hai Son Le et al. at EMNLP2010) addressed the issue of the initialization of word embeddings, and this seems to perform quite well, especially for rare words.
8yYIVxPr6xHht
Deep learning for class-generic object detection
[ "Brody Huval", "Adam Coates", "Andrew Ng" ]
We investigate the use of deep neural networks for the task of class-generic object detection. We show that neural networks originally designed for image recognition can be trained to detect objects within images, regardless of their class, including objects for which no bounding box labels have been provided. In addition, we show that bounding box labels yield a 1% performance increase on the ImageNet recognition challenge.
[ "object detection", "objects", "deep learning", "use", "deep neural networks", "task", "neural networks", "image recognition", "images", "class" ]
https://openreview.net/pdf?id=8yYIVxPr6xHht
https://openreview.net/forum?id=8yYIVxPr6xHht
Wb6A9kK0nsWYL
review
1,392,165,720,000
8yYIVxPr6xHht
[ "everyone" ]
[ "anonymous reviewer 22fb" ]
ICLR.cc/2014/workshop
2014
title: review of Deep learning for class-generic object detection review: This is an interesting paper that shows that using a pretrained object classification network's weights to initialize an object localization network leads to improved performance of the latter. Of course the comparisons are made for a fixed (but unspecified) architecture. It seems that the architecture chosen is that of an object classifier, and it is not clear at all that this is a good architecture for an object localizer. (In particular pooling is sensible for the former but makes less sense for the latter). Thus it's not really clear that the gains are that useful - perhaps it would just be better to design a network for the task at hand. That is not addressed in this paper at all. The writing itself is clear, but there are several significant omissions in the paper. Table 1 caption is unclear, and seems broken. Section 4 'multiple GPUs' could be elucidated a little more? You say the output for your bounding box training is discretized, but don't say how. It's not even clear if you output one-hot in the cross-product space or discretize each dimension separately. While your main result is a relative improvement, this is still a fundamental omission that must be rectified. It's hard to interpret your results without knowing the resolution or the scoring system that you use. You don't explain the parameters of the Gaussian used for the labels, nor what you mean by 'multiple bounding boxes' - You've not described that possibility. Are 7% of objects labelled with bounding boxes, or only 7% of images? Where you have multiple objects in an image, are you guaranteed to have the bounding boxes for all of them if you have any of them? (I presume not, since 'what is an object' is ambiguous - but you don't discuss this ambiguity). Figure 1 legend says 'without bounding boxes' but I presume you mean 'excluding the held-out set' - you train with the other 900 classes' bounding boxes? You don't discuss the details of training at all (not even the number of nodes in the hidden layers or convolution windows, never mind learning rates etc), nor make it clear just how much is the same as the other references (Coates / Krizhevsky) you cite. ('Similar to' Krizhevsky is all you say) In particular you need to explain how you transition from pretraining to 'training' - initialization of the softmax weights is the same as starting from scratch? What about: Learning rate / momentum & their schedules? Bibliography is broken everywhere e.g. with commas separating first and last names as well as authors' names from each other. First reference has no source. Junk like this: 'In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on. .... 2009' The first page indicates that this paper is in Proc ICML and in JMLR. I presume this is just a LaTeX oversight?
8yYIVxPr6xHht
Deep learning for class-generic object detection
[ "Brody Huval", "Adam Coates", "Andrew Ng" ]
We investigate the use of deep neural networks for the task of class-generic object detection. We show that neural networks originally designed for image recognition can be trained to detect objects within images, regardless of their class, including objects for which no bounding box labels have been provided. In addition, we show that bounding box labels yield a 1% performance increase on the ImageNet recognition challenge.
[ "object detection", "objects", "deep learning", "use", "deep neural networks", "task", "neural networks", "image recognition", "images", "class" ]
https://openreview.net/pdf?id=8yYIVxPr6xHht
https://openreview.net/forum?id=8yYIVxPr6xHht
ZZw4ZaGzdyt0c
review
1,390,860,900,000
8yYIVxPr6xHht
[ "everyone" ]
[ "anonymous reviewer 7d29" ]
ICLR.cc/2014/workshop
2014
title: review of Deep learning for class-generic object detection review: This abstract investigates the idea of learning an object detection model that does not depend on the class, hence being able to generalize to any number of classes, including classes unknown at training time. The idea is compelling but the paper is short on details and results. We don't know how many bounding boxes are used in the softmax, we don't have the details about the Gaussian used to smooth the targets of the softmax (how picky is it?); The paper does not compare to similar approaches (like Szegedy et al 2013). I feel like this is an interesting idea relevant for the workshop track and hope it will be improved later with more details.
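Both reviews above ask how the discretized bounding-box outputs and their Gaussian-smoothed softmax targets are built. A minimal sketch of one possible reading is given below; the bin count, the sigma, and the per-dimension discretization are pure assumptions, since the paper does not specify them.

```python
# One possible target construction for a discretized box coordinate:
# each coordinate is binned separately and the one-hot softmax target is
# smoothed with a Gaussian around the true bin. Bin count and sigma are
# assumptions; the paper under review does not give these details.
import numpy as np

def soft_target(value, n_bins=50, sigma_bins=1.5, lo=0.0, hi=1.0):
    """value: a normalised box coordinate in [lo, hi].
    Returns an (n_bins,) probability vector peaked at the true bin."""
    centers = np.linspace(lo, hi, n_bins)
    sigma = sigma_bins * (hi - lo) / n_bins          # convert bin units to coord units
    t = np.exp(-0.5 * ((centers - value) / sigma) ** 2)
    return t / t.sum()

target = soft_target(0.37)
print(target.argmax(), round(target.max(), 3))       # peak sits at the bin containing 0.37
```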
VNYDEas7tlE75
Occupancy Detection in Vehicles Using Fisher Vector Image Representation
[ "Yusuf Artan", "Peter Paul" ]
Due to the high volume of traffic on modern roadways, transportation agencies have proposed High Occupancy Vehicle (HOV) lanes and High Occupancy Tolling (HOT) lanes to promote car pooling. However, enforcement of the rules of these lanes is currently performed by roadside enforcement officers using visual observation. Manual roadside enforcement is known to be inefficient, costly, potentially dangerous, and ultimately ineffective. Violation rates up to 50%-80% have been reported, while manual enforcement rates of less than 10% are typical. Therefore, there is a need for automated vehicle occupancy detection to support HOV/HOT lane enforcement. A key component of determining vehicle occupancy is to determine whether or not the vehicle's front passenger seat is occupied. In this paper, we examine two methods of determining vehicle front seat occupancy using a near infrared (NIR) camera system pointed at the vehicle's front windshield. The first method examines a state-of-the-art deformable part model (DPM) based face detection system that is robust to facial pose. The second method examines state-of- the-art local aggregation based image classification using bag-of-visual-words (BOW) and Fisher vectors (FV). A dataset of 3000 images was collected on a public roadway and is used to perform the comparison. From these experiments it is clear that the image classification approach is superior for this problem.
[ "vehicles", "lanes", "vehicle", "image classification", "occupancy detection", "high volume", "traffic", "modern roadways" ]
https://openreview.net/pdf?id=VNYDEas7tlE75
https://openreview.net/forum?id=VNYDEas7tlE75
Q59R5LrpPySae
review
1,391,479,860,000
VNYDEas7tlE75
[ "everyone" ]
[ "anonymous reviewer fca6" ]
ICLR.cc/2014/workshop
2014
title: review of Occupancy Detection in Vehicles Using Fisher Vector Image Representation review: This paper addresses the problem of detecting people in the front passenger seat of a car as a step along the way to help enforce carpooling rules. The paper presents experiments comparing different approaches. In particular, the paper explores solving the problem through using: a) Zhu and Ramanan's deformable part model for face detection, and using the detections for the final result, b) the use of Fisher Vectors in the sense of Jaakkola and Haussler (1998), c) a variant of the widely used bag of visual words (BoW) representation, and d) a technique referred to as Vectors of Locally Aggregated Descriptors (VLAD). Techniques b), c) and d) use a traditional SVM approach for the final classification. The Fisher vector technique appears to have better performance compared to the other methods explored in this paper. The paper looks at a concrete practical application and explores ways for solving the problem that are fairly in line with current practices in computer vision. The face detection based comparison is certainly important and interesting. However, it is not clear from the current manuscript exactly how the model of Zhu and Ramanan was trained for the experiments here. Was the face detector trained on the training set defined here? Or was the face detector used with the pre-learned parameters as distributed on their website? The performance could be dramatically different depending on how the method was trained. The sizes of the train/test splits for the other experiments are also not given, and the issue of SVM hyper-parameter tuning is not discussed. The Fisher feature vector technique presented here could be viewed as a form of representation learning, but this is really more of a vision application and method comparison paper as opposed to a paper deeply exploring aspects of representation learning. The paper also has some language problems.
VNYDEas7tlE75
Occupancy Detection in Vehicles Using Fisher Vector Image Representation
[ "Yusuf Artan", "Peter Paul" ]
Due to the high volume of traffic on modern roadways, transportation agencies have proposed High Occupancy Vehicle (HOV) lanes and High Occupancy Tolling (HOT) lanes to promote car pooling. However, enforcement of the rules of these lanes is currently performed by roadside enforcement officers using visual observation. Manual roadside enforcement is known to be inefficient, costly, potentially dangerous, and ultimately ineffective. Violation rates up to 50%-80% have been reported, while manual enforcement rates of less than 10% are typical. Therefore, there is a need for automated vehicle occupancy detection to support HOV/HOT lane enforcement. A key component of determining vehicle occupancy is to determine whether or not the vehicle's front passenger seat is occupied. In this paper, we examine two methods of determining vehicle front seat occupancy using a near infrared (NIR) camera system pointed at the vehicle's front windshield. The first method examines a state-of-the-art deformable part model (DPM) based face detection system that is robust to facial pose. The second method examines state-of- the-art local aggregation based image classification using bag-of-visual-words (BOW) and Fisher vectors (FV). A dataset of 3000 images was collected on a public roadway and is used to perform the comparison. From these experiments it is clear that the image classification approach is superior for this problem.
[ "vehicles", "lanes", "vehicle", "image classification", "occupancy detection", "high volume", "traffic", "modern roadways" ]
https://openreview.net/pdf?id=VNYDEas7tlE75
https://openreview.net/forum?id=VNYDEas7tlE75
AAN9jhVxFznx6
review
1,391,833,620,000
VNYDEas7tlE75
[ "everyone" ]
[ "anonymous reviewer bde7" ]
ICLR.cc/2014/workshop
2014
title: review of Occupancy Detection in Vehicles Using Fisher Vector Image Representation review: This paper is about classifying the presence or absence of a person in the front seat of a car. The main point of the paper is to compare the approach of using face detection with that of directly classifying the front-seat image. They improve accuracy from 92% to 96% with full image classification. It also compares different aggregation methods on top of hand-designed features like SIFT: bag of words, Fisher vector or VLAD, and shows better results using Fisher vectors. Other work has also used more than just face detection, though not the entire window itself. The novelty of this work is very limited and is mostly in the trivial idea of using the entire passenger image for classification. Pros: - improving accuracy to 96% over the face-based approach (92%). Cons: - no novelty. - no representation learning.
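The aggregation pipeline both reviews describe (local descriptors pooled into a Fisher vector, then classified with a linear SVM) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the descriptor source, GMM size and normalisation choices are assumptions.

```python
# Minimal Fisher-vector encoding of local descriptors with a diagonal GMM
# (gradients w.r.t. means and variances), followed by a linear SVM.
# Random arrays stand in for dense SIFT descriptors.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import LinearSVC

def fit_gmm(all_descriptors, n_components=16, seed=0):
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type='diag', random_state=seed)
    gmm.fit(all_descriptors)
    return gmm

def fisher_vector(descriptors, gmm):
    """Encode one image's local descriptors (N x D) as a Fisher vector."""
    n, _ = descriptors.shape
    q = gmm.predict_proba(descriptors)                       # (N, K) soft assignments
    mu, var, w = gmm.means_, gmm.covariances_, gmm.weights_
    diff = (descriptors[:, None, :] - mu[None, :, :]) / np.sqrt(var)[None, :, :]
    d_mu = (q[:, :, None] * diff).sum(axis=0) / (n * np.sqrt(w)[:, None])
    d_var = (q[:, :, None] * (diff ** 2 - 1)).sum(axis=0) / (n * np.sqrt(2 * w)[:, None])
    fv = np.hstack([d_mu.ravel(), d_var.ravel()])
    fv = np.sign(fv) * np.sqrt(np.abs(fv))                   # power-normalisation
    return fv / (np.linalg.norm(fv) + 1e-12)                 # L2-normalisation

rng = np.random.default_rng(0)
train_desc = [rng.normal(size=(200, 64)) for _ in range(20)]  # 20 toy "images"
labels = rng.integers(0, 2, size=20)                          # occupied / empty (hypothetical)
gmm = fit_gmm(np.vstack(train_desc))
X = np.stack([fisher_vector(d, gmm) for d in train_desc])
clf = LinearSVC(C=1.0).fit(X, labels)
```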
H1Hp-q2s
Making a Case for Learning Motion Representations with Phase
[ "S. L. Pintea", "J. C. van Gemert" ]
This work advocates Eulerian motion representation learning over the current standard Lagrangian optical flow model. Eulerian motion is well captured by using phase, as obtained by decomposing the image through a complex-steerable pyramid. We discuss the gain of Eulerian motion in a set of practical use cases: (i) action recognition, (ii) motion prediction in static images, (iii) motion transfer in static images and, (iv) motion transfer in video. For each task we motivate the phase-based direction and provide a possible approach.
[ "phase", "case", "motion representations", "eulerian motion", "static images", "motion transfer", "work", "image" ]
https://openreview.net/pdf?id=H1Hp-q2s
https://openreview.net/forum?id=H1Hp-q2s
ByKiOO-n
review
1,473,509,537,679
H1Hp-q2s
[ "everyone" ]
[ "~Pascal_Mettes1" ]
ECCV2016.org/BNMW
2016
title: Review rating: 9: Top 15% of accepted papers, strong accept review: --- Paper summary: --- This work proposes to shift the focus of temporal feature extraction in videos from a Lagrangian to a Eulerian perspective. Currently, optical flow methods rule temporal feature extraction (e.g. two-stream networks, dense trajectories). However, rather than tracking pixels over time, it is also possible to examine the temporal behaviour of fixed locations over time. This work highlights four cases where the Eulerian perspective might be beneficial. --- Paper strengths: --- + The case for phase-based representations over optical flow based approaches is very interesting and fits the workshop well. It goes against the current norm, while there are many benefits. I expect a lot of fruitful discussion from this work at the workshop. + The authors already show how phase information can be extracted within (two-stream) deep networks, although this is currently a proof of concept. The proposed network can serve as a basis for other researchers working on this topic. Similarly, the motion transfer proof of concept is encouraging. Some possible points of discussion: ++ The paper argues that optical flow is handcrafted, but the proposed initial layer of the phase-based network is also not learned. Is it possible to integrate the properties of the complex steerable filters into the CNN itself, rather than bolting it on top of an existing network? ++ Alternatively, can a phase-based approach be implemented as a variant of the paper: "Structured Receptive Fields in CNNs", Jacobsen et al. CVPR 2016? Or stack multiple layers of learned bases of steerable filters? ++ What do the authors think about making a Eulerian variant of dense trajectories? ++ Can phase-based steerable filters be integrated with LSTMs for variable-length videos? ++ I agree that phase is vital for motion, but is amplitude also not very useful information when discriminating actions? E.g. walking versus running. --- Paper weaknesses: --- - Not all cases made in the paper are equally important. The use in action recognition is evident, but the fourth case (motion transfer in videos) sounds more like a gimmick than a core case. - In the current setup, the move to a Eulerian perspective is a shot in the dark. Although the reasoning is sound, current networks such as two-stream networks are already highly optimized. As such, I fear that initial limited results might demotivate people to change perspectives, while I feel that it is a topic that deserves further research. confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct
H1Hp-q2s
Making a Case for Learning Motion Representations with Phase
[ "S. L. Pintea", "J. C. van Gemert" ]
This work advocates Eulerian motion representation learning over the current standard Lagrangian optical flow model. Eulerian motion is well captured by using phase, as obtained by decomposing the image through a complex-steerable pyramid. We discuss the gain of Eulerian motion in a set of practical use cases: (i) action recognition, (ii) motion prediction in static images, (iii) motion transfer in static images and, (iv) motion transfer in video. For each task we motivate the phase-based direction and provide a possible approach.
[ "phase", "case", "motion representations", "eulerian motion", "static images", "motion transfer", "work", "image" ]
https://openreview.net/pdf?id=H1Hp-q2s
https://openreview.net/forum?id=H1Hp-q2s
Hk3ouWkn
review
1,473,349,796,715
H1Hp-q2s
[ "everyone" ]
[ "~Amir_Ghodrati1" ]
ECCV2016.org/BNMW
2016
title: Review of the paper rating: 9: Top 15% of accepted papers, strong accept review: The paper proposes a new motion representation using steerable pyramids. To this end, they apply a set of oriented filters to decompose an image into phase and amplitude. It is shown that the phase has a direct correlation to local motions in the frame. The general idea is nice. I have some comments: - The explanation for computing the phase and its relation to global motion is nice. However, I expected a more formal way of expressing it. Also, some explanations do not seem sufficient. For example, what exactly are the oriented filters that are used? Are they trainable parameters? - The tasks are well organized and the planned experiments are clear. - I could not find any reason why the authors ignore the information provided by amplitude, even though it does not correspond to motion. - The third task, i.e. motion transfer in images, is a very interesting proposed task, particularly given the demo that the authors provided. However, I would expect a stronger proof of concept for the proposed motion representation. It would be nice to compare the provided preliminary results with optical flow as well. Also, as mentioned, aligning the object in the image to the video frames is a critical necessity. confidence: 3: The reviewer is fairly confident that the evaluation is correct
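To make the "oriented filters decompose the image into phase and amplitude" step both reviews discuss more concrete, the sketch below uses a single complex (quadrature) Gabor filter as a stand-in for one level and orientation of the paper's complex steerable pyramid; the filter parameters and the reliability threshold are assumptions.

```python
# Illustrative Eulerian motion cue from local phase: the temporal phase
# difference at fixed pixels, measured through one oriented complex filter.
import numpy as np
from scipy.signal import convolve2d

def complex_gabor(size=15, sigma=3.0, freq=0.25, theta=0.0):
    """Oriented complex Gabor: Gaussian envelope times a complex carrier."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return envelope * np.exp(1j * 2 * np.pi * freq * xr)

def local_phase(frame, kernel):
    response = convolve2d(frame, kernel, mode='same', boundary='symm')
    return np.angle(response), np.abs(response)

rng = np.random.default_rng(0)
frame_t = rng.random((64, 64))
frame_t1 = np.roll(frame_t, shift=1, axis=1)      # simulate 1-pixel horizontal motion
kernel = complex_gabor(theta=0.0)
phase_t, amp_t = local_phase(frame_t, kernel)
phase_t1, _ = local_phase(frame_t1, kernel)
# Wrap the difference to (-pi, pi]; high-amplitude pixels give reliable phase.
dphase = np.angle(np.exp(1j * (phase_t1 - phase_t)))
reliable = amp_t > np.percentile(amp_t, 75)
print('mean phase shift over reliable pixels:', dphase[reliable].mean())
```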
Hk8FG6Vi
Human Pose Estimation in Space and Time using 3D CNN
[ "Agne Grinciunaite", "Amogh Gudi", "Emrah Tasli", "Marten den Uyl" ]
This paper explores the capabilities of convolutional neural networks to deal with a task that is easily manageable for humans: perceiving 3D pose of a human body from varying angles. However, in our approach, we are restricted to using a monocular vision system. For this purpose, we apply the convolutional neural networks approach on RGB videos and extend it to three dimensional convolutions. This is done via encoding the time dimension in videos as the 3rd dimension in convolutional space, and directly regressing to human body joint positions in 3D coordinate space. This research shows the ability of such a network to achieve state-of-the-art performance on the selected Human3.6M dataset, thus demonstrating the possibility of successfully representing a temporal data with an additional dimension in the convolutional operation.
[ "space", "time", "convolutional neural networks", "human pose estimation", "cnn", "capabilities", "task", "manageable", "humans" ]
https://openreview.net/pdf?id=Hk8FG6Vi
https://openreview.net/forum?id=Hk8FG6Vi
Hk7Bkyas
review
1,473,208,123,208
Hk8FG6Vi
[ "everyone" ]
[ "(anonymous)" ]
ECCV2016.org/BNMW
2016
title: Review for Human Pose Estimation in Space and Time using 3D CNN rating: 7: Good paper, accept review: The paper proposes to apply 3D convolutions to the pose estimation problem from video data. Interestingly, the paper uses the Human3.6M dataset. Interesting results are obtained for some pose categories. This work shows some potential. Perhaps it lacks a bit of novelty and technical depth. The paper is well written with good implementation details. Overall, a good paper for a workshop presentation. confidence: 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
Hk8FG6Vi
Human Pose Estimation in Space and Time using 3D CNN
[ "Agne Grinciunaite", "Amogh Gudi", "Emrah Tasli", "Marten den Uyl" ]
This paper explores the capabilities of convolutional neural networks to deal with a task that is easily manageable for humans: perceiving 3D pose of a human body from varying angles. However, in our approach, we are restricted to using a monocular vision system. For this purpose, we apply the convolutional neural networks approach on RGB videos and extend it to three dimensional convolutions. This is done via encoding the time dimension in videos as the 3rd dimension in convolutional space, and directly regressing to human body joint positions in 3D coordinate space. This research shows the ability of such a network to achieve state-of-the-art performance on the selected Human3.6M dataset, thus demonstrating the possibility of successfully representing a temporal data with an additional dimension in the convolutional operation.
[ "space", "time", "convolutional neural networks", "human pose estimation", "cnn", "capabilities", "task", "manageable", "humans" ]
https://openreview.net/pdf?id=Hk8FG6Vi
https://openreview.net/forum?id=Hk8FG6Vi
HyM3Q-gn
review
1,473,414,058,197
Hk8FG6Vi
[ "everyone" ]
[ "~Silvia_Laura_Pintea1" ]
ECCV2016.org/BNMW
2016
title: Human Pose Estimation in Space and Time using 3D CNN rating: 7: Good paper, accept review: 1. Paper and Review Summary. The paper proposes estimating 3D body poses using 3D CNN. The claim of the paper is that the temporal dimension, in the video frames, helps discriminate the 3D body-joint locations. ------------------- Positive Points: ------------------- + The idea of mapping from the temporal video aspect to the 3D body joints seems rather nice. + The work achieves promising results in practice. ------------------- Negative Points: ------------------- - The paper presentation and writing needs thorough revising. - Better clarification of the paper contribution when compared to existing work. 2. Paper strengths. The paper proposes the use of temporal convolutions to learn 3D body joints. The proposed approach seems to be promising in practice. 3. Paper suggested improvements. The paper exposition could greatly benefit from rewriting and revising the text. Attention must be specifically paid to missing/wrong prepositions. The novelty of the paper should be better emphasized in introduction and related work. The claim that using the temporal dimension of the video helps discriminate between different body joints and helps in better localizing them over time, seems interesting. Could you see a connection with body-joint trajectory prediction or plausible future position prediction? The paper would benefit from a more clear comparison between the proposed method and other deep nets methods for body pose estimation. Maybe split related work into: (i) body pose prediction using temporal information [e.g. J J Tompson, et. al, NIPS, 2014], (ii) body pose prediction from 2D information, (iii) body pose prediction from 3D (depth) data. In the experimental evaluation, defining the used MPJEPE performance measure would be welcomed. In the section 5 you mention: "the model performs worse on the actions where people are sitting". Do you have insight as to why is this the case? Can it be that the input temporal dimension (5 video frames only) is too short to capture the position of occluded joints? A possible experiment would be to vary the temporal window; an alternative is using an RNN in combination with your 3D net to capture longer-term trajectories of the joints. Additionally, you mention "freely moving upper body joints like hands were poorly predicted". A possible approach would be to consider a part-based 3D network --- you could use multiple streams and enforce that different streams focus on different areas (hands, legs, torso) of the body using additional per-stream losses, while still predicting as an end-goal the complete body pose. ------------------- Detailed Comments: ------------------- - Specific attention must be paid to missing/extra prepositions and determiners: "we apply the convolutional neural networks approach", maybe "a convolutional neural network approach"; "successfully representing a temporal data", without "a"; "needs to account during training", use "account for"; "Cost function to be minimized [..] was", use "The cost function.."; "the two most recent work", use "works", etc. - "The proposed network design in this study has been developed as part of a master thesis work.", irrelevant. - The conclusion in section 2 is not convincing: "this work explores the effects of representing the temporal dimension in data as an additional spatial dimension in convolutions.", this is not the contribution of this work, this is what 3D networks do. 
Highlight the actual contribution. confidence: 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
Hk8FG6Vi
Human Pose Estimation in Space and Time using 3D CNN
[ "Agne Grinciunaite", "Amogh Gudi", "Emrah Tasli", "Marten den Uyl" ]
This paper explores the capabilities of convolutional neural networks to deal with a task that is easily manageable for humans: perceiving 3D pose of a human body from varying angles. However, in our approach, we are restricted to using a monocular vision system. For this purpose, we apply the convolutional neural networks approach on RGB videos and extend it to three dimensional convolutions. This is done via encoding the time dimension in videos as the 3rd dimension in convolutional space, and directly regressing to human body joint positions in 3D coordinate space. This research shows the ability of such a network to achieve state-of-the-art performance on the selected Human3.6M dataset, thus demonstrating the possibility of successfully representing a temporal data with an additional dimension in the convolutional operation.
[ "space", "time", "convolutional neural networks", "human pose estimation", "cnn", "capabilities", "task", "manageable", "humans" ]
https://openreview.net/pdf?id=Hk8FG6Vi
https://openreview.net/forum?id=Hk8FG6Vi
ry6MhiTo
review
1,473,260,565,207
Hk8FG6Vi
[ "everyone" ]
[ "(anonymous)" ]
ECCV2016.org/BNMW
2016
rating: 7: Good paper, accept review: This paper proposes 3D-CNN, a method to estimate the 3D locations (in space) of body joints from 2D image sequences composed of 5 frames. This is achieved by performing convolutions along 3 dimensions (2D space + time). The content of the paper is clear and easy to follow. Even though the proposed method has a limited technical contribution, experiments on the Human3.6M dataset show that it is able to reach competitive results when estimating the 3D location of body joints. Furthermore, the experiments revealed some weaknesses of the proposed 3D-CNN method when handling occlusions and the joints from freely moving body parts. The problem of human pose estimation is of interest for real-time applications, e.g. gesture and action recognition. If the paper is accepted for publication, I encourage the authors to discuss the computation/processing times of the proposed method. This might strengthen the potential of the proposed method. confidence: 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
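The reviews above describe the core mechanism as 3D convolutions over a short RGB clip followed by direct regression to 3D joint coordinates. A minimal sketch of such a network is given below; the layer sizes, the number of joints (17) and the input resolution are assumptions, not the authors' exact architecture.

```python
# Minimal 3D-CNN regressor: spatio-temporal convolutions over a 5-frame
# RGB clip, ending in a fully connected regression to 3D joint positions.
import torch
import torch.nn as nn

class Pose3DCNN(nn.Module):
    def __init__(self, n_joints=17):
        super().__init__()
        self.n_joints = n_joints
        self.features = nn.Sequential(
            nn.Conv3d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),            # pool space, keep all 5 frames
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),
            nn.Conv3d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d((1, 4, 4)),                # collapse time, keep coarse space
        )
        self.regressor = nn.Linear(128 * 4 * 4, n_joints * 3)

    def forward(self, clip):                                # clip: (B, 3, T, H, W)
        h = self.features(clip).flatten(1)
        return self.regressor(h).view(-1, self.n_joints, 3)

model = Pose3DCNN()
clip = torch.randn(2, 3, 5, 112, 112)                       # two 5-frame RGB clips
pred = model(clip)                                          # (2, 17, 3) joint coordinates
# An MPJPE-style objective is the mean Euclidean distance to ground-truth joints.
gt = torch.zeros_like(pred)
loss = (pred - gt).norm(dim=-1).mean()
loss.backward()
```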
Hy9PzwEo
Efficient Two-Stream Motion and Appearance 3D CNNs for Video Classification
[ "Ali Diba", "Ali Mohammad Pazandeh", "Luc Van Gool" ]
The video and action classification have extremely evolved by deep neural networks specially with two stream CNN using RGB and optical flow as inputs and they present outstanding performance in terms of video analysis. One of the shortcoming of these methods is handling motion information extraction which is done out side of the CNNs and relatively time consuming also on GPUs. So proposing end-to-end methods which are exploring to learn motion representation, like 3D-CNN can achieve faster and accurate performance. We present some novel deep CNNs using 3D architecture to model actions and motion representation in an efficient way to be accurate and also as fast as real-time. Our new networks learn distinctive models to combine deep motion features into appearance model via learning optical flow features inside the network.
[ "cnns", "motion", "appearance", "video classification", "methods", "motion representation", "efficient", "video", "action classification", "deep neural networks" ]
https://openreview.net/pdf?id=Hy9PzwEo
https://openreview.net/forum?id=Hy9PzwEo
HyxalxAj
review
1,473,278,136,311
Hy9PzwEo
[ "everyone" ]
[ "~Jan_C_van_Gemert1" ]
ECCV2016.org/BNMW
2016
title: Review rating: 8: Top 50% of accepted papers, clear accept review: 1. Paper Summary. The paper proposes to add extra supervision in the form of optical flow to end-to-end action recognition. This removes the dependency of pre-computing hand-crafted optical flow as in the original two-stream network of [6]. The paper offers a 3D convolutional network for both the motion and the appearance stream. Results on UCF101 show good performance while obtaining a huge speedup due to not requiring optical flow pre-computation. 2. Paper Strengths. - Elegant and simple - Good results - Good speed 3. Paper Weaknesses. - Clarity and writing can be improved - Hand-crafted optical flow is still needed for training (but this is not to be avoided in this kind of setup) 4. Preliminary Rating. Oral/Poster 5. Preliminary Evaluation. Detailed comments: Citation: I missed "Flownet: Learning Optical Flow with Convolutional Net- works" by Fischer et al., ICCV 2015 in the references. Spelling and grammar: 'out side' = 'outside' 'using 3d architecture' = 'using a 3d architecture' 'and also as fast as real time' = 'and real time' 'exclude action recognition ... between .. features based methods' = sentence is not clear. 'fallowing' = 'following. 'the common datasets' = 'common datasets' 'As usually action recognition datasets contain videos' = what is the point of this sentence? 'To computer optical' = 'to compute optical' Clarity: Intro, 'having not enough number of videos'. The sports1M dataset contains 1 million videos. Is the number of videos really the culprit? Fig 1: Why is the SVM used? Why not train end-to-end? Is the 3D convolution on the appearance needed? Without 3D it becomes possible to pre-train on static datasets such as Imagenet. For end-to-end learning of optical flow I missed a reference to "flownet" which uses a very simple 3D convolution of only 2 frames. How good is the quality of the optical flow? Would it be possible to give some sense of its quality? (End-point-error numbers?) Would it be possible to learn optical flow from the same filters as used for the class score (fig 3)? Ie: what is the effect of sharing filters? confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct
Hy9PzwEo
Efficient Two-Stream Motion and Appearance 3D CNNs for Video Classification
[ "Ali Diba", "Ali Mohammad Pazandeh", "Luc Van Gool" ]
The video and action classification have extremely evolved by deep neural networks specially with two stream CNN using RGB and optical flow as inputs and they present outstanding performance in terms of video analysis. One of the shortcoming of these methods is handling motion information extraction which is done out side of the CNNs and relatively time consuming also on GPUs. So proposing end-to-end methods which are exploring to learn motion representation, like 3D-CNN can achieve faster and accurate performance. We present some novel deep CNNs using 3D architecture to model actions and motion representation in an efficient way to be accurate and also as fast as real-time. Our new networks learn distinctive models to combine deep motion features into appearance model via learning optical flow features inside the network.
[ "cnns", "motion", "appearance", "video classification", "methods", "motion representation", "efficient", "video", "action classification", "deep neural networks" ]
https://openreview.net/pdf?id=Hy9PzwEo
https://openreview.net/forum?id=Hy9PzwEo
HJmSqBIs
review
1,472,776,763,415
Hy9PzwEo
[ "everyone" ]
[ "~Basura_Fernando1" ]
ECCV2016.org/BNMW
2016
title: Review for Efficient Two-Stream Motion and Appearance 3D CNNs for Video Classification rating: 10: Top 5% of accepted papers, seminal paper review: This paper proposes a new CNN architecture to learn a two-stream 3D convolutional network for action recognition. The paper points out the disadvantages of extracting optical flow with an external algorithm and then feeding it to the two-stream nets, as is mostly done in the literature. The paper claims the 3D CNN methods are faster than two-stream methods. The method explained in section 3.1 is not clear to me. One straightforward solution to this problem is to use a network that is able to predict the optical flow in an unsupervised manner and then take the output of that network and feed it to a two-stream network. The second method proposed in this paper is interesting, but it still needs to use optical flow obtained from the Brox method to train the network. So why not use the already computed optical flow for two-stream learning? I think, to gain the best advantage of the proposed method, one needs to consider a strategy to estimate the optical flow in an unsupervised manner. In this regard, motion-information-based (dynamics) methods such as Dynamic Images [1] are more interesting to apply in a 3D CNN as they do not rely on any external inputs. There are a couple of missing references [???] in the text in the first paragraph (due to LaTeX compilation issues). The paper needs a bit of proofreading. [1] Dynamic Image Networks for Action Recognition Hakan Bilen, Basura Fernando, Efstratios Gavves, Andrea Vedaldi and Stephen Gould CVPR 2016 confidence: 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
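For reference alongside the two reviews above, the score-level late fusion used by standard two-stream approaches (which the proposed end-to-end networks aim to replace) can be sketched as follows; the stream classifiers here are tiny placeholders over precomputed per-clip features, and the fusion weights are assumptions.

```python
# Weighted late fusion of an appearance stream and a motion stream:
# each stream produces class scores, which are softmax-ed and averaged.
import torch
import torch.nn as nn

n_classes = 101                                     # e.g. UCF101
appearance_clf = nn.Linear(4096, n_classes)         # on appearance-stream clip features
motion_clf = nn.Linear(4096, n_classes)             # on motion-stream clip features

def late_fusion(app_feat, mot_feat, w_app=1.0, w_mot=1.5):
    p_app = torch.softmax(appearance_clf(app_feat), dim=1)
    p_mot = torch.softmax(motion_clf(mot_feat), dim=1)
    return (w_app * p_app + w_mot * p_mot) / (w_app + w_mot)

app_feat = torch.randn(2, 4096)                     # stand-ins for per-clip CNN features
mot_feat = torch.randn(2, 4096)
print(late_fusion(app_feat, mot_feat).argmax(dim=1))
```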
S1tBNiXj
Motion Representation with Acceleration Images
[ "Hirokatsu Kataoka", "Yun He", "Soma Shirakabe", "Yutaka Satoh" ]
Time-differentiated information is an extremely important cue for motion representation. We have applied first-order differential velocity computed from positional information; moreover, we believe that second-order differential acceleration is also a significant feature for motion representation. However, an acceleration image based on a typical optical flow contains motion noise. Until now, the acceleration image has not been employed because the noise is too strong to capture an effective motion feature in an image sequence. On the other hand, recent convolutional neural networks (CNN) are robust against input noise. In this paper, we employ an acceleration stream in addition to the spatial and temporal streams of the two-stream CNN. We clearly show the effectiveness of adding the acceleration stream to the two-stream CNN.
[ "motion representation", "cnn", "acceleration image", "acceleration images information", "time differentiation", "important cue", "differential velocity", "positional information", "differential acceleration" ]
https://openreview.net/pdf?id=S1tBNiXj
https://openreview.net/forum?id=S1tBNiXj
Bkl12tBi
review
1,472,728,024,187
S1tBNiXj
[ "everyone" ]
[ "~Basura_Fernando1" ]
ECCV2016.org/BNMW
2016
title: Review for Motion Representation with Acceleration Images rating: 10: Top 5% of accepted papers, seminal paper review: This paper proposes to exploit higher-order motion patterns using optical flow. In fact, the paper exploits a new stream of information, called the acceleration stream, which amounts to applying optical flow algorithms over optical flow images. In a sense, the acceleration image is a hierarchical extension of optical flow. The paper compares the effectiveness of the method with two-stream networks. A couple of suggestions and concerns: The information in the acceleration image is quite sparse and probably non-smooth. Does it make sense to consider the time-varying-mean representation of the acceleration image, similar to [REF1]? Also, I think the main idea of the paper is conceptually similar to hierarchical temporal pattern encoding methods such as [REF2]. It would be good to evaluate the effect of this method on the Hollywood2 dataset as well. [REF1] Rank Pooling for Action Recognition, Basura Fernando, Efstratios Gavves, Jose Oramas, Amir Ghodrati and Tinne Tuytelaars, IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) [REF2] Discriminative Hierarchical Rank Pooling for Activity Recognition Basura Fernando, Peter Anderson, Marcus Hutter and Stephen Gould CVPR 2016 confidence: 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
S1tBNiXj
Motion Representation with Acceleration Images
[ "Hirokatsu Kataoka", "Yun He", "Soma Shirakabe", "Yutaka Satoh" ]
Time-differentiated information is an extremely important cue for motion representation. We have applied first-order differential velocity computed from positional information; moreover, we believe that second-order differential acceleration is also a significant feature for motion representation. However, an acceleration image based on a typical optical flow contains motion noise. Until now, the acceleration image has not been employed because the noise is too strong to capture an effective motion feature in an image sequence. On the other hand, recent convolutional neural networks (CNN) are robust against input noise. In this paper, we employ an acceleration stream in addition to the spatial and temporal streams of the two-stream CNN. We clearly show the effectiveness of adding the acceleration stream to the two-stream CNN.
[ "motion representation", "cnn", "acceleration image", "acceleration images information", "time differentiation", "important cue", "differential velocity", "positional information", "differential acceleration" ]
https://openreview.net/pdf?id=S1tBNiXj
https://openreview.net/forum?id=S1tBNiXj
ry2llgxA
review
1,475,506,163,815
S1tBNiXj
[ "everyone" ]
[ "~Amogh_Gudi1" ]
ECCV2016.org/BNMW
2016
title: YORO (You Only Review Once): Motion Representation with Acceleration Images rating: 7: Good paper, accept review: The paper explores the use of the second-order derivative of an image stream, aka "acceleration stream", for the task of action recognition. This higher-order motion information is obtained by applying partial derivatives over optical flow/dense trajectories data obtained from the input image stream. The task of action recognition is performed using a multi-stream CNN, which is essentially an ensemble of multiple independently trained CNNs whose outputs are merged using late fusion (weighted averaging). Merits: + The approach of providing multiple higher-order derivatives of the temporal image data to a deep network is quite interesting. + The approach is very simple and easy to understand. The work seems reproducible. + Experiments for comparison between all combinations of streams are minimally thorough. Critique: - In section 2, the authors write about the state of the art on standard benchmarks like UCF101 and HMDB51. However, they do not test their approach on these. - In equation 5, the values of the alpha and beta weights are fixed at 2.0. The spatial stream is weighted lower than the temporal and acceleration streams, even though the spatial stream by itself performs better than the other two (Table 1). An explanation about this would be appreciated. - "Surprisingly, the acceleration stream performs better than the temporal stream" in section 4: Not much explanation is given about this. Perhaps a figure showing examples classified correctly and wrongly by the acceleration and temporal streams, respectively, would be very helpful. - In Table 1, adding the S+A and T+A stream-combination performance rates could be informative (and complete). - "TDDs have achieved the highest performance in several benchmarks, such as UCF101 (91.5%) and HMDB51 (65.9%)": text appears repeated in sections 1 and 2. - Image captions can be more descriptive in general. - The overall writing style of the paper can be improved (more academic, better sentence framing in English, grammar). confidence: 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
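The "acceleration image" that both reviews discuss is a second temporal derivative built from optical flow. One possible reading is sketched below, where acceleration is simply the difference of consecutive Farneback flow fields; the paper's exact definition and flow algorithm may differ.

```python
# Sketch of an acceleration image as the frame-to-frame difference of
# two consecutive optical-flow fields (second-order motion).
import cv2
import numpy as np

def farneback(prev_gray, next_gray):
    return cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

def acceleration_image(frame0, frame1, frame2):
    """frames: consecutive grayscale uint8 images of shape (H, W)."""
    flow_01 = farneback(frame0, frame1)       # velocity at t
    flow_12 = farneback(frame1, frame2)       # velocity at t+1
    return flow_12 - flow_01                  # (H, W, 2) acceleration field

# Toy usage with synthetic frames: constant velocity, then a speed-up.
rng = np.random.default_rng(0)
f0 = (rng.random((120, 160)) * 255).astype(np.uint8)
f1 = np.roll(f0, 1, axis=1)
f2 = np.roll(f0, 3, axis=1)
acc = acceleration_image(f0, f1, f2)
print('mean |acceleration|:', np.linalg.norm(acc, axis=2).mean())
```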
SyVL5tEi
Segmentation Free Object Discovery in Video
[ "Giovanni Cuffaro", "Federico Becattini", "Claudio Baecchi", "Lorenzo Seidenari", "Alberto Del Bimbo" ]
In this paper we present a simple yet effective approach to extend without supervision any object proposal from static images to videos. Unlike previous methods, these spatio-temporal proposals, to which we refer as tracks, are generated relying on little or no visual content by only exploiting bounding boxes spatial correlations through time. The tracks that we obtain are likely to represent objects and are a general-purpose tool to represent meaningful video content for a wide variety of tasks. For unannotated videos, tracks can be used to discover content without any supervision. As further contribution we also propose a novel and dataset-independent method to evaluate a generic object proposal based on the entropy of a classifier output response. We experiment on two competitive datasets, namely YouTube Objects and ILSVRC-2015 VID.
[ "tracks", "supervision", "video", "simple", "effective", "object proposal", "static images", "videos" ]
https://openreview.net/pdf?id=SyVL5tEi
https://openreview.net/forum?id=SyVL5tEi
rJHK8HE2
review
1,473,693,309,559
SyVL5tEi
[ "everyone" ]
[ "~Zhenyang_Li1" ]
ECCV2016.org/BNMW
2016
title: Review for Segmentation Free Object Discovery in Video rating: 9: Top 15% of accepted papers, strong accept review: The paper presents a novel way of generating spatio-temporal object proposals in videos. In particular, the proposed method relies on object proposals in static images and links them through time into a "track" by matching with temporal consistency. This is simple but effective. Furthermore, the paper introduces an approach, named the time-to-live counter (TTL), to determine when the track of proposals terminates, and the missing frames caused by track fragmentation are linearly interpolated. In the end, the paper proposes a way of ranking all the tracks in a video by considering their objectness and temporal consistency. In general the paper is very interesting and has a clear contribution. However, the reviewer has some questions that need clarification: 1) There are many related works on action proposals, such as "Action localization with tubelets from motion. Jain et al. CVPR2014". What is the benefit of the paper compared with this kind of method? If it's about efficiency, the paper should provide some evaluation of it. 2) The new evaluation method introduced in the paper is based on objectness entropy and is dataset-independent. However, the other important factor, IoU, which also indicates box quality, is missing. confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct
SyVL5tEi
Segmentation Free Object Discovery in Video
[ "Giovanni Cuffaro", "Federico Becattini", "Claudio Baecchi", "Lorenzo Seidenari", "Alberto Del Bimbo" ]
In this paper we present a simple yet effective approach to extend without supervision any object proposal from static images to videos. Unlike previous methods, these spatio-temporal proposals, to which we refer as tracks, are generated relying on little or no visual content by only exploiting bounding boxes spatial correlations through time. The tracks that we obtain are likely to represent objects and are a general-purpose tool to represent meaningful video content for a wide variety of tasks. For unannotated videos, tracks can be used to discover content without any supervision. As further contribution we also propose a novel and dataset-independent method to evaluate a generic object proposal based on the entropy of a classifier output response. We experiment on two competitive datasets, namely YouTube Objects and ILSVRC-2015 VID.
[ "tracks", "supervision", "video", "simple", "effective", "object proposal", "static images", "videos" ]
https://openreview.net/pdf?id=SyVL5tEi
https://openreview.net/forum?id=SyVL5tEi
H1GJcU8j
review
1,472,780,762,014
SyVL5tEi
[ "everyone" ]
[ "~Basura_Fernando1" ]
ECCV2016.org/BNMW
2016
title: Review for Segmentation Free Object Discovery in Video rating: 10: Top 5% of accepted papers, seminal paper review: The paper proposes a temporal object proposal method, extending traditional object proposals to videos. It calls these generated temporal proposals tracks. For efficiency, the paper first extracts per-frame bounding boxes from videos and exploits their temporal consistency to find tracks. The paper also proposes a method to recover missing boxes using the concept of a time-to-live counter. Temporal non-maxima suppression is used to get rid of redundancy. Finally, tracks are ranked using the object proposal scores and the IoU scores. The proposed method is interesting, and it would perhaps also make sense to use such a technique in action recognition and for modelling human-object interactions. In particular, it would be interesting to apply this method on datasets such as Cooking Activities to provide action proposals as well. Overall, a good paper. confidence: 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
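The track-building step summarised in the two reviews above (link per-frame proposals over time by IoU, keep a time-to-live counter to survive missed detections, and linearly interpolate the gaps) can be sketched as follows; the IoU threshold and TTL budget are assumptions, and the real method scores and ranks many candidate tracks rather than a single seed.

```python
# Greedy IoU linking of per-frame box proposals into one track, with a
# TTL counter and linear interpolation of skipped frames.
import numpy as np

def iou(a, b):
    """Boxes as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-12)

def build_track(per_frame_boxes, iou_thr=0.5, ttl_max=5):
    """per_frame_boxes: list over frames, each a list of candidate boxes."""
    track = {0: per_frame_boxes[0][0]}             # seed with the first proposal
    last, last_t, ttl = track[0], 0, ttl_max
    for t, boxes in enumerate(per_frame_boxes[1:], start=1):
        matches = [b for b in boxes if iou(last, b) >= iou_thr]
        if matches:
            best = max(matches, key=lambda b: iou(last, b))
            # Linearly interpolate any frames skipped while the TTL was ticking.
            for g in range(last_t + 1, t):
                w = (g - last_t) / (t - last_t)
                track[g] = tuple((1 - w) * l + w * n for l, n in zip(last, best))
            track[t] = best
            last, last_t, ttl = best, t, ttl_max
        else:
            ttl -= 1
            if ttl == 0:
                break                              # terminate the track
    return track

frames = [[(10, 10, 50, 50)], [], [(14, 12, 54, 52)], [(18, 14, 58, 54)]]
print(build_track(frames))
```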
ryeujDzo
Human Action Recognition without Human
[ "Yun He", "Soma Shirakabe", "Yutaka Satoh", "Hirokatsu Kataoka" ]
The objective of this paper is to evaluate “human action recognition without human”. Motion representation is frequently discussed in human action recognition. We have examined several sophisticated options, such as dense trajectories (DT) and the two-stream convolutional neural network (CNN). However, some features from the background could be too strong, as shown in some recent studies on human action recognition. Therefore, we considered whether a background sequence alone can classify human actions in current large-scale action datasets (e.g., UCF101). In this paper, we propose a novel concept for human action analysis that is named “human action recognition without human”. An experiment clearly shows the effect of a background sequence for understanding an action label.
[ "human action recognition", "human", "background sequence", "objective", "motion representation", "several sophisticated options", "dense trajectories", "convolutional neural network" ]
https://openreview.net/pdf?id=ryeujDzo
https://openreview.net/forum?id=ryeujDzo
BkHbu4Sj
review
1,472,706,556,886
ryeujDzo
[ "everyone" ]
[ "~Basura_Fernando1" ]
ECCV2016.org/BNMW
2016
title: Review for Human Action Recognition without Human rating: 10: Top 5% of accepted papers, seminal paper review: This paper analyses the impact of background in action recognition using the UCF101 dataset. The paper is very interesting and finds that the background indeed has a huge impact on action recognition performance. Using only the background, the paper manages to obtain 47.2% on UCF101. Using only the human regions, it obtains 56.91%, which is only about 9.7 points better than the background-only model. Clearly, this is a very interesting finding and perhaps shows the limitations of the UCF101 dataset. It would be good to analyse the same thing on other datasets such as HMDB51 and ActivityNet. It would be interesting to perform a class-based analysis on this task: which classes benefit the most from the background, and which classes do not rely on the background? Then one could build a motion-only UCF101 dataset where one actually has to rely on good motion representations to do action recognition. This would be a very good contribution to the community. Another interesting aspect of this finding is to see whether one can build a predictor for each action class to tell whether a motion-based approach is needed to recognize the activity. Very interesting paper with a lot of potential for future research. confidence: 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
ryeujDzo
Human Action Recognition without Human
[ "Yun He", "Soma Shirakabe", "Yutaka Satoh", "Hirokatsu Kataoka" ]
The objective of this paper is to evaluate “human action recognition without human”. Motion representation is frequently discussed in human action recognition. We have examined several sophisticated options, such as dense trajectories (DT) and the two-stream convolutional neural network (CNN). However, some features from the background could be too strong, as shown in some recent studies on human action recognition. Therefore, we considered whether a background sequence alone can classify human actions in current large-scale action datasets (e.g., UCF101). In this paper, we propose a novel concept for human action analysis that is named “human action recognition without human”. An experiment clearly shows the effect of a background sequence for understanding an action label.
[ "human action recognition", "human", "background sequence", "objective", "motion representation", "several sophisticated options", "dense trajectories", "convolutional neural network" ]
https://openreview.net/pdf?id=ryeujDzo
https://openreview.net/forum?id=ryeujDzo
ryTkQxk2
review
1,473,344,229,691
ryeujDzo
[ "everyone" ]
[ "(anonymous)" ]
ECCV2016.org/BNMW
2016
title: Review rating: 8: Top 50% of accepted papers, clear accept review: --- Paper summary: --- This work investigates the influence of background information on the recognition of actions in videos. While the foreground action is typically deemed most influential, there are common background elements present in videos representing the same action. Here, the authors propose to quantify the influence of background for actions. By blacking out central parts of the video frames, representations are created without the focus on the main actions. Experimental results show that although the central parts of the video are more influential than the background, there is significant information in background elements to recognize actions. --- Paper strengths: --- + The investigation is interesting and fits the workshop well. It is a topic that should encourage further discussion at the workshop itself. Is the information in the background inherently useful (like context in images)? Is it a limitation of the dataset? This paper provides a useful start for discussions. + Clear experimental setup, correct references to similar papers (e.g. Jain et al. CVPR 2015), and encouraging results. The results of this paper can be used to extend the scope of current feature extraction methods for action recognition. --- Paper weaknesses: --- - The experimental setup is clear, but makes a very strong assumption which I do not believe to be true. The authors assume that the action occurs within the central part of each frame (e.g. central and roughly half width, half height). This is hardly an ideal setup, as foreground actions can very easily occur outside that window. Look e.g. at Figure 4a (bottom). Clearly, a significant part of the foreground action is visible in the background part. As a result, it is not possible to conclude that background information is useful, as another conclusion (foreground actions also occur in background regions) is equally likely given the experimental setup. - The central assumption is also not required. For a subset of UCF101, bounding box information is given. An experiment on this subset with the bounding box areas blacked (and inverse), would be very interesting and more correct than the current setup. Perhaps another point of discussion at the workshop. confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct
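The masking protocol questioned in the review above can be sketched as follows: black out a central window of roughly half the width and height to obtain a "background only" clip, or keep only that window for a "human only" clip. The exact window size and placement used by the authors are assumptions.

```python
# Split a video into background-only and center-only versions by masking
# a central window (assumed to contain the actor).
import numpy as np

def split_center(frames, frac=0.5):
    """frames: (T, H, W, C) uint8 video. Returns (background_only, center_only)."""
    _, h, w, _ = frames.shape
    y0, y1 = int(h * (1 - frac) / 2), int(h * (1 + frac) / 2)
    x0, x1 = int(w * (1 - frac) / 2), int(w * (1 + frac) / 2)
    background = frames.copy()
    background[:, y0:y1, x0:x1, :] = 0             # hide the (assumed) actor region
    center = np.zeros_like(frames)
    center[:, y0:y1, x0:x1, :] = frames[:, y0:y1, x0:x1, :]
    return background, center

video = (np.random.default_rng(0).random((16, 240, 320, 3)) * 255).astype(np.uint8)
bg_only, center_only = split_center(video)
```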
1WoA5w4ggSMnPB1oinmm
Signs in time: Encoding human motion as a temporal image
[ "Joon Son Chung", "Andrew Zisserman" ]
The goal of this work is to recognise and localise short temporal signals in image time series, where strong supervision is not available for training. To this end we propose an image encoding that concisely represents human motion in a video sequence in a form that is suitable for learning with a ConvNet. The encoding reduces the pose information from an image to a single column, dramatically diminishing the input requirements for the network, but retaining the essential information for recognition. The encoding is applied to the task of recognizing and localizing signed gestures in British Sign Language (BSL) videos. We demonstrate that using the proposed encoding, signs as short as 10 frames duration can be learnt from clips lasting hundreds of frames using only weak (clip level) supervision and with considerable label noise.
[ "human motion", "signs", "time", "image", "encoding", "temporal image signs", "temporal image", "goal", "work", "short temporal signals" ]
https://openreview.net/pdf?id=1WoA5w4ggSMnPB1oinmm
https://openreview.net/forum?id=1WoA5w4ggSMnPB1oinmm
HywQZe0o
review
1,473,278,239,483
1WoA5w4ggSMnPB1oinmm
[ "everyone" ]
[ "~Jan_C_van_Gemert1" ]
ECCV2016.org/BNMW
2016
title: Review comment: 1. Paper Summary. The paper presents a new representation useful for Sign Language word classification needing only supervision at the word level, not at the sign level. The method first detects 6 human pose key points and the representation is a heat map of these key points, including their derivative. The advantage of the heatmap is that it can be used in a convolution. In addition, localization can be performed. Results on a new and large dataset look promising. 2. Paper Strengths. + Simple and elegant representation + Encouraging classification results + Promising localization that come for free 3. Paper Weaknesses. - Limited to human sign language word classification - Unclear how existing representations would fare (3D convolutions) - Hard-coded border handling (mirroring) 4. Preliminary Rating. Poster 5. Preliminary Evaluation. Detailed comments: Novelty: Intro, "than can be used by a convnet": with current 3D convNets a video sequence can directly be used without converting the video to another representation. Why not use a 3D convNet? Clarity: Intro, "25 X 10". Where do these numbers come from? How do they relate to the 10 seconds of video? Is the 10 seconds the number 10? What role does framerate play? Scope: Intro, "Apply [...] to Sign Language". Can the representation be generalized to other tasks as well? Scope, Section 2. "the objectives are to determine if a target sequence is present", this is then not general sign language recognition? It is sign-language verification? It means that pre-trained sequences can be recognized? Clarity, Section 2, Can the advantage of using a heatmap over the raw x and y positions directly be made more clear? Clarity, Figure 1. Why is border mirroring hard-coded in the representation? This limits the size of the convolution kernel. Border effects should be handled by the convolution operator. Scope. Section 2, "Motion History Images", well, the background motion would not be a problem for motion history images if the same 6 human pose keypoints are used as in this submission. Thus, Motion History Images seem a baseline to compare against. Clarity. Section 2.1. Where does the number 330 come from? Clarity. Section 3. It would be nice if some example words can be given? Clarity. Section 5, Experiments. Are there only qualitative experiments? (an example video?). This is fine, but could be made more clear.
1WoA5w4ggSMnPB1oinmm
Signs in time: Encoding human motion as a temporal image
[ "Joon Son Chung", "Andrew Zisserman" ]
The goal of this work is to recognise and localise short temporal signals in image time series, where strong supervision is not available for training. To this end we propose an image encoding that concisely represents human motion in a video sequence in a form that is suitable for learning with a ConvNet. The encoding reduces the pose information from an image to a single column, dramatically diminishing the input requirements for the network, but retaining the essential information for recognition. The encoding is applied to the task of recognizing and localizing signed gestures in British Sign Language (BSL) videos. We demonstrate that using the proposed encoding, signs as short as 10 frames duration can be learnt from clips lasting hundreds of frames using only weak (clip level) supervision and with considerable label noise.
[ "human motion", "signs", "time", "image", "encoding", "temporal image signs", "temporal image", "goal", "work", "short temporal signals" ]
https://openreview.net/pdf?id=1WoA5w4ggSMnPB1oinmm
https://openreview.net/forum?id=1WoA5w4ggSMnPB1oinmm
SyO2pgHo
review
1,472,691,631,744
1WoA5w4ggSMnPB1oinmm
[ "everyone" ]
[ "~Basura_Fernando1" ]
ECCV2016.org/BNMW
2016
title: RPBC : Review for Signs in time: Encoding human motion as a temporal image comment: The paper proposes a very interesting method for human motion classification and localisation in video sequences. In particular, the paper focuses on the problem of sign language gesture recognition from TV broadcasts. The main contribution and novelty of the paper is the Kinetogram. A Kinetogram is a very compact representation (or summary). It captures the evolution of a specific set of interesting key points in a video sequence. Kinetograms keep the locations of key points such as the head, right arm and left arm as a bitmap for each time step. The paper uses 330 time steps, resulting in an image of 10x330. Limitations and suggestions: It seems to me that this method can only encode a fixed number of time steps. In this paper, the authors use 330 time steps. The size of the image depends not only on the number of time steps but also on the number of key points. It would be good to find a mechanism to obtain a fixed-size representation for variable-size problems. I wonder whether it makes sense to use Fourier-analysis-based methods to process Kinetograms. It would be interesting to use a simple technique such as normalized correlation matching to do the localization in the temporal dimension, or at least to propose such a strategy as a gesture proposal method similar to object proposals. I should say, this is a very interesting solution to a difficult problem.
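The kinetogram the reviews describe packs per-frame keypoint information column by column into a small image that a 2D ConvNet can consume. The sketch below is a simplified coordinate-stacking version; the exact channel layout (10 rows, heat-map smoothing, mirrored border channels mentioned in another review) is assumed, not reproduced.

```python
# Simplified kinetogram: stack keypoint coordinates per frame into a
# fixed-height image of shape (height, T).
import numpy as np

def kinetogram(keypoints, height=10):
    """keypoints: (T, K, 2) array of (x, y) positions for K tracked points
    (e.g. head, left/right hands) over T frames. Returns (height, T)."""
    t, k, _ = keypoints.shape
    cols = keypoints.reshape(t, k * 2).T                 # one column per frame: (2K, T)
    # Normalise each coordinate row to [0, 1] so the "image" is well scaled.
    cols = (cols - cols.min(axis=1, keepdims=True)) / \
           (np.ptp(cols, axis=1, keepdims=True) + 1e-6)
    if cols.shape[0] < height:                           # pad rows up to the fixed height
        cols = np.vstack([cols, np.zeros((height - cols.shape[0], t))])
    return cols[:height]

rng = np.random.default_rng(0)
tracks = rng.random((330, 3, 2))                         # 330 frames, 3 keypoints (toy data)
img = kinetogram(tracks)
print(img.shape)                                         # (10, 330), ready for a 2D ConvNet
```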
1WoA5w4ggSMnPB1oinmm
Signs in time: Encoding human motion as a temporal image
[ "Joon Son Chung", "Andrew Zisserman" ]
The goal of this work is to recognise and localise short temporal signals in image time series, where strong supervision is not available for training. To this end we propose an image encoding that concisely represents human motion in a video sequence in a form that is suitable for learning with a ConvNet. The encoding reduces the pose information from an image to a single column, dramatically diminishing the input requirements for the network, but retaining the essential information for recognition. The encoding is applied to the task of recognizing and localizing signed gestures in British Sign Language (BSL) videos. We demonstrate that using the proposed encoding, signs as short as 10 frames duration can be learnt from clips lasting hundreds of frames using only weak (clip level) supervision and with considerable label noise.
[ "human motion", "signs", "time", "image", "encoding", "temporal image signs", "temporal image", "goal", "work", "short temporal signals" ]
https://openreview.net/pdf?id=1WoA5w4ggSMnPB1oinmm
https://openreview.net/forum?id=1WoA5w4ggSMnPB1oinmm
H1WfPI73
review
1,473,632,009,260
1WoA5w4ggSMnPB1oinmm
[ "everyone" ]
[ "~Dinesh_Jayaraman1" ]
ECCV2016.org/BNMW
2016
title: Keypoint motion "kinetogram" image representation + convnet for action localization comment: This paper deals with the problem of localizing short gestures in video. It proposes the innovative idea of compiling keypoint locations over time into a "kinetogram" image format, tailored for exploitation by convnets. Strengths: > novel idea of compiling temporal signs into an image to enable processing with convnets. > backprop to input pixels for precise temporal localization, while not entirely novel (authors rightly attribute to [11]), is still a neat and interesting choice. > interesting new dataset, compiled from weak labels > strong results on an interesting task Minor weaknesses, and other comments/suggestions: > not many baselines, but forgivable since the task is somewhat novel. > Exposition around kinetogram, the key idea of the paper, could be improved. Fig 1 is confusing, esp the mirroring of channels at borders, and the fact that the labels for 1 through 6 are not aligned with the channels. It is also not clear how there are 10 channels (sec 2.1): 3 channels each for head, right hand and left hand = 9 channels. > Treating the kinetogram as an image means the ordering of channels becomes important. This is a somewhat obvious point for investigation that is unexplored and unaddressed in the paper. > While the nature of the kinetogram input means that CNN architectural choices are a bit of a stab in the dark, some choices here such as using square convolutional kernels (when rows and columns correspond to parts and time instants respectively) seem likely to be suboptimal. > "Single" and "Multi" in Table 2 appear unexplained. > Also related to the Lea et al, "Temporal Convolution Networks" submission to the same workshop. This paper is definitely brave, has very interesting ideas, and will be a great fit for the workshop.
SJ6QPfKc
Back to Basics: Unsupervised Learning of Optical Flow via Brightness Constancy and Motion Smoothness
[ "Jason J. Yu, Adam W. Harley and Konstantinos G. Derpanis" ]
Recently, convolutional networks (convnets) have proven useful for predicting optical flow. Much of this success is predicated on the availability of large datasets that require expensive and involved data acquisition and laborious labeling. To bypass these challenges, we propose an unsupervised approach (i.e., without leveraging groundtruth flow) to train a convnet end-to-end for predicting optical flow between two images. We use a loss function that combines a data term that measures photometric constancy over time with a spatial term that models the expected variation of flow across the image. Together these losses form a proxy measure for losses based on the groundtruth flow. Empirically, we show that a strong convnet baseline trained with the proposed unsupervised approach outperforms the same network trained with supervision on the KITTI dataset.
[ "optical flow", "basics", "unsupervised learning", "brightness constancy", "motion smoothness", "unsupervised", "groundtruth flow", "losses", "convolutional networks", "convnets" ]
https://openreview.net/pdf?id=SJ6QPfKc
https://openreview.net/forum?id=SJ6QPfKc
BJTqKVWC
review
1,475,590,549,238
SJ6QPfKc
[ "everyone" ]
[ "~Amogh_Gudi1" ]
ECCV2016.org/BNMW
2016
title: Back to Basics: Unsupervised Learning of Optical Flow via Brightness Constancy and Motion Smoothness: The Review rating: 9: Top 15% of accepted papers, strong accept review: The paper proposes a fully unsupervised approach towards training a convnet to perform the task of optical flow. This is achieved by using two loss metrics: a photometric loss (the difference between the first image and the warped next image based on the flow prediction) and a smoothness loss (the difference between spatially neighbouring flow predictions). The paper uses the FlowNet architecture to achieve this. Merits: + Getting competitive/superior performance with a fully unsupervised approach vs. a supervised method is a strong result. + Generous technical details are provided. The work seems reproducible from the paper. + Overall, a well-written paper with good sentence framing. The related-work paragraphs seem thorough. Critique/Discussion: - The abstract concludes with "Empirically, we show that a strong convnet baseline trained with the proposed unsupervised approach outperforms the same network trained with supervision on the KITTI dataset". This might seem too strong a claim, considering this happens only in the AEE-NOC setting. Words like competitive results, etc. are perhaps better suited to make general claims (as in Sections 3.3 and 5). - No discussion is provided regarding the low performance on the KITTI avg-all set vs. the high performance on the avg-NOC set. Why, according to the authors, is this unsupervised approach affected more by occlusion, compared to the supervised approach? - Although comparison with supervised state-of-the-art [12] is done, comparison with a concurrent unsupervised approach [11] is missing. Such a comparison can be very insightful (although admittedly, [11] doesn't seem to test quantitatively on a standard flow dataset). confidence: 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
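To spell out the two loss terms under discussion, here is a minimal sketch of the unsupervised proxy objective; the Charbonnier penalty, the bilinear warping via grid_sample and the smoothness weight lam are plausible choices assumed for illustration, not details confirmed against the paper.

import torch
import torch.nn.functional as F

def charbonnier(x, eps=1e-3):
    return torch.sqrt(x * x + eps * eps)

def warp(img2, flow):
    # Inverse-warp img2 towards img1 using the predicted flow of shape (B, 2, H, W).
    b, _, h, w = flow.shape
    xs = torch.arange(w, dtype=flow.dtype, device=flow.device).view(1, 1, 1, w).expand(b, 1, h, w)
    ys = torch.arange(h, dtype=flow.dtype, device=flow.device).view(1, 1, h, 1).expand(b, 1, h, w)
    coords = torch.cat((xs, ys), dim=1) + flow          # absolute sampling positions
    # grid_sample expects coordinates in [-1, 1] with (B, H, W, 2) layout.
    grid = torch.stack((2 * coords[:, 0] / (w - 1) - 1,
                        2 * coords[:, 1] / (h - 1) - 1), dim=-1)
    return F.grid_sample(img2, grid, align_corners=True)

def unsupervised_flow_loss(img1, img2, flow, lam=0.1):
    photometric = charbonnier(img1 - warp(img2, flow)).mean()          # data term
    smooth = (charbonnier(flow[:, :, :, 1:] - flow[:, :, :, :-1]).mean() +
              charbonnier(flow[:, :, 1:, :] - flow[:, :, :-1, :]).mean())
    return photometric + lam * smooth

# Example with random tensors standing in for two frames and a flow prediction.
img1, img2 = torch.rand(2, 1, 3, 64, 64).unbind(0)
print(unsupervised_flow_loss(img1, img2, torch.zeros(1, 2, 64, 64)).item())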
SJ6QPfKc
Back to Basics: Unsupervised Learning of Optical Flow via Brightness Constancy and Motion Smoothness
[ "Jason J. Yu, Adam W. Harley and Konstantinos G. Derpanis" ]
Recently, convolutional networks (convnets) have proven useful for predicting optical flow. Much of this success is predicated on the availability of large datasets that require expensive and involved data acquisition and laborious labeling. To bypass these challenges, we propose an unsupervised approach (i.e., without leveraging groundtruth flow) to train a convnet end-to-end for predicting optical flow between two images. We use a loss function that combines a data term that measures photometric constancy over time with a spatial term that models the expected variation of flow across the image. Together these losses form a proxy measure for losses based on the groundtruth flow. Empirically, we show that a strong convnet baseline trained with the proposed unsupervised approach outperforms the same network trained with supervision on the KITTI dataset.
[ "optical flow", "basics", "unsupervised learning", "brightness constancy", "motion smoothness", "unsupervised", "groundtruth flow", "losses", "convolutional networks", "convnets" ]
https://openreview.net/pdf?id=SJ6QPfKc
https://openreview.net/forum?id=SJ6QPfKc
Byw7UbSs
review
1,472,693,791,384
SJ6QPfKc
[ "everyone" ]
[ "~Basura_Fernando1" ]
ECCV2016.org/BNMW
2016
title: RPBC : Review for Back to Basics: Unsupervised Learning of Optical Flow via Brightness Constancy and Motion Smoothness rating: 10: Top 5% of accepted papers, seminal paper review: This paper proposes a method to predict optical flow from a pair of frames using a CNN in an unsupervised manner. Unlike other similar methods, the paper does not use motion-field ground truth from real scenes. The paper uses a combination of two losses: one measures photometric constancy, that is, the difference between the first input image and the (inverse-)warped subsequent image based on the optical flow predicted by the network; the smoothness loss measures the difference between spatially neighbouring flow predictions. This is an interesting idea; however, the primary concept is also used in prior work such as [5]. Even then, I think this work is very interesting as it does not rely on supervision to provide ground-truth information for this specific problem. Some suggestions: At the beginning of training the predicted optical flow is quite noisy, and without much supervision, isn’t this a very hard learning problem for the CNN? The solution space is very large, and at the start of training there is not enough signal to learn what optical flow is. I wonder if the authors used a specific strategy to circumvent this. Another suggestion is to use a semi-supervised approach where a small portion of labelled data is provided along with a large amount of unlabelled data. How could this method be extended to exploit a small amount of labelled data? Perhaps the authors could use a two-branch CNN to do that. I also wonder whether an MRF/CRF-based approach could be used to handle the spatial smoothness term. Overall, this is very good work. confidence: 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
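On the semi-supervised suggestion above, one simple option (a sketch of my own, not something proposed in the paper) is to add a supervised endpoint-error term, evaluated only on the small labelled subset, to the unsupervised proxy loss:

import torch

def endpoint_error(flow_pred, flow_gt):
    # Mean Euclidean distance between predicted and ground-truth flow vectors,
    # for tensors of shape (B, 2, H, W).
    return torch.norm(flow_pred - flow_gt, dim=1).mean()

def semi_supervised_loss(unsup_loss, flow_pred_labelled, flow_gt, alpha=1.0):
    # unsup_loss: the photometric + smoothness proxy computed on unlabelled pairs;
    # the supervised term only sees the labelled subset. alpha is a placeholder weight.
    return unsup_loss + alpha * endpoint_error(flow_pred_labelled, flow_gt)

This keeps a single network rather than the two-branch CNN mentioned above, but the two ideas are compatible.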
Sy9g8Lrj
Mining Spatial and Spatio-Temporal ROIs for Action Recognition
[ "Xiaochen Lian", "Zhuoyuan Chen", "Yi Yang", "Jiang Wang", "Alan Yuille" ]
In this paper, we propose an approach to classify action sequences. We observe that in action sequences the critical features for discriminating between actions occur only within sub-regions of the image. Hence deep network approaches that address the entire image are at a disadvantage. This motivates our strategy which uses static and spatio-temporal visual cues to isolate static and spatio-temporal regions of interest (ROIs). We then use weakly supervised learning to train deep network classifiers using the ROIs as input. More specifically, we combine multiple instance learning (MIL) with convolutional neural networks (CNNs) to select discriminative action cues. This yields classifiers for static images, using the static ROIs, as well as classifiers for short image sequences (16 frames), using spatio-temporal ROIs. Extensive experiments performed on the UCF101 and HMDB51 benchmarks show that both these types of classifiers perform well individually and achieve state of the art performance when combined together.
[ "rois", "classifiers", "action sequences", "static", "spatial", "action recognition", "critical features", "actions", "image" ]
https://openreview.net/pdf?id=Sy9g8Lrj
https://openreview.net/forum?id=Sy9g8Lrj
HJylzgCj
review
1,473,278,438,816
Sy9g8Lrj
[ "everyone" ]
[ "~Jan_C_van_Gemert1" ]
ECCV2016.org/BNMW
2016
title: Review rating: 7: Good paper, accept review: 1. Paper Summary. The paper proposes to base action recognition on local parts: bounding boxes for appearance and a tube for motion. The EdgeBoxes object proposals are used for static images. For the motion part, the paper does not rely on the methods available in the literature and makes its own tubes. Local parts are aggregated in a MIL (Multiple Instance Learning) framework. Results on UCF101 and HMDB51 are good. 2. Paper Strengths. + Locality in action recognition + Good results 3. Paper Weaknesses. - The paper is not aware of existing methods for video tubes. 4. Preliminary Rating. Poster 5. Preliminary Evaluation. Detailed comments: Clarity: Abstract. It is a good observation that actions are local. But, so are objects in static images. The claim that "deep network approaches that address the entire image are at a disadvantage" is a bit strange, since this does not seem to be a disadvantage for object recognition in static images. Literature: Introduction. "which we call video tubes". There is much related work on video/action tubes that is not cited. Examples include: [a]- (My own work) Van Gemert et al., BMVC 15 "APT: Action localization Proposals from dense Trajectories" [b]- (My own work) Jain et al. CVPR 14, "Action localization with tubelets from motion" [c]- Weinzaepfel et al., ICCV 2015, "Learning to track for spatio-temporal action localization". [d]- D. Oneata, et al., ECCV 14. "Spatio-temporal object detection proposals" [e]- Yu and Yuan, CVPR 15 "Fast action proposals for human action detection and search" [f]- G. Gkioxari and J. Malik. Finding action tubes. In CVPR, 2015 Spelling: "We combine ML" probably it is meant: "We combine MIL" Clarity: Section 2.1, What is "the aggregation function g()"? This function is not explained. Scope: Section 2.2, 'video tubes', I would recommend using a standard method from the missing refs [a,b,c,d,e,f] for video tubes. They fulfil a similar role to the one EdgeBoxes plays for static images. confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct
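Regarding the unexplained aggregation function g(), a typical MIL aggregation over per-ROI class scores could look like the following; max pooling and log-sum-exp pooling are common guesses at the missing detail, not the paper's actual definition.

import torch

def mil_aggregate(instance_scores, mode='max'):
    # instance_scores: (num_rois, num_classes) class scores for the ROIs of one
    # video; returns a single (num_classes,) bag-level score vector.
    if mode == 'max':        # the most discriminative ROI determines the bag score
        return instance_scores.max(dim=0).values
    if mode == 'lse':        # smoother soft-max pooling over instances
        n = torch.tensor(float(instance_scores.shape[0]))
        return torch.logsumexp(instance_scores, dim=0) - torch.log(n)
    raise ValueError(mode)

print(mil_aggregate(torch.randn(20, 101)).shape)   # 20 EdgeBox ROIs, 101 classes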
Sy9g8Lrj
Mining Spatial and Spatio-Temporal ROIs for Action Recognition
[ "Xiaochen Lian", "Zhuoyuan Chen", "Yi Yang", "Jiang Wang", "Alan Yuille" ]
In this paper, we propose an approach to classify action sequences. We observe that in action sequences the critical features for discriminating between actions occur only within sub-regions of the image. Hence deep network approaches that address the entire image are at a disadvantage. This motivates our strategy which uses static and spatio-temporal visual cues to isolate static and spatio-temporal regions of interest (ROIs). We then use weakly supervised learning to train deep network classifiers using the ROIs as input. More specifically, we combine multiple instance learning (MIL) with convolutional neural networks (CNNs) to select discriminative action cues. This yields classifiers for static images, using the static ROIs, as well as classifiers for short image sequences (16 frames), using spatio-temporal ROIs. Extensive experiments performed on the UCF101 and HMDB51 benchmarks show that both these types of classifiers perform well individually and achieve state of the art performance when combined together.
[ "rois", "classifiers", "action sequences", "static", "spatial", "action recognition", "critical features", "actions", "image" ]
https://openreview.net/pdf?id=Sy9g8Lrj
https://openreview.net/forum?id=Sy9g8Lrj
rysoeZTs
review
1,473,216,674,744
Sy9g8Lrj
[ "everyone" ]
[ "~Basura_Fernando1" ]
ECCV2016.org/BNMW
2016
title: Review for Mining Spatial and Spatio-Temporal ROIs for Action Recognition rating: 10: Top 5% of accepted papers, seminal paper review: The paper investigates the importance of ROIs for action recognition in the spatial and spatio-temporal domains. Spatial ROIs are constructed with EdgeBoxes. Similarly, spatio-temporal ROIs are created using motion boundary images and edge boxes. Then multiple instance learning is used to train a two-stream CNN over action tubes and ROI-pooled images. The idea of using MIL is interesting, as it is hard to pre-determine which of the ROIs are discriminative. I think the overall idea of the paper is good. I have some comments: 1. Maybe the authors could also use optical flow to obtain spatio-temporal action tubes using similar techniques. 2. The paper should have presented experimental evidence to demonstrate the effectiveness of ROIs in a controlled experimental setting. Maybe use the same two-stream architecture (1) without ROIs, (2) with ROIs only in the temporal stream, (3) with ROIs only in the RGB stream, and (4) with ROIs in both, and perform controlled experiments to see the impact of the ROIs. 3. Also evaluate the impact of different multiple instance learning strategies. The obtained results are encouraging and interesting. confidence: 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
Sy9g8Lrj
Mining Spatial and Spatio-Temporal ROIs for Action Recognition
[ "Xiaochen Lian", "Zhuoyuan Chen", "Yi Yang", "Jiang Wang", "Alan Yuille" ]
In this paper, we propose an approach to classify action sequences. We observe that in action sequences the critical features for discriminating between actions occur only within sub-regions of the image. Hence deep network approaches that address the entire image are at a disadvantage. This motivates our strategy which uses static and spatio-temporal visual cues to isolate static and spatio-temporal regions of interest (ROIs). We then use weakly supervised learning to train deep network classifiers using the ROIs as input. More specifically, we combine multiple instance learning (MIL) with convolutional neural networks (CNNs) to select discriminative action cues. This yields classifiers for static images, using the static ROIs, as well as classifiers for short image sequences (16 frames), using spatio-temporal ROIs. Extensive experiments performed on the UCF101 and HMDB51 benchmarks show that both these types of classifiers perform well individually and achieve state of the art performance when combined together.
[ "rois", "classifiers", "action sequences", "static", "spatial", "action recognition", "critical features", "actions", "image" ]
https://openreview.net/pdf?id=Sy9g8Lrj
https://openreview.net/forum?id=Sy9g8Lrj
HyB9Rk1n
review
1,473,343,116,860
Sy9g8Lrj
[ "everyone" ]
[ "(anonymous)" ]
ECCV2016.org/BNMW
2016
title: Review for Mining Spatial and Spatio-Temporal ROIs for Action Recognition rating: 7: Good paper, accept review: The paper proposes a method for action recognition using CNN features and multiple instance learning. To this end, they use appearance and motion as two types of information in two separate streams. In each stream, features are extracted locally and then aggregated in deeper layers to predict the action label. As a side product, they generate the most discriminative bounding boxes for each video. The idea of MIL in combination with a two-stream network is interesting. I have some comments: - I would like to inform the authors about the work of [a] G. Gkioxari and J. Malik, Finding Action Tubes, CVPR'15. The way the paper extracts video tubes is closely related to [a]. However, [a] evaluates their method in a supervised setting for action detection. - The aggregation layer (i.e., the function g(.)) is not well defined. - A limitation of the "video tube" is that its length has to be equal to the video length. It may be useful to have variable-length video tubes that cover part of the time axis. Also, while the network architecture needs exactly K video tubes, there is no guarantee that the proposed method produces at least K tubes. - I could not find how you combine the scores of the two branches. Is it just a summation of the action scores after the softmax function? What about using an early-fusion strategy and combining them in the aggregation layer? - Why does the second stream use tubes + 3D convolution rather than optical flow + 2D convolution? It would be nice to have a justification/experiment for this. - It would be nice to evaluate the localization performance of your method and compare it to supervised (or other semi-supervised) methods (for example on the J-HMDB dataset). However, you would need to find a way to combine the boxes from each stream. confidence: 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
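On the question of how the two branches are combined, the plain late-fusion baseline assumed in the comment above (the paper does not specify it) would simply average the per-class probabilities of the two streams:

import torch
import torch.nn.functional as F

def late_fusion(spatial_logits, temporal_logits, w=0.5):
    # spatial_logits, temporal_logits: (num_classes,) raw scores from the
    # appearance and motion branches; w is a placeholder mixing weight.
    return w * F.softmax(spatial_logits, dim=0) + (1 - w) * F.softmax(temporal_logits, dim=0)

fused = late_fusion(torch.randn(101), torch.randn(101))
print(int(fused.argmax()), '<- predicted action class')

An early-fusion alternative would instead concatenate the two feature sets before the aggregation layer and learn the combination, which is the option the comment asks about.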
Sy9g8Lrj
Mining Spatial and Spatio-Temporal ROIs for Action Recognition
[ "Xiaochen Lian", "Zhuoyuan Chen", "Yi Yang", "Jiang Wang", "Alan Yuille" ]
In this paper, we propose an approach to classify action sequences. We observe that in action sequences the critical features for discriminating between actions occur only within sub-regions of the image. Hence deep network approaches that address the entire image are at a disadvantage. This motivates our strategy which uses static and spatio-temporal visual cues to isolate static and spatio-temporal regions of interest (ROIs). We then use weakly supervised learning to train deep network classifiers using the ROIs as input. More specifically, we combine multiple instance learning (MIL) with convolutional neural networks (CNNs) to select discriminative action cues. This yields classifiers for static images, using the static ROIs, as well as classifiers for short image sequences (16 frames), using spatio-temporal ROIs. Extensive experiments performed on the UCF101 and HMDB51 benchmarks show that both these types of classifiers perform well individually and achieve state of the art performance when combined together.
[ "rois", "classifiers", "action sequences", "static", "spatial", "action recognition", "critical features", "actions", "image" ]
https://openreview.net/pdf?id=Sy9g8Lrj
https://openreview.net/forum?id=Sy9g8Lrj
B1KJkxJ2
review
1,473,343,201,365
Sy9g8Lrj
[ "everyone" ]
[ "(anonymous)" ]
ECCV2016.org/BNMW
2016
title: Review rating: 7: Good paper, accept review: --- Paper summary: --- This work proposes to perform action recognition using the most discriminative local video region (or frame), rather than using the whole video. First, local object proposals and video tubes are extracted and corresponding features are computed. Then, the most discriminative region is recovered with a weakly-supervised MIL approach, while jointly training the action classifier. Experimental results are competitive with current approaches such as two-stream networks. --- Paper strengths: --- + The authors propose to focus only on the most discriminative foreground region of a video for action recognition, which is intuitive and follows some related works, e.g. (Bhattacharya, ICMR 2014). + Experimental results are encouraging. Especially on HMDB51, results are good compared to two-stream networks. Is this because HMDB51 has noisier backgrounds, which is problematic for other works? --- Paper weaknesses: --- - The authors propose their own video tubes, while there is a whole body of literature on this topic. Why did the authors opt for this approach? Some missed refs: -- Action localization with tubelets from motion, Jain et al. CVPR 2014. -- Finding action tubes, Gkioxari et al. CVPR 2015. -- Fast action proposals for human action detection and search, Yu et al. CVPR 2015. -- Spatio-temporal object detection proposals, Oneata et al. ECCV 2014. This similarly holds for works that aim for action recognition using the most discriminative subregion(s), e.g.: -- Representing videos using mid-level discriminative patches, Jain et al. CVPR 2013. -- Learning discriminative space–time action parts from weakly labelled videos, Sapienza et al. IJCV 2014. - Some details and choices are missing: -- In the spatial MIL, each frame is an instance. However, up to 20 edgeboxes are extracted per frame. Does this mean that each box is an instance, or are the boxes aggregated into frame-level features? -- How is the MIL implemented? Is it trained end-to-end in a network, or done separately, e.g. through MIL-SVM? What are the main parameters? This is missing from the MIL subsection on page 2. -- Why are the video tubes 16 frames? It seems inspired by trajectories, but trajectories are meant to be local evidence, not to represent a complete action. confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct
HyKlCR4j
Autonomous driving challenge: To Infer the property of a dynamic object based on its motion pattern
[ "Mona Fathollahi", "Rangachar Kasturi" ]
In autonomous driving applications a critical challenge is to identify the action to take to avoid an obstacle on a collision course. For example, when a heavy object is suddenly encountered it is critical to stop the vehicle or change the lane even if it causes other traffic disruptions. However, there are situations when it is preferable to collide with the object rather than take an action that would result in a much more serious accident than collision with the object. For example, a heavy object which falls from a truck should be avoided whereas a bouncing ball or a soft target such as a foam box need not be. We present a novel method to discriminate between the motion characteristics of these types of objects based on their physical properties such as bounciness, elasticity, etc. In this preliminary work, we use a recurrent neural network with LSTM cells to train a classifier to classify objects based on their motion trajectories. We test the algorithm on synthetic data, and, as a proof of concept, demonstrate its effectiveness on a limited set of real-world data.
[ "challenge", "property", "dynamicobject", "action", "example", "heavy object", "object", "objects", "autonomous", "motion pattern autonomous" ]
https://openreview.net/pdf?id=HyKlCR4j
https://openreview.net/forum?id=HyKlCR4j
Hk-Vq03j
review
1,473,206,825,330
HyKlCR4j
[ "everyone" ]
[ "~Basura_Fernando1" ]
ECCV2016.org/BNMW
2016
title: Review for Autonomous driving challenge: To Infer the property of a dynamic object based on its motion pattern rating: 10: Top 5% of accepted papers, seminal paper review: The paper investigates how to identify the action that needs to be taken to avoid an obstacle on a collision course in autonomous driving. This is a very important and interesting problem. When an obstacle is detected in the planned path, either the planned route should be modified or the vehicle should come to a stop. Depending on the traffic situation and the vehicle speed, this policy could cause collisions with other vehicles. Therefore, obstacle avoidance may not always be the safest action. The paper proposes to use a classifier to infer an object’s bounciness characteristic based on its trajectory when it hits the ground. The approach is based on the observation that the bouncing pattern of objects is directly affected by their mass. The paper collects synthetic data using Blender and simulates the trajectory patterns of heavy and light objects. In this way the paper generates video data and trains a binary classifier using an RNN/LSTM to classify the video sequences. This is a very interesting idea which combines the physics of real-world objects with computer vision. Clearly, this work is novel and has much potential for extension. It would be interesting to see whether these simulations can somehow augment real data. I think one of the real strengths of this work lies in the ability to generate video data using a physics simulator and to use CNNs/RNNs/LSTMs to learn the physics of the real world. This is a very cool idea. confidence: 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
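To make the trajectory-classification setup concrete, here is a minimal LSTM classifier over (x, y) trajectories; the hidden size, the use of the final hidden state and the two-class output head are assumptions for illustration, not details taken from the paper.

import torch
import torch.nn as nn

class TrajectoryClassifier(nn.Module):
    # Binary classifier (heavy vs. light object) over a bouncing trajectory
    # given as a sequence of 2-D positions.
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)

    def forward(self, traj):               # traj: (batch, time, 2)
        _, (h_n, _) = self.lstm(traj)      # h_n: (1, batch, hidden)
        return self.head(h_n.squeeze(0))   # (batch, 2) class logits

model = TrajectoryClassifier()
logits = model(torch.randn(4, 120, 2))     # 4 synthetic trajectories, 120 steps each
print(logits.shape)                        # torch.Size([4, 2])

Trained with a standard cross-entropy loss on the simulated Blender trajectories, this is roughly the kind of pipeline the review describes.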
H1q36jBj
Do Motion Boundaries Improve Semantic Segmentation?
[ "Yu-Hui Huang", "Jose Oramas", "Tinne Tuytelaars", "Luc Van Gool" ]
Precise localization is crucial to many computer vision tasks. Optical flow can help by providing motion boundaries which can serve as proxy for object boundaries. This paper investigates how useful these motion boundaries are in improving semantic segmentation. As there is no dataset readily available for this task, we compute the motion boundary maps with a pre-trained model from Weinzaepfel et al. (CVPR 2015) on the CamVid dataset. With these motion boundary maps and the corresponding RGB images, we train a convolutional neural network end-to-end, for the task of semantic segmentation. The experimental results show that the network has learned to incorporate the motion boundaries and that these improve the object localization.
[ "motion boundaries", "semantic segmentation", "task", "motion boundary maps", "precise localization", "crucial", "optical flow", "proxy", "object boundaries" ]
https://openreview.net/pdf?id=H1q36jBj
https://openreview.net/forum?id=H1q36jBj
S195ZlCj
review
1,473,278,354,010
H1q36jBj
[ "everyone" ]
[ "~Jan_C_van_Gemert1" ]
ECCV2016.org/BNMW
2016
title: Review rating: 7: Good paper, accept review: 1. Paper Summary. The paper proposes to add motion boundaries as extra information for semantic segmentation. Motion boundaries from the method of [17] (trained on a different dataset) are added to the input frames. A SegNet [1] is trained on this representation. Experiments show that motion boundaries improve segmentation results. 2. Paper Strengths. + thoroughly evaluated + Results improve + Good analysis 3. Paper Weaknesses. - only applicable to semantic segmentation in video 4. Preliminary Rating. Poster confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct
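To illustrate how a motion-boundary map can be fed to the network alongside RGB, here is a minimal sketch of channel concatenation with a widened first convolution; whether the paper fuses the two inputs exactly this way is an assumption on my part.

import torch
import torch.nn as nn

rgb = torch.rand(1, 3, 360, 480)    # CamVid-sized frame
mb = torch.rand(1, 1, 360, 480)     # motion-boundary probability map from [17]

x = torch.cat([rgb, mb], dim=1)     # 4-channel input

# First encoder layer widened from 3 to 4 input channels; the rest of the
# segmentation network (SegNet in the paper) would be left unchanged.
first_conv = nn.Conv2d(in_channels=4, out_channels=64, kernel_size=3, padding=1)
print(first_conv(x).shape)          # torch.Size([1, 64, 360, 480])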