title | categories | abstract | authors | doi | id | year | venue
---|---|---|---|---|---|---|---|
MatConvNet - Convolutional Neural Networks for MATLAB | cs.CV cs.LG cs.MS cs.NE | MatConvNet is an implementation of Convolutional Neural Networks (CNNs) for
MATLAB. The toolbox is designed with an emphasis on simplicity and flexibility.
It exposes the building blocks of CNNs as easy-to-use MATLAB functions,
providing routines for computing linear convolutions with filter banks, feature
pooling, and many more. In this manner, MatConvNet allows fast prototyping of
new CNN architectures; at the same time, it supports efficient computation on
CPU and GPU, allowing complex models to be trained on large datasets such as
ImageNet ILSVRC. This document provides an overview of CNNs and how they are implemented
in MatConvNet and gives the technical details of each computational block in
the toolbox.
| Andrea Vedaldi, Karel Lenc | null | 1412.4564 | null | null |
Finding a sparse vector in a subspace: Linear sparsity using alternating
directions | cs.IT cs.CV cs.LG math.IT math.OC stat.ML | Is it possible to find the sparsest vector (direction) in a generic subspace
$\mathcal{S} \subseteq \mathbb{R}^p$ with $\mathrm{dim}(\mathcal{S})= n < p$?
This problem can be considered a homogeneous variant of the sparse recovery
problem, and finds connections to sparse dictionary learning, sparse PCA, and
many other problems in signal processing and machine learning. In this paper,
we focus on a **planted sparse model** for the subspace: the target sparse
vector is embedded in an otherwise random subspace. Simple convex heuristics
for this planted recovery problem provably break down when the fraction of
nonzero entries in the target sparse vector substantially exceeds
$O(1/\sqrt{n})$. In contrast, we exhibit a relatively simple nonconvex approach
based on alternating directions, which provably succeeds even when the fraction
of nonzero entries is $\Omega(1)$. To the best of our knowledge, this is the
first practical algorithm to achieve linear scaling under the planted sparse
model. Empirically, our proposed algorithm also succeeds in more challenging
data models, e.g., sparse dictionary learning.
| Qing Qu, Ju Sun, John Wright | 10.1109/TIT.2016.2601599 | 1412.4659 | null | null |
On the Inductive Bias of Dropout | cs.LG cs.AI cs.NE math.ST stat.ML stat.TH | Dropout is a simple but effective technique for learning in neural networks
and other settings. A sound theoretical understanding of dropout is needed to
determine when dropout should be applied and how to use it most effectively. In
this paper we continue the exploration of dropout as a regularizer pioneered by
Wager et al. We focus on linear classification where a convex proxy to the
misclassification loss (i.e. the logistic loss used in logistic regression) is
minimized. We show: (a) when the dropout-regularized criterion has a unique
minimizer, (b) when the dropout-regularization penalty goes to infinity with
the weights, and when it remains bounded, (c) that the dropout regularization
can be non-monotonic as individual weights increase from 0, and (d) that the
dropout regularization penalty may not be convex. This last point is
particularly surprising because the combination of dropout regularization with
any convex loss proxy is always a convex function.
In order to contrast dropout regularization with $L_2$ regularization, we
formalize the notion of when different sources are more compatible with
different regularizers. We then exhibit distributions that are provably more
compatible with dropout regularization than $L_2$ regularization, and vice
versa. These sources provide additional insight into how the inductive biases
of dropout and $L_2$ regularization differ. We provide some similar results for
$L_1$ regularization.
| David P. Helmbold and Philip M. Long | null | 1412.4736 | null | null |
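For the linear/logistic setting analyzed in the abstract above, the dropout-regularization penalty can be written as the gap between the expected logistic loss on dropout-perturbed (and inversely rescaled) inputs and the loss on the clean input. The NumPy sketch below estimates that gap by Monte Carlo for one example; the weights, input, and dropout rate are illustrative values of my own, not anything taken from the paper.

```python
import numpy as np

def logistic_loss(w, x, y):
    """Logistic loss log(1 + exp(-y * <w, x>)) for a single example, y in {-1, +1}."""
    return np.logaddexp(0.0, -y * np.dot(w, x))

def dropout_penalty(w, x, y, p_keep=0.5, n_samples=200_000, seed=0):
    """Monte Carlo estimate of the dropout regularization penalty:
    E[loss on dropout-perturbed input] - loss on the clean input.
    Dropped coordinates are zeroed; kept ones are rescaled by 1/p_keep,
    so the perturbed input is an unbiased version of x."""
    rng = np.random.default_rng(seed)
    masks = rng.random((n_samples, x.size)) < p_keep
    x_tilde = masks * x / p_keep
    expected_loss = np.mean(np.logaddexp(0.0, -y * (x_tilde @ w)))
    return expected_loss - logistic_loss(w, x, y)

w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, 0.8, -1.0])
# The penalty is non-negative: the loss is convex in the margin and the
# perturbed input is unbiased, so Jensen's inequality applies.
print(dropout_penalty(w, x, y=+1))
```

The non-negativity observed here is also why the dropout-regularized criterion as a whole stays convex even when the penalty itself is not.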
Max-Margin based Discriminative Feature Learning | cs.LG | In this paper, we propose a new max-margin based discriminative feature
learning method. Specifically, we aim at learning a low-dimensional feature
representation, so as to maximize the global margin of the data and make the
samples from the same class as close as possible. To enhance robustness to
noise, an $l_{2,1}$-norm constraint is introduced to enforce group sparsity on
the transformation matrix. In addition, for multi-class classification tasks,
we further learn and leverage the correlations among the class-specific tasks
to assist in learning discriminative features. The experimental results
demonstrate the power of the
proposed method against the related state-of-the-art methods.
| Changsheng Li and Qingshan Liu and Weishan Dong and Xin Zhang and Lin
Yang | null | 1412.4863 | null | null |
Learning with Pseudo-Ensembles | stat.ML cs.LG cs.NE | We formalize the notion of a pseudo-ensemble, a (possibly infinite)
collection of child models spawned from a parent model by perturbing it
according to some noise process. E.g., dropout (Hinton et al., 2012) in a deep
neural network trains a pseudo-ensemble of child subnetworks generated by
randomly masking nodes in the parent network. We present a novel regularizer
based on making the behavior of a pseudo-ensemble robust with respect to the
noise process generating it. In the fully-supervised setting, our regularizer
matches the performance of dropout. But, unlike dropout, our regularizer
naturally extends to the semi-supervised setting, where it produces
state-of-the-art results. We provide a case study in which we transform the
Recursive Neural Tensor Network of (Socher et al., 2013) into a
pseudo-ensemble, which significantly improves its performance on a real-world
sentiment analysis benchmark.
| Philip Bachman and Ouais Alsharif and Doina Precup | null | 1412.4864 | null | null |
A Scalable Asynchronous Distributed Algorithm for Topic Modeling | cs.DC cs.IR cs.LG | Learning meaningful topic models with massive document collections which
contain millions of documents and billions of tokens is challenging for two
reasons: First, one needs to deal with a large number of topics (typically
in the order of thousands). Second, one needs a scalable and efficient way of
distributing the computation across multiple machines. In this paper we present
a novel algorithm F+Nomad LDA which simultaneously tackles both these problems.
To handle a large number of topics we use an appropriately modified
Fenwick tree. This data structure allows us to sample from a multinomial
distribution over $T$ items in $O(\log T)$ time. Moreover, when topic counts
change, the data structure can be updated in $O(\log T)$ time. To distribute
the computation across multiple processors we present a novel asynchronous
framework inspired by the Nomad algorithm of \cite{YunYuHsietal13}. We show
that F+Nomad LDA significantly outperforms the state of the art on massive
problems which involve millions of documents,
billions of words, and thousands of topics.
| Hsiang-Fu Yu and Cho-Jui Hsieh and Hyokun Yun and S.V.N Vishwanathan
and Inderjit S. Dhillon | null | 1412.4986 | null | null |
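The $O(\log T)$ sampling and updating claimed in the abstract above rests on a Fenwick (binary indexed) tree maintained over the unnormalized topic weights. The sketch below is a generic, self-contained illustration of that data structure; the class and variable names are my own, and it is not the F+Nomad LDA implementation.

```python
import random

class FenwickSampler:
    """Fenwick tree over unnormalized non-negative weights.
    Updating one weight and drawing a sample each take O(log T) time."""
    def __init__(self, weights):
        self.n = len(weights)
        self.tree = [0.0] * (self.n + 1)      # 1-indexed partial sums
        self.weights = [0.0] * self.n
        for i, w in enumerate(weights):
            self.update(i, w)

    def update(self, i, new_weight):
        """Set the weight of item i to new_weight (O(log T))."""
        delta = new_weight - self.weights[i]
        self.weights[i] = new_weight
        j = i + 1
        while j <= self.n:
            self.tree[j] += delta
            j += j & (-j)

    def total(self):
        s, j = 0.0, self.n
        while j > 0:
            s += self.tree[j]
            j -= j & (-j)
        return s

    def sample(self):
        """Draw an index with probability proportional to its weight (O(log T))."""
        u = random.random() * self.total()
        idx = 0
        bitmask = 1 << self.n.bit_length()
        while bitmask > 0:
            nxt = idx + bitmask
            if nxt <= self.n and self.tree[nxt] < u:
                u -= self.tree[nxt]
                idx = nxt
            bitmask >>= 1
        return idx    # 0-based index of the sampled item

sampler = FenwickSampler([0.1, 0.4, 0.2, 0.3])
counts = [0, 0, 0, 0]
for _ in range(10000):
    counts[sampler.sample()] += 1
print(counts)    # roughly proportional to the weights
```

Drawing a sample walks down the implicit tree with at most one comparison per bit of T, which is where the logarithmic cost comes from.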
Towards Deep Neural Network Architectures Robust to Adversarial Examples | cs.LG cs.CV cs.NE | Recent work has shown deep neural networks (DNNs) to be highly susceptible to
well-designed, small perturbations at the input layer, or so-called adversarial
examples. Taking images as an example, such distortions are often
imperceptible, but can result in 100% misclassification for a state-of-the-art
DNN. We study the structure of adversarial examples and explore network
topology, pre-processing and training strategies to improve the robustness of
DNNs. We perform various experiments to assess the removability of adversarial
examples by corrupting with additional noise and pre-processing with denoising
autoencoders (DAEs). We find that DAEs can remove substantial amounts of the
adversarial noise. However, when stacking the DAE with the original DNN, the
resulting network can again be attacked by new adversarial examples with even
smaller distortion. As a solution, we propose Deep Contractive Network, a model
with a new end-to-end training procedure that includes a smoothness penalty
inspired by the contractive autoencoder (CAE). This increases the network
robustness to adversarial examples, without a significant performance penalty.
| Shixiang Gu, Luca Rigazio | null | 1412.5068 | null | null |
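The smoothness penalty mentioned in the abstract above is borrowed from the contractive autoencoder: penalize the squared Frobenius norm of the Jacobian of a layer's outputs with respect to its inputs. For a sigmoid layer this norm has a cheap closed form, shown in the NumPy sketch below; treating it as a per-layer term added to the task loss is my own simplified reading, not the paper's exact end-to-end training procedure.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def contractive_penalty(W, b, X):
    """Squared Frobenius norm of the Jacobian dh/dx for a sigmoid layer
    h = sigmoid(W x + b), averaged over a batch X of shape (n, d_in).
    For the sigmoid, J_ji = h_j (1 - h_j) W_ji, so
    ||J||_F^2 = sum_j (h_j (1 - h_j))^2 * ||W_j||^2."""
    H = sigmoid(X @ W.T + b)                  # (n, d_hidden)
    row_norms_sq = np.sum(W ** 2, axis=1)     # ||W_j||^2 for each hidden unit
    per_example = ((H * (1.0 - H)) ** 2) @ row_norms_sq
    return per_example.mean()

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(16, 8))       # 8 inputs -> 16 hidden units
b = np.zeros(16)
X = rng.normal(size=(32, 8))

# A full training objective would be: task_loss + lam * contractive_penalty
lam = 0.1
print(lam * contractive_penalty(W, b, X))
```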
Random Forests Can Hash | cs.CV cs.IR cs.LG stat.ML | Hash codes are a very efficient data representation needed to be able to cope
with the ever growing amounts of data. We introduce a random forest semantic
hashing scheme with information-theoretic code aggregation, showing for the
first time how random forest, a technique that together with deep learning has
shown spectacular results in classification, can also be extended to
large-scale retrieval. Traditional random forest fails to enforce the
consistency of hashes generated from each tree for the same class data, i.e.,
to preserve the underlying similarity, and it also lacks a principled way for
code aggregation across trees. We start with a simple hashing scheme, where
independently trained random trees in a forest are acting as hashing functions.
We then propose a subspace model as the splitting function, and show that it
enforces the hash consistency in a tree for data from the same class. We also
introduce an information-theoretic approach for aggregating codes of individual
trees into a single hash code, producing a near-optimal unique hash for each
class. Experiments on large-scale public datasets are presented, showing that
the proposed approach significantly outperforms state-of-the-art hashing
methods for retrieval tasks.
| Qiang Qiu, Guillermo Sapiro, Alex Bronstein | null | 1412.5083 | null | null |
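In the simplest scheme mentioned above, each independently trained tree acts as a hash function: the index of the leaf a sample lands in is its code for that tree. The sketch below uses scikit-learn's `RandomForestClassifier.apply`, which returns exactly those leaf indices; it illustrates only this baseline scheme, not the subspace splitting function or the information-theoretic code aggregation.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier

X, y = load_digits(return_X_y=True)
forest = RandomForestClassifier(n_estimators=8, max_depth=6, random_state=0)
forest.fit(X, y)

# apply() returns, for every sample, the leaf index it reaches in each tree:
# shape (n_samples, n_trees). Each row is the sample's per-tree hash code.
leaf_codes = forest.apply(X)
print(leaf_codes.shape)    # (1797, 8)
print(leaf_codes[0])       # one leaf index per tree

# Samples of the same class tend to share more leaves (hash collisions) than
# samples of different classes; enforcing this is what the paper is about.
print("same class :", np.sum(leaf_codes[0] == leaf_codes[10]),
      "of", forest.n_estimators, "trees agree")
print("diff class :", np.sum(leaf_codes[0] == leaf_codes[1]),
      "of", forest.n_estimators, "trees agree")
```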
Locally Scale-Invariant Convolutional Neural Networks | cs.CV cs.LG cs.NE | Convolutional Neural Networks (ConvNets) have shown excellent results on many
visual classification tasks. With the exception of ImageNet, these datasets are
carefully crafted such that objects are well-aligned at similar scales.
Naturally, the feature learning problem gets more challenging as the amount of
variation in the data increases, as the models have to learn to be invariant to
certain changes in appearance. Recent results on the ImageNet dataset show that
given enough data, ConvNets can learn such invariances producing very
discriminative features [1]. But could we do more: use fewer parameters and less
data, and learn more discriminative features, if certain invariances were built
into the learning process? In this paper we present a simple model that allows
ConvNets to learn features in a locally scale-invariant manner without
increasing the number of model parameters. We show on a modified MNIST dataset
that when faced with scale variation, building in scale-invariance allows
ConvNets to learn more discriminative features with reduced chances of
over-fitting.
| Angjoo Kanazawa, Abhishek Sharma, David Jacobs | null | 1412.5104 | null | null |
Testing MCMC code | cs.SE cs.LG stat.ML | Markov Chain Monte Carlo (MCMC) algorithms are a workhorse of probabilistic
modeling and inference, but are difficult to debug, and are prone to silent
failure if implemented naively. We outline several strategies for testing the
correctness of MCMC algorithms. Specifically, we advocate writing code in a
modular way, where conditional probability calculations are kept separate from
the logic of the sampler. We discuss strategies for both unit testing and
integration testing. As a running example, we show how a Python implementation
of Gibbs sampling for a mixture of Gaussians model can be tested.
| Roger B. Grosse and David K. Duvenaud | null | 1412.5218 | null | null |
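One of the unit-testing strategies advocated above is to keep conditional-probability calculations separate from sampler logic and check each conditional against the joint: for a discrete variable, the Gibbs conditional must be proportional to the joint density evaluated at each of its values. The sketch below applies that check to the cluster-assignment conditional of a toy two-component Gaussian mixture; it illustrates the testing idea and is not code from the paper.

```python
import numpy as np
from scipy.stats import norm

# Toy model: fixed mixture weights/means/stds, latent assignment z per observation.
weights = np.array([0.3, 0.7])
means = np.array([-2.0, 3.0])
stds = np.array([1.0, 0.5])

def log_joint(x, z):
    """log p(x, z) for one observation under the toy mixture."""
    return np.log(weights[z]) + norm.logpdf(x, means[z], stds[z])

def gibbs_conditional(x):
    """p(z | x), the distribution a Gibbs sampler would sample from.
    Kept separate from the sampler logic so it can be unit tested."""
    logp = np.array([log_joint(x, z) for z in range(2)])
    p = np.exp(logp - logp.max())
    return p / p.sum()

def test_conditional_matches_joint(x=1.3):
    """The conditional must be proportional to the joint over all values of z."""
    cond = gibbs_conditional(x)
    joint = np.exp([log_joint(x, z) for z in range(2)])
    np.testing.assert_allclose(cond, joint / joint.sum(), rtol=1e-10)
    np.testing.assert_allclose(cond.sum(), 1.0, rtol=1e-12)

test_conditional_matches_joint()
print("conditional/joint consistency check passed")
```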
The supervised hierarchical Dirichlet process | stat.ML cs.LG | We propose the supervised hierarchical Dirichlet process (sHDP), a
nonparametric generative model for the joint distribution of a group of
observations and a response variable directly associated with that whole group.
We compare the sHDP with another leading method for regression on grouped data,
the supervised latent Dirichlet allocation (sLDA) model. We evaluate our method
on two real-world classification problems and two real-world regression
problems. Bayesian nonparametric regression models based on the Dirichlet
process, such as the Dirichlet process-generalised linear models (DP-GLM) have
previously been explored; these models allow flexibility in modelling nonlinear
relationships. However, until now, Hierarchical Dirichlet Process (HDP)
mixtures have not seen significant use in supervised problems with grouped data
since a straightforward application of the HDP on the grouped data results in
learnt clusters that are not predictive of the responses. The sHDP solves this
problem by allowing for clusters to be learnt jointly from the group structure
and from the label assigned to each group.
| Andrew M. Dai, Amos J. Storkey | 10.1109/TPAMI.2014.2315802 | 1412.5236 | null | null |
Learning unbiased features | cs.LG cs.AI cs.NE stat.ML | A key element in transfer learning is representation learning; if
representations can be developed that expose the relevant factors underlying
the data, then new tasks and domains can be learned readily based on mappings
of these salient factors. We propose that an important aim for these
representations is to be unbiased. Different forms of representation learning
can be derived from alternative definitions of unwanted bias, e.g., bias to
particular tasks, domains, or irrelevant underlying data dimensions. One very
useful approach to estimating the amount of bias in a representation comes from
maximum mean discrepancy (MMD) [5], a measure of distance between probability
distributions. We are not the first to suggest that MMD can be a useful
criterion in developing representations that apply across multiple domains or
tasks [1]. However, in this paper we describe a number of novel applications of
this criterion that we have devised, all based on the idea of developing
unbiased representations. These formulations include: a standard domain
adaptation framework; a method of learning invariant representations; an
approach based on noise-insensitive autoencoders; and a novel form of
generative model.
| Yujia Li, Kevin Swersky, Richard Zemel | null | 1412.5244 | null | null |
Consistency Analysis of an Empirical Minimum Error Entropy Algorithm | cs.LG stat.ML | In this paper we study the consistency of an empirical minimum error entropy
(MEE) algorithm in a regression setting. We introduce two types of consistency.
The error entropy consistency, which requires the error entropy of the learned
function to approximate the minimum error entropy, is shown to be always true
if the bandwidth parameter tends to 0 at an appropriate rate. The regression
consistency, which requires the learned function to approximate the regression
function, however, is a complicated issue. We prove that the error entropy
consistency implies the regression consistency for homoskedastic models where
the noise is independent of the input variable. But for heteroskedastic models,
a counterexample is used to show that the two types of consistency do not
coincide. A surprising result is that the regression consistency is always
true, provided that the bandwidth parameter tends to infinity at an appropriate
rate. Regression consistency of two classes of special models is shown to hold
with fixed bandwidth parameter, which further illustrates the complexity of
regression consistency of MEE. The Fourier transform plays a crucial role in
our analysis.
| Jun Fan and Ting Hu and Qiang Wu and Ding-Xuan Zhou | null | 1412.5272 | null | null |
Ensemble of Generative and Discriminative Techniques for Sentiment
Analysis of Movie Reviews | cs.CL cs.IR cs.LG cs.NE | Sentiment analysis is a common task in natural language processing that aims
to detect polarity of a text document (typically a consumer review). In the
simplest settings, we discriminate only between positive and negative
sentiment, turning the task into a standard binary classification problem. We
compare several machine learning approaches to this problem, and combine them
to achieve the best possible results. We show how to use for this task the
standard generative language models, which are slightly complementary to the
state-of-the-art techniques. We achieve strong results on a well-known dataset
of IMDB movie reviews. Our results are easily reproducible, as we also publish
the code needed to repeat the experiments. This should simplify further advances
of the state of the art, as other researchers can combine their techniques with
ours with little effort.
| Gr\'egoire Mesnil, Tomas Mikolov, Marc'Aurelio Ranzato, Yoshua Bengio | null | 1412.5335 | null | null |
Flattened Convolutional Neural Networks for Feedforward Acceleration | cs.NE cs.LG | We present flattened convolutional neural networks that are designed for fast
feedforward execution. The redundancy of the parameters, especially weights of
the convolutional filters in convolutional neural networks has been extensively
studied and different heuristics have been proposed to construct a low rank
basis of the filters after training. In this work, we train flattened networks
that consist of a consecutive sequence of one-dimensional filters across all
directions in 3D space to obtain performance comparable to conventional
convolutional networks. We tested the flattened model on different datasets and
found that the flattened layer can effectively substitute for the 3D filters
without loss of accuracy. The flattened convolution pipelines provide around
two times speed-up during feedforward pass compared to the baseline model due
to the significant reduction of learning parameters. Furthermore, the proposed
method does not require efforts in manual tuning or post processing once the
model is trained.
| Jonghoon Jin, Aysegul Dundar, Eugenio Culurciello | null | 1412.5474 | null | null |
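The speed-up described above comes from the fact that a rank-1 3D filter is exactly equivalent to three one-dimensional convolutions applied in sequence along the channel, vertical, and horizontal axes. The NumPy/SciPy sketch below verifies that identity on random data; it demonstrates only the separability fact, not the paper's training of flattened networks.

```python
import numpy as np
from scipy.ndimage import convolve, convolve1d

rng = np.random.default_rng(0)
volume = rng.normal(size=(6, 32, 32))      # channels x height x width

# Rank-1 3D filter built from three 1D filters (one per axis).
fc = rng.normal(size=3)                    # across channels
fy = rng.normal(size=5)                    # vertical
fx = rng.normal(size=5)                    # horizontal
filter_3d = np.einsum('c,y,x->cyx', fc, fy, fx)

# Full 3D convolution with the rank-1 filter...
full = convolve(volume, filter_3d, mode='constant')

# ...equals three cheap 1D convolutions applied in sequence.
flat = convolve1d(volume, fc, axis=0, mode='constant')
flat = convolve1d(flat, fy, axis=1, mode='constant')
flat = convolve1d(flat, fx, axis=2, mode='constant')

print(np.allclose(full, flat))             # True: same output, far fewer multiplies
```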
Deep Speech: Scaling up end-to-end speech recognition | cs.CL cs.LG cs.NE | We present a state-of-the-art speech recognition system developed using
end-to-end deep learning. Our architecture is significantly simpler than
traditional speech systems, which rely on laboriously engineered processing
pipelines; these traditional systems also tend to perform poorly when used in
noisy environments. In contrast, our system does not need hand-designed
components to model background noise, reverberation, or speaker variation, but
instead directly learns a function that is robust to such effects. We do not
need a phoneme dictionary, nor even the concept of a "phoneme." Key to our
approach is a well-optimized RNN training system that uses multiple GPUs, as
well as a set of novel data synthesis techniques that allow us to efficiently
obtain a large amount of varied data for training. Our system, called Deep
Speech, outperforms previously published results on the widely studied
Switchboard Hub5'00, achieving 16.0% error on the full test set. Deep Speech
also handles challenging noisy environments better than widely used,
state-of-the-art commercial speech systems.
| Awni Hannun, Carl Case, Jared Casper, Bryan Catanzaro, Greg Diamos,
Erich Elsen, Ryan Prenger, Sanjeev Satheesh, Shubho Sengupta, Adam Coates and
Andrew Y. Ng | null | 1412.5567 | null | null |
Learning from Data with Heterogeneous Noise using SGD | cs.LG | We consider learning from data of variable quality that may be obtained from
different heterogeneous sources. Addressing learning from heterogeneous data in
its full generality is a challenging problem. In this paper, we adopt instead a
model in which data is observed through heterogeneous noise, where the noise
level reflects the quality of the data source. We study how to use stochastic
gradient algorithms to learn in this model. Our study is motivated by two
concrete examples where this problem arises naturally: learning with local
differential privacy based on data from multiple sources with different privacy
requirements, and learning from data with labels of variable quality.
The main contribution of this paper is to identify how heterogeneous noise
impacts performance. We show that given two datasets with heterogeneous noise,
the order in which to use them in standard SGD depends on the learning rate. We
propose a method for changing the learning rate as a function of the
heterogeneity, and prove new regret bounds for our method in two cases of
interest. Experiments on real data show that our method performs better than
using a single learning rate and using only the less noisy of the two datasets
when the noise level is low to moderate.
| Shuang Song, Kamalika Chaudhuri, Anand D. Sarwate | null | 1412.5617 | null | null |
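To make the setting above concrete, the sketch below runs plain least-squares SGD over a clean source and a noisy source with a per-source step size that shrinks with the noise variance, and compares the two presentation orders. The step-size heuristic, data, and ordering comparison are illustrative choices of my own; they are not the paper's schedule or its regret analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 2000
w_true = rng.normal(size=d)

def make_source(noise_std, n):
    X = rng.normal(size=(n, d))
    y = X @ w_true + noise_std * rng.normal(size=n)
    return X, y, noise_std

clean = make_source(0.1, n)
noisy = make_source(2.0, n)

def sgd(sources, base_lr=0.05):
    """Least-squares SGD; each source gets a step size shrunk by its noise level."""
    w = np.zeros(d)
    for X, y, noise_std in sources:
        lr = base_lr / (1.0 + noise_std ** 2)   # illustrative heuristic only
        for xi, yi in zip(X, y):
            grad = (xi @ w - yi) * xi
            w -= lr * grad
    return w

# The two orders generally give different errors, illustrating that the order
# in which heterogeneous sources are used interacts with the step size.
for order, name in [((clean, noisy), "clean first"), ((noisy, clean), "noisy first")]:
    w_hat = sgd(order)
    print(name, "error:", np.linalg.norm(w_hat - w_true))
```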
Feature extraction from complex networks: A case of study in genomic
sequences classification | cs.CE cs.LG q-bio.QM | This work presents a new approach for classification of genomic sequences
from measurements of complex networks and information theory. For this, it is
considered the nucleotides, dinucleotides and trinucleotides of a genomic
sequence. For each of them, the entropy, sum entropy and maximum entropy values
are calculated. For each of them a network is also generated, in which the nodes
are the nucleotides, dinucleotides or trinucleotides and the edges are
estimated by observing the respective adjacencies among them in the genomic
sequence. In this way, three networks are generated, from which complex network
measures are extracted. These measures, together with the information-theoretic
measures, comprise a feature vector representing a genomic sequence. The feature
vector is then used for classification by methods such as SVM, Multilayer
Perceptron, J48, IBK, Naive Bayes and Random Forest in order to evaluate the
proposed approach. Coding sequences, intergenic sequences and TSS
(Transcriptional Starter Sites) were adopted as datasets, for which the best
results were obtained by Random Forest with 91.2% accuracy, followed by J48 with
89.1% and SVM with 84.8%. These results indicate that the new feature extraction
approach has value, reaching good classification levels even when only the
genomic sequences are considered, i.e., no other a priori knowledge about them
is used.
| Bruno Mendes Moro Conque and Andr\'e Yoshiaki Kashiwabara and
Fabr\'icio Martins Lopes | null | 1412.5627 | null | null |
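A minimal sketch of the kind of feature vector described above: count the dinucleotides of a sequence, compute their Shannon entropy, and build the network whose nodes are dinucleotides and whose edges record adjacency in the sequence, from which simple graph measures are read off. The specific measures and the toy sequence below are illustrative, not the paper's exact feature set.

```python
from collections import Counter
from math import log2

def dinucleotides(seq):
    return [seq[i:i + 2] for i in range(len(seq) - 1)]

def shannon_entropy(counts):
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values() if c > 0)

def adjacency_network(tokens):
    """Edges connect consecutive tokens (here: dinucleotide -> next dinucleotide)."""
    edges = set(zip(tokens, tokens[1:]))
    degree = Counter()
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    return edges, degree

seq = "ATGCGCGTAATGCCGTATAGGCTTAACGTATGCGC"
tokens = dinucleotides(seq)
counts = Counter(tokens)
edges, degree = adjacency_network(tokens)

features = {
    "dinucleotide_entropy": shannon_entropy(counts),
    "max_entropy": log2(16),                 # 16 possible dinucleotides over {A,C,G,T}
    "n_edges": len(edges),
    "mean_degree": sum(degree.values()) / len(degree),
}
print(features)
```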
Effective sampling for large-scale automated writing evaluation systems | cs.CL cs.LG | Automated writing evaluation (AWE) has been shown to be an effective
mechanism for quickly providing feedback to students. It has already seen wide
adoption in enterprise-scale applications and is starting to be adopted in
large-scale contexts. Training an AWE model has historically required a single
batch of several hundred writing examples and human scores for each of them.
This requirement limits large-scale adoption of AWE since human scoring of
essays is costly. Here we evaluate algorithms for ensuring that AWE models are
consistently trained using the most informative essays. Our results show how to
minimize training set sizes while maximizing predictive performance, thereby
reducing cost without unduly sacrificing accuracy. We conclude with a
discussion of how to integrate this approach into large-scale AWE systems.
| Nicholas Dronen, Peter W. Foltz, Kyle Habermehl | null | 1412.5659 | null | null |
Entity-Augmented Distributional Semantics for Discourse Relations | cs.CL cs.LG | Discourse relations bind smaller linguistic elements into coherent texts.
However, automatically identifying discourse relations is difficult, because it
requires understanding the semantics of the linked sentences. A more subtle
challenge is that it is not enough to represent the meaning of each sentence of
a discourse relation, because the relation may depend on links between
lower-level elements, such as entity mentions. Our solution computes
distributional meaning representations by composition up the syntactic parse
tree. A key difference from previous work on compositional distributional
semantics is that we also compute representations for entity mentions, using a
novel downward compositional pass. Discourse relations are predicted not only
from the distributional representations of the sentences, but also of their
coreferent entity mentions. The resulting system obtains substantial
improvements over the previous state-of-the-art in predicting implicit
discourse relations in the Penn Discourse Treebank.
| Yangfeng Ji and Jacob Eisenstein | null | 1412.5673 | null | null |
Multiobjective Optimization of Classifiers by Means of 3-D Convex Hull
Based Evolutionary Algorithm | cs.NE cs.LG | Finding a good classifier is a multiobjective optimization problem with
different error rates and the costs to be minimized. The receiver operating
characteristic is widely used in the machine learning community to analyze the
performance of parametric classifiers or sets of Pareto optimal classifiers. In
order to directly compare two sets of classifiers the area (or volume) under
the convex hull can be used as a scalar indicator for the performance of a set
of classifiers in receiver operating characteristic space.
Recently, the convex hull based multiobjective genetic programming algorithm
was proposed and successfully applied to maximize the convex hull area for
binary classification problems. The contribution of this paper is to extend
this algorithm for dealing with higher dimensional problem formulations. In
particular, we discuss problems where parsimony (or classifier complexity) is
stated as a third objective and multi-class classification with three different
true classification rates to be maximized.
The design of the algorithm proposed in this paper is inspired by
indicator-based evolutionary algorithms, where first a performance indicator
for a solution set is established and then a selection operator is designed
that complies with the performance indicator. In this case, the performance
indicator will be the volume under the convex hull. The algorithm is tested and
analyzed in a proof of concept study on different benchmarks that are designed
for measuring its capability to capture relevant parts of a convex hull.
Further benchmark and application studies on email classification and feature
selection round off the analysis and assess the robustness and usefulness of the new
algorithm in real world settings.
| Jiaqi Zhao, Vitor Basto Fernandes, Licheng Jiao, Iryna Yevseyeva, Asep
Maulana, Rui Li, Thomas B\"ack, and Michael T. M. Emmerich | null | 1412.5710 | null | null |
An Algorithm for Online K-Means Clustering | cs.DS cs.LG | This paper shows that one can be competitive with the k-means objective while
operating online. In this model, the algorithm receives vectors v_1,...,v_n one
by one in an arbitrary order. For each vector the algorithm outputs a cluster
identifier before receiving the next one. Our online algorithm generates ~O(k)
clusters whose k-means cost is ~O(W*). Here, W* is the optimal k-means cost
using k clusters and ~O suppresses poly-logarithmic factors. We also show that,
experimentally, it is not much worse than k-means++ while operating in a
strictly more constrained computational model.
| Edo Liberty, Ram Sriharsha, Maxim Sviridenko | null | 1412.5721 | null | null |
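A toy version of the online setting described above: each vector receives a cluster identifier the moment it arrives. The rule below opens a new cluster whenever the nearest center is farther than a fixed radius and otherwise performs a sequential k-means update; it conveys the model but is not the paper's algorithm and carries none of its ~O(k) / ~O(W*) guarantees.

```python
import numpy as np

class OnlineClusterer:
    """Toy online clustering: each incoming vector gets a cluster id immediately.
    A new cluster is opened when the nearest center is farther than `radius`;
    otherwise the point joins that cluster and nudges its center (sequential
    k-means running-mean update). The fixed radius is an arbitrary choice."""
    def __init__(self, radius):
        self.radius = radius
        self.centers = []
        self.counts = []

    def assign(self, v):
        if self.centers:
            dists = [np.linalg.norm(v - c) for c in self.centers]
            j = int(np.argmin(dists))
            if dists[j] <= self.radius:
                self.counts[j] += 1
                self.centers[j] += (v - self.centers[j]) / self.counts[j]
                return j
        self.centers.append(v.astype(float).copy())
        self.counts.append(1)
        return len(self.centers) - 1

rng = np.random.default_rng(0)
true_centers = np.array([[0.0, 0.0], [5.0, 5.0], [-5.0, 5.0]])
stream = true_centers[rng.integers(0, 3, 500)] + rng.normal(scale=0.5, size=(500, 2))

clusterer = OnlineClusterer(radius=2.0)
labels = [clusterer.assign(v) for v in stream]
print("clusters opened:", len(clusterer.centers))
```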
Dynamic Structure Embedded Online Multiple-Output Regression for Stream
Data | cs.LG | Online multiple-output regression is an important machine learning technique
for modeling, predicting, and compressing multi-dimensional correlated data
streams. In this paper, we propose a novel online multiple-output regression
method, called MORES, for stream data. MORES can \emph{dynamically} learn the
structure of the coefficient changes in each update step to facilitate the
model's continuous refinement. We observe that the limited expressive ability of
the regression model, especially in the preliminary stage of online update,
often leads to the variables in the residual errors being dependent. In light
of this point, MORES intends to \emph{dynamically} learn and leverage the
structure of the residual errors to improve the prediction accuracy. Moreover,
we define three statistical variables to \emph{exactly} represent all the seen
samples for \emph{incrementally} calculating prediction loss in each online
update round, which avoids loading all the training data into memory for
updating the model, and also effectively prevents drastic fluctuations of the model
in the presence of noise. Furthermore, we introduce a forgetting factor to set
different weights on samples so as to track the data streams' evolving
characteristics quickly from the latest samples. Experiments on one synthetic
dataset and three real-world datasets validate the effectiveness of the
proposed method. In addition, the update speed of MORES is at least 2000
samples processed per second on the three real-world datasets, more than 15
times faster than the state-of-the-art online learning algorithm.
| Changsheng Li and Fan Wei and Weishan Dong and Qingshan Liu and
Xiangfeng Wang and Xin Zhang | null | 1412.5732 | null | null |
Stochastic Descent Analysis of Representation Learning Algorithms | stat.ML cs.LG | Although stochastic approximation learning methods have been widely used in
the machine learning literature for over 50 years, formal theoretical analyses
of specific machine learning algorithms are less common because stochastic
approximation theorems typically possess assumptions which are difficult to
communicate and verify. This paper presents a new stochastic approximation
theorem for state-dependent noise with easily verifiable assumptions applicable
to the analysis and design of important deep learning algorithms including:
adaptive learning, contrastive divergence learning, stochastic descent
expectation maximization, and active learning.
| Richard M. Golden | null | 1412.5744 | null | null |
On the Stability of Deep Networks | stat.ML cs.IT cs.LG cs.NE math.IT math.MG | In this work we study the properties of deep neural networks (DNN) with
random weights. We formally prove that these networks perform a
distance-preserving embedding of the data. Based on this we then draw
conclusions on the size of the training data and the networks' structure. A
longer version of this paper with more results and details can be found in
(Giryes et al., 2015). In particular, we formally prove in the longer version
that DNN with random Gaussian weights perform a distance-preserving embedding
of the data, with a special treatment for in-class and out-of-class data.
| Raja Giryes and Guillermo Sapiro and Alex M. Bronstein | null | 1412.5896 | null | null |
Nearest Descent, In-Tree, and Clustering | cs.LG cs.CV | In this paper, we propose a physically inspired graph-theoretical clustering
method, which first makes the data points organized into an attractive graph,
called In-Tree, via a physically inspired rule, called Nearest Descent (ND). In
particular, the rule of ND works to select the nearest node in the descending
direction of potential as the parent node of each node, which is in essence
different from the classical Gradient Descent or Steepest Descent. The
constructed In-Tree proves a very good candidate for clustering due to its
particular features and properties. In the In-Tree, the original clustering
problem is reduced to one of removing a very few undesired edges from this
graph. Pleasingly, the undesired edges in the In-Tree are so distinguishable
that they can be easily determined in either automatic or interactive way,
which is in stark contrast to the cases in the widely used Minimal Spanning
Tree and k-nearest-neighbor graph. The cluster number in the proposed method
can be easily determined based on some intermediate plots, and the cluster
assignment for each node is easily made by quickly searching its root node in
each sub-graph (also an In-Tree). The proposed method is extensively evaluated
on both synthetic and real-world datasets. Overall, the proposed clustering
method is a density-based one, but shows significant differences and advantages
in comparison to the traditional ones. The proposed method is simple yet
efficient and reliable, and is applicable to various datasets with diverse
shapes, attributes and any high dimensionality.
| Teng Qiu, Kaifu Yang, Chaoyi Li, Yongjie Li | null | 1412.5902 | null | null |
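The Nearest Descent rule described above is simple to state: assign every point a potential (here a kernel density estimate, which is an assumption on my part), let each point's parent be its nearest neighbor among points of higher potential, and cut the few longest edges of the resulting In-Tree to obtain clusters. The sketch below implements that reading with brute-force distances; it is a paraphrase of the construction, not the authors' code.

```python
import numpy as np

def in_tree_clustering(X, n_clusters, bandwidth=1.0):
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)   # pairwise distances
    potential = np.exp(-(D / bandwidth) ** 2).sum(axis=1)       # kernel "potential"

    # Nearest Descent: parent = nearest point with strictly higher potential.
    parent = np.full(n, -1)
    edge_len = np.zeros(n)
    for i in range(n):
        higher = np.where(potential > potential[i])[0]
        if higher.size:
            j = higher[np.argmin(D[i, higher])]
            parent[i], edge_len[i] = j, D[i, j]

    # Remove the longest edges to split the In-Tree into n_clusters subtrees.
    for i in np.argsort(edge_len)[::-1][: n_clusters - 1]:
        parent[i] = -1

    # Label each point by the root of its subtree.
    def root(i):
        return i if parent[i] == -1 else root(parent[i])
    return np.array([root(i) for i in range(n)])

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.3, size=(50, 2)) for c in ([0, 0], [3, 3], [0, 3])])
labels = in_tree_clustering(X, n_clusters=3)
print(len(set(labels)), "clusters found")
```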
Large Scale Distributed Distance Metric Learning | cs.LG | In large scale machine learning and data mining problems with high feature
dimensionality, the Euclidean distance between data points can be
uninformative, and Distance Metric Learning (DML) is often desired to learn a
proper similarity measure (using side information such as example data pairs
being similar or dissimilar). However, high dimensionality and large volume of
pairwise constraints in modern big data can lead to prohibitive computational
cost for both the original DML formulation in Xing et al. (2002) and later
extensions. In this paper, we present a distributed algorithm for DML, and a
large-scale implementation on a parameter server architecture. Our approach
builds on a parallelizable reformulation of Xing et al. (2002), and an
asynchronous stochastic gradient descent optimization procedure. To our
knowledge, this is the first distributed solution to DML, and we show that, on
a system with 256 CPU cores, our program is able to complete a DML task on a
dataset with 1 million data points, 22-thousand features, and 200 million
labeled data pairs, in 15 hours; and the learned metric shows great
effectiveness in properly measuring distances.
| Pengtao Xie and Eric Xing | null | 1412.5949 | null | null |
Tag-Aware Ordinal Sparse Factor Analysis for Learning and Content
Analytics | stat.ML cs.LG | Machine learning offers novel ways and means to design personalized learning
systems wherein each student's educational experience is customized in real
time depending on their background, learning goals, and performance to date.
SPARse Factor Analysis (SPARFA) is a novel framework for machine learning-based
learning analytics, which estimates a learner's knowledge of the concepts
underlying a domain, and content analytics, which estimates the relationships
among a collection of questions and those concepts. SPARFA jointly learns the
associations among the questions and the concepts, learner concept knowledge
profiles, and the underlying question difficulties, solely based on the
correct/incorrect graded responses of a population of learners to a collection
of questions. In this paper, we extend the SPARFA framework significantly to
enable: (i) the analysis of graded responses on an ordinal scale (partial
credit) rather than a binary scale (correct/incorrect); (ii) the exploitation
of tags/labels for questions that partially describe the question-concept
associations. The resulting Ordinal SPARFA-Tag framework greatly enhances the
interpretability of the estimated concepts. We demonstrate using real
educational data that Ordinal SPARFA-Tag outperforms both SPARFA and existing
collaborative filtering techniques in predicting missing learner responses.
| Andrew S. Lan, Christoph Studer, Andrew E. Waters, Richard G. Baraniuk | null | 1412.5967 | null | null |
Quantized Matrix Completion for Personalized Learning | stat.ML cs.LG | The recently proposed SPARse Factor Analysis (SPARFA) framework for
personalized learning performs factor analysis on ordinal or binary-valued
(e.g., correct/incorrect) graded learner responses to questions. The underlying
factors are termed "concepts" (or knowledge components) and are used for
learning analytics (LA), the estimation of learner concept-knowledge profiles,
and for content analytics (CA), the estimation of question-concept associations
and question difficulties. While SPARFA is a powerful tool for LA and CA, it
requires a number of algorithm parameters (including the number of concepts),
which are difficult to determine in practice. In this paper, we propose
SPARFA-Lite, a convex optimization-based method for LA that builds on matrix
completion, which only requires a single algorithm parameter and enables us to
automatically identify the required number of concepts. Using a variety of
educational datasets, we demonstrate that SPARFA-Lite (i) achieves performance
comparable to existing methods, including item response theory (IRT) and SPARFA,
in predicting unobserved learner responses, and (ii) is computationally
more efficient.
| Andrew S. Lan, Christoph Studer, Richard G. Baraniuk | null | 1412.5968 | null | null |
Automatic Training Data Synthesis for Handwriting Recognition Using the
Structural Crossing-Over Technique | cs.CV cs.LG | The paper presents a novel technique called "Structural Crossing-Over" to
synthesize qualified data for training machine learning-based handwriting
recognition. The proposed technique can provide a greater variety of patterns
of training data than the existing approaches such as elastic distortion and
tangent-based affine transformation. A couple of training characters are
chosen, then they are analyzed by their similar and different structures, and
finally are crossed over to generate the new characters. The experiments are
set to compare the performances of tangent-based affine transformation and the
proposed approach in terms of the variety of generated characters and percent
of recognition errors. The standard MNIST corpus including 60,000 training
characters and 10,000 test characters is employed in the experiments. The
proposed technique uses 1,000 characters to synthesize 60,000 characters, and
then uses these data to train and test the benchmark handwriting recognition
system that exploits Histogram of Gradient (HOG) as features and Support Vector
Machine (SVM) as recognizer. The experimental result yields an error rate of
8.06%. It significantly outperforms the tangent-based affine transformation and
the original MNIST training data, which yield 11.74% and 16.55%, respectively.
| Sirisak Visessenee, Sanparith Marukatat, and Rachada Kongkachandra | null | 1412.6018 | null | null |
Generative Deep Deconvolutional Learning | stat.ML cs.LG | A generative Bayesian model is developed for deep (multi-layer) convolutional
dictionary learning. A novel probabilistic pooling operation is integrated into
the deep model, yielding efficient bottom-up and top-down probabilistic
learning. After learning the deep convolutional dictionary, testing is
implemented via deconvolutional inference. To speed up this inference, a new
statistical approach is proposed to project the top-layer dictionary elements
to the data level. Following this, only one layer of deconvolution is required
during testing. Experimental results demonstrate powerful capabilities of the
model to learn multi-layer features from images. Excellent classification
results are obtained on both the MNIST and Caltech 101 datasets.
| Yunchen Pu, Xin Yuan and Lawrence Carin | null | 1412.6039 | null | null |
Learning Temporal Dependencies in Data Using a DBN-BLSTM | cs.LG cs.NE | Since the advent of deep learning, it has been used to solve various problems
using many different architectures. The application of such deep architectures
to auditory data is also not uncommon. However, these architectures do not
always adequately consider the temporal dependencies in data. We thus propose a
new generic architecture called the Deep Belief Network - Bidirectional Long
Short-Term Memory (DBN-BLSTM) network that models sequences by keeping track of
the temporal information while enabling deep representations in the data. We
demonstrate this new architecture by applying it to the task of music
generation and obtain state-of-the-art results.
| Kratarth Goel and Raunaq Vohra | null | 1412.6093 | null | null |
Theoretical and Numerical Analysis of Approximate Dynamic Programming
with Approximation Errors | cs.SY cs.LG math.OC stat.ML | This study is aimed at answering the famous question of how the approximation
errors at each iteration of Approximate Dynamic Programming (ADP) affect the
quality of the final results considering the fact that errors at each iteration
affect the next iteration. To this goal, convergence of Value Iteration scheme
of ADP for deterministic nonlinear optimal control problems with undiscounted
cost functions is investigated while considering the errors existing in
approximating respective functions. The boundedness of the results around the
optimal solution is obtained based on quantities which are known in a general
optimal control problem and assumptions which are verifiable. Moreover, since
the presence of the approximation errors leads to the deviation of the results
from optimality, sufficient conditions for stability of the system operated by
the result obtained after a finite number of value iterations, along with an
estimation of its region of attraction, are derived in terms of a calculable
upper bound of the control approximation error. Finally, the process of
implementation of the method on an orbital maneuver problem is investigated
through which the assumptions made in the theoretical developments are verified
and the sufficient conditions are applied for guaranteeing stability and near
optimality.
| Ali Heydari | null | 1412.6095 | null | null |
Compressing Deep Convolutional Networks using Vector Quantization | cs.CV cs.LG cs.NE | Deep convolutional neural networks (CNNs) have become the most promising method
for object recognition, repeatedly demonstrating record-breaking results for
image classification and object detection in recent years. However, a very deep
CNN generally involves many layers with millions of parameters, making the
storage of the network model extremely large. This prohibits the usage of
deep CNNs on resource limited hardware, especially cell phones or other
embedded devices. In this paper, we tackle this model storage issue by
investigating information theoretical vector quantization methods for
compressing the parameters of CNNs. In particular, we have found that, in terms
of compressing the most storage-demanding densely connected layers, vector
quantization methods have a clear gain over existing matrix factorization
methods. Simply applying k-means clustering to the weights or conducting
product quantization can lead to a very good balance between model size and
recognition accuracy. For the 1000-category classification task in the ImageNet
challenge, we are able to achieve 16-24 times compression of the network with
only 1% loss of classification accuracy using the state-of-the-art CNN.
| Yunchao Gong and Liu Liu and Ming Yang and Lubomir Bourdev | null | 1412.6115 | null | null |
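The simplest scheme discussed above, scalar quantization of a dense layer's weights with k-means, is easy to sketch: cluster the weight values, keep only the small codebook plus one 8-bit code per weight, and reconstruct on the fly. The layer shape, codebook size, and resulting ratio below are illustrative numbers of my own, not the paper's 16-24x figures.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
W = rng.normal(scale=0.05, size=(256, 128)).astype(np.float32)   # a toy dense layer

k = 256                                     # 256 centroids -> 8-bit codes
km = KMeans(n_clusters=k, n_init=3, random_state=0)
codes = km.fit_predict(W.reshape(-1, 1)).astype(np.uint8)        # one code per weight
codebook = km.cluster_centers_.astype(np.float32).ravel()

W_hat = codebook[codes].reshape(W.shape)    # reconstruction used at inference time

original_bits = W.size * 32
compressed_bits = codes.size * 8 + codebook.size * 32
print("compression ratio: %.1fx" % (original_bits / compressed_bits))
print("relative reconstruction error: %.4f" %
      (np.linalg.norm(W - W_hat) / np.linalg.norm(W)))
```

Product quantization would instead cluster short sub-vectors of each row rather than individual scalars, trading a larger codebook for fewer codes per layer.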
Efficient Decision-Making by Volume-Conserving Physical Object | cs.AI cs.LG nlin.AO physics.data-an | We demonstrate that any physical object, as long as its volume is conserved
when coupled with suitable operations, provides a sophisticated decision-making
capability. We consider the problem of finding, as accurately and quickly as
possible, the most profitable option from a set of options that gives
stochastic rewards. These decisions are made as dictated by a physical object,
which is moved in a manner similar to the fluctuations of a rigid body in a
tug-of-war game. Our analytical calculations validate statistical reasons why
our method exhibits higher efficiency than conventional algorithms.
| Song-Ju Kim, Masashi Aono, and Etsushi Nameda | 10.1088/1367-2630/17/8/083023 | 1412.6141 | null | null |
Example Selection For Dictionary Learning | cs.LG cs.AI stat.ML | In unsupervised learning, an unbiased uniform sampling strategy is typically
used, in order that the learned features faithfully encode the statistical
structure of the training data. In this work, we explore whether active example
selection strategies - algorithms that select which examples to use, based on
the current estimate of the features - can accelerate learning. Specifically,
we investigate effects of heuristic and saliency-inspired selection algorithms
on the dictionary learning task with sparse activations. We show that some
selection algorithms do improve the speed of learning, and we speculate on why
they might work.
| Tomoki Tsuchida and Garrison W. Cottrell | null | 1412.6177 | null | null |
Crypto-Nets: Neural Networks over Encrypted Data | cs.LG cs.CR cs.NE | The problem we address is the following: how can a user employ a predictive
model that is held by a third party, without compromising private information.
For example, a hospital may wish to use a cloud service to predict the
readmission risk of a patient. However, due to regulations, the patient's
medical files cannot be revealed. The goal is to make an inference using the
model, without jeopardizing the accuracy of the prediction or the privacy of
the data.
To achieve high accuracy, we use neural networks, which have been shown to
outperform other learning models for many tasks. To achieve the privacy
requirements, we use homomorphic encryption in the following protocol: the data
owner encrypts the data and sends the ciphertexts to the third party to obtain
a prediction from a trained model. The model operates on these ciphertexts and
sends back the encrypted prediction. In this protocol, not only does the data
remain private; even the predicted values are available only to the data
owner.
Using homomorphic encryption and modifications to the activation functions
and training algorithms of neural networks, we show that this protocol is
possible and may be feasible. This method paves the way to building secure
cloud-based neural network prediction services without invading users' privacy.
| Pengtao Xie and Misha Bilenko and Tom Finley and Ran Gilad-Bachrach
and Kristin Lauter and Michael Naehrig | null | 1412.6181 | null | null |
Multiple Authors Detection: A Quantitative Analysis of Dream of the Red
Chamber | cs.LG cs.CL | Inspired by the authorship controversy of Dream of the Red Chamber and the
application of machine learning in the study of literary stylometry, we develop
a rigorous new method for the mathematical analysis of authorship by testing
for a so-called chrono-divide in writing styles. Our method incorporates some
of the latest advances in the study of authorship attribution, particularly
techniques from support vector machines. By introducing the notion of relative
frequency as a feature ranking metric our method proves to be highly effective
and robust.
Applying our method to the Cheng-Gao version of Dream of the Red Chamber has
led to convincing if not irrefutable evidence that the first $80$ chapters and
the last $40$ chapters of the book were written by two different authors.
Furthermore, our analysis has unexpectedly provided strong support to the
hypothesis that Chapter 67 was not the work of Cao Xueqin either.
We have also applied our method to the other three Great Classical Novels in
Chinese. As expected no chrono-divides have been found. This provides further
evidence of the robustness of our method.
| Xianfeng Hu, Yang Wang and Qiang Wu | 10.1142/S1793536914500125 | 1412.6211 | null | null |
Purine: A bi-graph based deep learning framework | cs.NE cs.LG | In this paper, we introduce a novel deep learning framework, termed Purine.
In Purine, a deep network is expressed as a bipartite graph (bi-graph), which
is composed of interconnected operators and data tensors. With the bi-graph
abstraction, networks are easily solvable with event-driven task dispatcher. We
then demonstrate that different parallelism schemes over GPUs and/or CPUs on
single or multiple PCs can be universally implemented by graph composition.
This eases researchers from coding for various parallelization schemes, and the
same dispatcher can be used for solving variant graphs. Scheduled by the task
dispatcher, memory transfers are fully overlapped with other computations,
which greatly reduces the communication overhead and helps us achieve
approximately linear acceleration.
| Min Lin, Shuo Li, Xuan Luo, Shuicheng Yan | null | 1412.6249 | null | null |
Gradual training of deep denoising auto encoders | cs.LG cs.NE | Stacked denoising auto encoders (DAEs) are well known to learn useful deep
representations, which can be used to improve supervised training by
initializing a deep network. We investigate a training scheme of a deep DAE,
where DAE layers are gradually added and keep adapting as additional layers are
added. We show that in the regime of mid-sized datasets, this gradual training
provides a small but consistent improvement over stacked training in both
reconstruction quality and classification error on the MNIST and CIFAR
datasets.
| Alexander Kalmanovich and Gal Chechik | null | 1412.6257 | null | null |
From dependency to causality: a machine learning approach | cs.LG cs.AI stat.ML | The relationship between statistical dependency and causality lies at the
heart of all statistical approaches to causal inference. Recent results in the
ChaLearn cause-effect pair challenge have shown that causal directionality can
be inferred with good accuracy also in Markov indistinguishable configurations
thanks to data driven approaches. This paper proposes a supervised machine
learning approach to infer the existence of a directed causal link between two
variables in multivariate settings with $n>2$ variables. The approach relies on
the asymmetry of some conditional (in)dependence relations between the members
of the Markov blankets of two variables causally connected. Our results show
that supervised learning methods may be successfully used to extract causal
information on the basis of asymmetric statistical descriptors also for $n>2$
variate distributions.
| Gianluca Bontempi and Maxime Flauder | null | 1412.6285 | null | null |
Regression with Linear Factored Functions | cs.LG stat.ML | Many applications that use empirically estimated functions face a curse of
dimensionality, because the integrals over most function classes must be
approximated by sampling. This paper introduces a novel regression-algorithm
that learns linear factored functions (LFF). This class of functions has
structural properties that allow certain integrals to be solved analytically and
point-wise products to be calculated. Applications like belief propagation and
reinforcement learning can exploit these properties to break the curse and
speed up computation. We derive a regularized greedy optimization scheme, that
learns factored basis functions during training. The novel regression algorithm
performs competitively with Gaussian processes on benchmark tasks, and the
learned LFF functions, with 4-9 factored basis functions on average, are very
compact.
| Wendelin B\"ohmer and Klaus Obermayer | null | 1412.6286 | null | null |
Generative Modeling of Convolutional Neural Networks | cs.CV cs.LG cs.NE | The convolutional neural networks (CNNs) have proven to be a powerful tool
for discriminative learning. Recently researchers have also started to show
interest in the generative aspects of CNNs in order to gain a deeper
understanding of what they have learned and how to further improve them. This
paper investigates generative modeling of CNNs. The main contributions include:
(1) We construct a generative model for the CNN in the form of exponential
tilting of a reference distribution. (2) We propose a generative gradient for
pre-training CNNs by a non-parametric importance sampling scheme, which is
fundamentally different from the commonly used discriminative gradient, and yet
has the same computational architecture and cost as the latter. (3) We propose
a generative visualization method for the CNNs by sampling from an explicit
parametric image distribution. The proposed visualization method can directly
draw synthetic samples for any given node in a trained CNN by the Hamiltonian
Monte Carlo (HMC) algorithm, without resorting to any extra hold-out images.
Experiments on the challenging ImageNet benchmark show that the proposed
generative gradient pre-training consistently helps improve the performances of
CNNs, and the proposed generative visualization method generates meaningful and
varied samples of synthetic images from a large-scale deep CNN.
| Jifeng Dai, Yang Lu, Ying-Nian Wu | null | 1412.6296 | null | null |
Distributed Decision Trees | cs.LG stat.ML | Recently proposed budding tree is a decision tree algorithm in which every
node is part internal node and part leaf. This allows representing every
decision tree in a continuous parameter space, and therefore a budding tree can
be jointly trained with backpropagation, like a neural network. Even though
this continuity allows it to be used in hierarchical representation learning,
the learned representations are local: Activation makes a soft selection among
all root-to-leaf paths in a tree. In this work we extend the budding tree and
propose the distributed tree where the children use different and independent
splits and hence multiple paths in a tree can be traversed at the same time.
This ability to combine multiple paths gives the power of a distributed
representation, as in a traditional perceptron layer. We show that distributed
trees perform comparably or better than budding and traditional hard trees on
classification and regression tasks.
| Ozan \.Irsoy, Ethem Alpayd{\i}n | null | 1412.6388 | null | null |
Inducing Semantic Representation from Text by Jointly Predicting and
Factorizing Relations | cs.CL cs.LG stat.ML | In this work, we propose a new method to integrate two recent lines of work:
unsupervised induction of shallow semantics (e.g., semantic roles) and
factorization of relations in text and knowledge bases. Our model consists of
two components: (1) an encoding component: a semantic role labeling model which
predicts roles given a rich set of syntactic and lexical features; (2) a
reconstruction component: a tensor factorization model which relies on roles to
predict argument fillers. When the components are estimated jointly to minimize
errors in argument reconstruction, the induced roles largely correspond to
roles defined in annotated resources. Our method performs on par with most
accurate role induction methods on English, even though, unlike these previous
approaches, we do not incorporate any prior linguistic knowledge about the
language.
| Ivan Titov and Ehsan Khoddam | null | 1412.6418 | null | null |
Grounding Hierarchical Reinforcement Learning Models for Knowledge
Transfer | cs.LG cs.AI cs.RO | Methods of deep machine learning make it possible to reuse low-level
representations efficiently for generating more abstract high-level representations.
Originally, deep learning has been applied passively (e.g., for classification
purposes). Recently, it has been extended to estimate the value of actions for
autonomous agents within the framework of reinforcement learning (RL). Explicit
models of the environment can be learned to augment such a value function.
Although "flat" connectionist methods have already been used for model-based
RL, up to now, only model-free variants of RL have been equipped with methods
from deep learning. We propose a variant of deep model-based RL that enables an
agent to learn arbitrarily abstract hierarchical representations of its
environment. In this paper, we present research on how such hierarchical
representations can be grounded in sensorimotor interaction between an agent
and its environment.
| Mark Wernsdorfer, Ute Schmid | null | 1412.6451 | null | null |
Algorithmic Robustness for Learning via $(\epsilon, \gamma, \tau)$-Good
Similarity Functions | cs.LG | The notion of metric plays a key role in machine learning problems such as
classification, clustering or ranking. However, it is worth noting that there
is a severe lack of theoretical guarantees that can be expected on the
generalization capacity of the classifier associated to a given metric. The
theoretical framework of $(\epsilon, \gamma, \tau)$-good similarity functions
(Balcan et al., 2008) has been one of the first attempts to draw a link between
the properties of a similarity function and those of a linear classifier making
use of it. In this paper, we extend and complete this theory by providing a new
generalization bound for the associated classifier based on the algorithmic
robustness framework.
| Maria-Irina Nicolae, Marc Sebban, Amaury Habrard, \'Eric Gaussier and
Massih-Reza Amini | null | 1412.6452 | null | null |
A la Carte - Learning Fast Kernels | cs.LG stat.ML | Kernel methods have great promise for learning rich statistical
representations of large modern datasets. However, compared to neural networks,
kernel methods have been perceived as lacking in scalability and flexibility.
We introduce a family of fast, flexible, lightly parametrized and general
purpose kernel learning methods, derived from Fastfood basis function
expansions. We provide mechanisms to learn the properties of groups of spectral
frequencies in these expansions, which require only O(mlogd) time and O(m)
memory, for m basis functions and d input dimensions. We show that the proposed
methods can learn a wide class of kernels, outperforming the alternatives in
accuracy, speed, and memory consumption.
| Zichao Yang and Alexander J. Smola and Le Song and Andrew Gordon
Wilson | null | 1412.6493 | null | null |
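
As background for the basis-function expansions mentioned above, here is a minimal numpy sketch of plain random Fourier features for an RBF kernel; it is not the Fastfood construction or the learned-frequency variant the paper proposes, and the function name, bandwidth parameter, and toy data are illustrative.

    import numpy as np

    def random_fourier_features(X, m=256, bandwidth=1.0, seed=0):
        """Map X (n x d) to m random cosine features approximating an RBF kernel."""
        rng = np.random.default_rng(seed)
        n, d = X.shape
        W = rng.normal(0.0, 1.0 / bandwidth, size=(d, m))   # spectral frequencies
        b = rng.uniform(0.0, 2 * np.pi, size=m)              # random phases
        return np.sqrt(2.0 / m) * np.cos(X @ W + b)

    # Inner products of the features approximate the Gaussian kernel values.
    X = np.random.randn(5, 3)
    Z = random_fourier_features(X)
    approx_kernel = Z @ Z.T
    print(approx_kernel.shape)
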
Detecting Epileptic Seizures from EEG Data using Neural Networks | cs.LG cs.NE q-bio.NC | We explore the use of neural networks trained with dropout in predicting
epileptic seizures from electroencephalographic data (scalp EEG). The input to
the neural network is a 126-dimensional feature vector containing 9 features for each of
the 14 EEG channels obtained over 1-second, non-overlapping windows. The models
in our experiments achieved high sensitivity and specificity on patient records
not used in the training process. This is demonstrated using
leave-one-out cross-validation across patient records, where we hold out one
patient's record as the test set and use all other patients' records for
training; repeating this procedure for all patients in the database.
| Siddharth Pramod, Adam Page, Tinoosh Mohsenin and Tim Oates | null | 1412.6502 | null | null |
Cauchy Principal Component Analysis | cs.LG stat.ML | Principal Component Analysis (PCA) has wide applications in machine learning,
text mining and computer vision. Classical PCA based on a Gaussian noise model
is fragile to noise of large magnitude. Laplace noise assumption based PCA
methods cannot deal with dense noise effectively. In this paper, we propose
Cauchy Principal Component Analysis (Cauchy PCA), a very simple yet effective
PCA method which is robust to various types of noise. We utilize Cauchy
distribution to model noise and derive Cauchy PCA under the maximum likelihood
estimation (MLE) framework with low rank constraint. Our method can robustly
estimate the low rank matrix regardless of whether noise is large or small,
dense or sparse. We analyze the robustness of Cauchy PCA from a robust
statistics view and present an efficient singular value projection optimization
method. Experimental results on both simulated data and real applications
demonstrate the robustness of Cauchy PCA to various noise patterns.
| Pengtao Xie and Eric Xing | null | 1412.6506 | null | null |
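
A minimal numpy sketch of the Cauchy noise model underlying the method: the negative log-likelihood of the residuals grows only logarithmically, so gross corruptions are penalized far less than under a Gaussian model. The rank projection via truncated SVD and the scale parameter gamma are illustrative assumptions, not the paper's singular value projection algorithm.

    import numpy as np

    def cauchy_nll(X, L, gamma=1.0):
        """Negative log-likelihood of residuals X - L under a Cauchy noise model."""
        R = (X - L) / gamma
        return np.sum(np.log(1.0 + R ** 2))

    def project_rank_k(M, k):
        """Project M onto the set of rank-k matrices via a truncated SVD."""
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        return (U[:, :k] * s[:k]) @ Vt[:k]

    # Toy low-rank data with a few huge corruptions (illustrative, not the paper's setup).
    rng = np.random.default_rng(0)
    L_true = rng.normal(size=(50, 2)) @ rng.normal(size=(2, 30))
    X = L_true + 0.1 * rng.normal(size=L_true.shape)
    X[rng.integers(0, 50, 20), rng.integers(0, 30, 20)] += 100.0  # gross outliers
    L_hat = project_rank_k(X, 2)
    print(cauchy_nll(X, L_hat))
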
Score Function Features for Discriminative Learning | cs.LG stat.ML | Feature learning forms the cornerstone for tackling challenging learning
problems in domains such as speech, computer vision and natural language
processing. In this paper, we consider a novel class of matrix and
tensor-valued features, which can be pre-trained using unlabeled samples. We
present efficient algorithms for extracting discriminative information, given
these pre-trained features and labeled samples for any related task. Our class
of features are based on higher-order score functions, which capture local
variations in the probability density function of the input. We establish a
theoretical framework to characterize the nature of discriminative information
that can be extracted from score-function features, when used in conjunction
with labeled samples. We employ efficient spectral decomposition algorithms (on
matrices and tensors) for extracting discriminative components. The advantage
of employing tensor-valued features is that we can extract richer
discriminative information in the form of overcomplete representations.
Thus, we present a novel framework for employing generative models of the input
for discriminative learning.
| Majid Janzamin and Hanie Sedghi and Anima Anandkumar | null | 1412.6514 | null | null |
Qualitatively characterizing neural network optimization problems | cs.NE cs.LG stat.ML | Training neural networks involves solving large-scale non-convex optimization
problems. This task has long been believed to be extremely difficult, with fear
of local minima and other obstacles motivating a variety of schemes to improve
optimization, such as unsupervised pretraining. However, modern neural networks
are able to achieve negligible training error on complex tasks, using only
direct training with stochastic gradient descent. We introduce a simple
analysis technique to look for evidence that such networks are overcoming local
optima. We find that, in fact, on a straight path from initialization to
solution, a variety of state of the art neural networks never encounter any
significant obstacles.
| Ian J. Goodfellow, Oriol Vinyals, and Andrew M. Saxe | null | 1412.6544 | null | null |
Fast Label Embeddings via Randomized Linear Algebra | cs.LG | Many modern multiclass and multilabel problems are characterized by
increasingly large output spaces. For these problems, label embeddings have
been shown to be a useful primitive that can improve computational and
statistical efficiency. In this work we utilize a correspondence between rank
constrained estimation and low dimensional label embeddings that uncovers a
fast label embedding algorithm which works in both the multiclass and
multilabel settings. The result is a randomized algorithm whose running time is
exponentially faster than naive algorithms. We demonstrate our techniques on
two large-scale public datasets, from the Large Scale Hierarchical Text
Challenge and the Open Directory Project, where we obtain state of the art
results.
| Paul Mineiro and Nikos Karampatziakis | null | 1412.6547 | null | null |
FitNets: Hints for Thin Deep Nets | cs.LG cs.NE | While depth tends to improve network performance, it also makes
gradient-based training more difficult since deeper networks tend to be more
non-linear. The recently proposed knowledge distillation approach is aimed at
obtaining small and fast-to-execute models, and it has shown that a student
network could imitate the soft output of a larger teacher network or ensemble
of networks. In this paper, we extend this idea to allow the training of a
student that is deeper and thinner than the teacher, using not only the outputs
but also the intermediate representations learned by the teacher as hints to
improve the training process and final performance of the student. Because the
student intermediate hidden layer will generally be smaller than the teacher's
intermediate hidden layer, additional parameters are introduced to map the
student hidden layer to the prediction of the teacher hidden layer. This allows
one to train deeper students that can generalize better or run faster, a
trade-off that is controlled by the chosen student capacity. For example, on
CIFAR-10, a deep student network with almost 10.4 times fewer parameters
outperforms a larger, state-of-the-art teacher network.
| Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine
Chassang, Carlo Gatta and Yoshua Bengio | null | 1412.6550 | null | null |
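
A minimal numpy sketch of the hint objective described above: a regressor maps the thinner student's intermediate layer into the teacher's space and the L2 distance is minimized. The linear regressor, layer sizes, and learning rate are illustrative (the paper uses a convolutional regressor and also updates the student's own weights).

    import numpy as np

    rng = np.random.default_rng(0)
    batch, d_student, d_teacher = 32, 64, 256

    # Intermediate activations of the guided (student) and hint (teacher) layers.
    h_student = rng.normal(size=(batch, d_student))
    h_teacher = rng.normal(size=(batch, d_teacher))

    # Regressor parameters mapping the student space into the teacher space.
    W = rng.normal(scale=0.01, size=(d_student, d_teacher))

    def hint_loss(h_s, h_t, W):
        """L2 distance between the teacher hint and the regressed student layer."""
        return 0.5 * np.mean(np.sum((h_t - h_s @ W) ** 2, axis=1))

    # One gradient step on the regressor only (a simplification for the sketch).
    pred = h_student @ W
    grad_W = -h_student.T @ (h_teacher - pred) / batch
    W -= 1e-3 * grad_W
    print(hint_loss(h_student, h_teacher, W))
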
Speeding-up Convolutional Neural Networks Using Fine-tuned
CP-Decomposition | cs.CV cs.LG | We propose a simple two-step approach for speeding up convolution layers
within large convolutional neural networks based on tensor decomposition and
discriminative fine-tuning. Given a layer, we use non-linear least squares to
compute a low-rank CP-decomposition of the 4D convolution kernel tensor into a
sum of a small number of rank-one tensors. At the second step, this
decomposition is used to replace the original convolutional layer with a
sequence of four convolutional layers with small kernels. After such
replacement, the entire network is fine-tuned on the training data using
standard backpropagation process.
We evaluate this approach on two CNNs and show that it is competitive with
previous approaches, obtaining higher CPU speedups with smaller accuracy drops
for the smaller of the two networks. Thus, for the 36-class character
classification CNN, our approach obtains an 8.5x CPU speedup of the whole
network with only a minor accuracy drop (1% from 91% to 90%). For the
the standard ImageNet architecture (AlexNet), the approach speeds up the second
convolution layer by a factor of 4x at the cost of $1\%$ increase of the
overall top-5 classification error.
| Vadim Lebedev, Yaroslav Ganin, Maksim Rakhuba, Ivan Oseledets, Victor
Lempitsky | null | 1412.6553 | null | null |
Random Walk Initialization for Training Very Deep Feedforward Networks | cs.NE cs.LG stat.ML | Training very deep networks is an important open problem in machine learning.
One of many difficulties is that the norm of the back-propagated error gradient
can grow or decay exponentially. Here we show that training very deep
feed-forward networks (FFNs) is not as difficult as previously thought. Unlike
when back-propagation is applied to a recurrent network, application to an FFN
amounts to multiplying the error gradient by a different random matrix at each
layer. We show that the successive application of correctly scaled random
matrices to an initial vector results in a random walk of the log of the norm
of the resulting vectors, and we compute the scaling that makes this walk
unbiased. The variance of the random walk grows only linearly with network
depth and is inversely proportional to the size of each layer. Practically,
this implies a gradient whose log-norm scales with the square root of the
network depth and shows that the vanishing gradient problem can be mitigated by
increasing the width of the layers. Mathematical analyses and experimental
results using stochastic gradient descent to optimize tasks related to the
MNIST and TIMIT datasets are provided to support these claims. Equations for
the optimal matrix scaling are provided for the linear and ReLU cases.
| David Sussillo, L.F. Abbott | null | 1412.6558 | null | null |
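
A small numpy experiment in the spirit of the abstract: propagate a vector through many random linear layers scaled by g/sqrt(N) and track the log of its norm, which behaves like a random walk whose drift depends on g. The specific g values and depth are illustrative, not the paper's derived optimal scalings.

    import numpy as np

    def log_norm_walk(depth=200, width=500, g=1.0, seed=0):
        """Track log ||h_l|| as a vector passes through scaled random linear layers."""
        rng = np.random.default_rng(seed)
        h = rng.normal(size=width)
        logs = []
        for _ in range(depth):
            W = rng.normal(scale=g / np.sqrt(width), size=(width, width))
            h = W @ h
            logs.append(np.log(np.linalg.norm(h)))
        return np.array(logs)

    # g = 1.0 keeps the walk roughly unbiased for wide linear layers; smaller or
    # larger g makes the log-norm drift down or up roughly linearly with depth.
    for g in (0.9, 1.0, 1.1):
        walk = log_norm_walk(g=g)
        print(g, walk[-1] - walk[0])
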
Self-informed neural network structure learning | stat.ML cs.CV cs.LG cs.NE | We study the problem of large scale, multi-label visual recognition with a
large number of possible classes. We propose a method for augmenting a trained
neural network classifier with auxiliary capacity in a manner designed to
significantly improve upon an already well-performing model, while minimally
impacting its computational footprint. Using the predictions of the network
itself as a descriptor for assessing visual similarity, we define a
partitioning of the label space into groups of visually similar entities. We
then augment the network with auxiliary hidden layer pathways with
connectivity only to these groups of label units. We report a significant
improvement in mean average precision on a large-scale object recognition task
with the augmented model, while increasing the number of multiply-adds by less
than 3%.
| David Warde-Farley, Andrew Rabinovich, Dragomir Anguelov | null | 1412.6563 | null | null |
Move Evaluation in Go Using Deep Convolutional Neural Networks | cs.LG cs.NE | The game of Go is more challenging than other board games, due to the
difficulty of constructing a position or move evaluation function. In this
paper we investigate whether deep convolutional networks can be used to
directly represent and learn this knowledge. We train a large 12-layer
convolutional neural network by supervised learning from a database of human
professional games. The network correctly predicts the expert move in 55% of
positions, equalling the accuracy of a 6 dan human player. When the trained
convolutional network was used directly to play games of Go, without any
search, it beat the traditional search program GnuGo in 97% of games, and
matched the performance of a state-of-the-art Monte-Carlo tree search that
simulates a million positions per move.
| Chris J. Maddison, Aja Huang, Ilya Sutskever, David Silver | null | 1412.6564 | null | null |
Improving zero-shot learning by mitigating the hubness problem | cs.CL cs.LG | The zero-shot paradigm exploits vector-based word representations extracted
from text corpora with unsupervised methods to learn general mapping functions
from other feature spaces onto word space, where the words associated to the
nearest neighbours of the mapped vectors are used as their linguistic labels.
We show that the neighbourhoods of the mapped elements are strongly polluted by
hubs, vectors that tend to be near a high proportion of items, pushing their
correct labels down the neighbour list. After illustrating the problem
empirically, we propose a simple method to correct it by taking the proximity
distribution of potential neighbours across many mapped vectors into account.
We show that this correction leads to consistent improvements in realistic
zero-shot experiments in the cross-lingual, image labeling and image retrieval
domains.
| Georgiana Dinu, Angeliki Lazaridou, Marco Baroni | null | 1412.6568 | null | null |
Explaining and Harnessing Adversarial Examples | stat.ML cs.LG | Several machine learning models, including neural networks, consistently
misclassify adversarial examples---inputs formed by applying small but
intentionally worst-case perturbations to examples from the dataset, such that
the perturbed input results in the model outputting an incorrect answer with
high confidence. Early attempts at explaining this phenomenon focused on
nonlinearity and overfitting. We argue instead that the primary cause of neural
networks' vulnerability to adversarial perturbation is their linear nature.
This explanation is supported by new quantitative results while giving the
first explanation of the most intriguing fact about them: their generalization
across architectures and training sets. Moreover, this view yields a simple and
fast method of generating adversarial examples. Using this approach to provide
examples for adversarial training, we reduce the test set error of a maxout
network on the MNIST dataset.
| Ian J. Goodfellow, Jonathon Shlens, Christian Szegedy | null | 1412.6572 | null | null |
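
A minimal numpy sketch of the fast gradient sign method on a logistic-regression classifier: the input is perturbed by epsilon times the sign of the loss gradient with respect to the input. The model, weights, and epsilon are illustrative.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def fgsm_example(x, y, w, b, epsilon=0.1):
        """Perturb x in the direction of the sign of the loss gradient w.r.t. the input."""
        # Logistic loss L = -[y log p + (1-y) log(1-p)], with p = sigmoid(w.x + b).
        p = sigmoid(w @ x + b)
        grad_x = (p - y) * w          # dL/dx for the logistic loss
        return x + epsilon * np.sign(grad_x)

    rng = np.random.default_rng(0)
    w, b = rng.normal(size=20), 0.0
    x, y = rng.normal(size=20), 1.0
    x_adv = fgsm_example(x, y, w, b, epsilon=0.25)
    # Probability assigned to the true class drops after the perturbation.
    print(sigmoid(w @ x + b), sigmoid(w @ x_adv + b))
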
Modeling Compositionality with Multiplicative Recurrent Neural Networks | cs.LG cs.CL stat.ML | We present the multiplicative recurrent neural network as a general model for
compositional meaning in language, and evaluate it on the task of fine-grained
sentiment analysis. We establish a connection to the previously investigated
matrix-space models for compositionality, and show they are special cases of
the multiplicative recurrent net. Our experiments show that these models
perform comparably or better than Elman-type additive recurrent neural networks
and outperform matrix-space models on a standard fine-grained sentiment
analysis corpus. Furthermore, they yield comparable results to structural deep
models on the recently published Stanford Sentiment Treebank without the need
for generating parse trees.
| Ozan \.Irsoy, Claire Cardie | null | 1412.6577 | null | null |
Variational Recurrent Auto-Encoders | stat.ML cs.LG cs.NE | In this paper we propose a model that combines the strengths of RNNs and
SGVB: the Variational Recurrent Auto-Encoder (VRAE). Such a model can be used
for efficient, large scale unsupervised learning on time series data, mapping
the time series data to a latent vector representation. The model is
generative, such that data can be generated from samples of the latent space.
An important contribution of this work is that the model can make use of
unlabeled data in order to facilitate supervised training of RNNs by
initialising the weights and network state.
| Otto Fabius, Joost R. van Amersfoort | null | 1412.6581 | null | null |
Discovering Hidden Factors of Variation in Deep Networks | cs.LG cs.CV cs.NE | Deep learning has enjoyed a great deal of success because of its ability to
learn useful features for tasks such as classification. But there has been less
exploration in learning the factors of variation apart from the classification
signal. By augmenting autoencoders with simple regularization terms during
training, we demonstrate that standard deep architectures can discover and
explicitly represent factors of variation beyond those relevant for
categorization. We introduce a cross-covariance penalty (XCov) as a method to
disentangle factors like handwriting style for digits and subject identity in
faces. We demonstrate this on the MNIST handwritten digit database, the Toronto
Faces Database (TFD) and the Multi-PIE dataset by generating manipulated
instances of the data. Furthermore, we demonstrate these deep networks can
extrapolate `hidden' variation in the supervised signal.
| Brian Cheung, Jesse A. Livezey, Arjun K. Bansal, Bruno A. Olshausen | null | 1412.6583 | null | null |
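
A minimal numpy sketch of a cross-covariance penalty between the class-prediction units and the remaining latent units, in the spirit of the XCov term described above; the 0.5 scaling and the way the penalty would be weighted in the full training objective are assumptions.

    import numpy as np

    def xcov_penalty(Y, Z):
        """Sum of squared cross-covariances between two groups of hidden units.

        Y: (n, c) class-related activations, Z: (n, k) remaining latent units.
        Driving this toward zero encourages Z to carry only class-independent variation.
        """
        Yc = Y - Y.mean(axis=0, keepdims=True)
        Zc = Z - Z.mean(axis=0, keepdims=True)
        C = Yc.T @ Zc / Y.shape[0]      # (c, k) cross-covariance matrix
        return 0.5 * np.sum(C ** 2)

    rng = np.random.default_rng(0)
    Y = rng.normal(size=(128, 10))
    Z = 0.5 * Y[:, :3] + rng.normal(size=(128, 3))  # correlated on purpose
    print(xcov_penalty(Y, Z))                        # > 0: the penalty detects the leakage
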
A deep-structured fully-connected random field model for structured
inference | stat.ML cs.IT cs.LG math.IT stat.ME | There has been significant interest in the use of fully-connected graphical
models and deep-structured graphical models for the purpose of structured
inference. However, fully-connected and deep-structured graphical models have
been largely explored independently, leaving the unification of these two
concepts ripe for exploration. A fundamental challenge with unifying these two
types of models is in dealing with computational complexity. In this study, we
investigate the feasibility of unifying fully-connected and deep-structured
models in a computationally tractable manner for the purpose of structured
inference. To accomplish this, we introduce a deep-structured fully-connected
random field (DFRF) model that integrates a series of intermediate sparse
auto-encoding layers placed between state layers to significantly reduce
computational complexity. The problem of image segmentation was used to
illustrate the feasibility of using the DFRF for structured inference in a
computationally tractable manner. Results in this study show that it is
feasible to unify fully-connected and deep-structured models in a
computationally tractable manner for solving structured inference problems such
as image segmentation.
| Alexander Wong, Mohammad Javad Shafiee, Parthipan Siva, and Xiao Yu
Wang | 10.1109/ACCESS.2015.2425304 | 1412.6586 | null | null |
Training Deep Neural Networks on Noisy Labels with Bootstrapping | cs.CV cs.LG cs.NE | Current state-of-the-art deep learning systems for visual object recognition
and detection use purely supervised training with regularization such as
dropout to avoid overfitting. The performance depends critically on the amount
of labeled examples, and in current practice the labels are assumed to be
unambiguous and accurate. However, this assumption often does not hold; e.g. in
recognition, class labels may be missing; in detection, objects in the image
may not be localized; and in general, the labeling may be subjective. In this
work we propose a generic way to handle noisy and incomplete labeling by
augmenting the prediction objective with a notion of consistency. We consider a
prediction consistent if the same prediction is made given similar percepts,
where the notion of similarity is between deep network features computed from
the input data. In experiments we demonstrate that our approach yields
substantial robustness to label noise on several datasets. On MNIST handwritten
digits, we show that our model is robust to label corruption. On the Toronto
Face Database, we show that our model handles well the case of subjective
labels in emotion recognition, achieving state-of-the-art results, and can
also benefit from unlabeled face images with no modification to our method. On
the ILSVRC2014 detection challenge data, we show that our approach extends to
very deep networks, high resolution images and structured outputs, and results
in improved scalable detection.
| Scott Reed, Honglak Lee, Dragomir Anguelov, Christian Szegedy, Dumitru
Erhan, Andrew Rabinovich | null | 1412.6596 | null | null |
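
A minimal numpy sketch of the "soft bootstrapping" idea suggested by the consistency objective above: the cross-entropy target is a convex combination of the observed (possibly noisy) label and the model's own current prediction. The beta value and toy distributions are illustrative.

    import numpy as np

    def soft_bootstrap_loss(pred, noisy_onehot, beta=0.95, eps=1e-12):
        """Cross-entropy against beta * observed label + (1 - beta) * model prediction."""
        target = beta * noisy_onehot + (1.0 - beta) * pred
        return -np.mean(np.sum(target * np.log(pred + eps), axis=1))

    # Toy 3-class example: a confident, self-consistent prediction is penalized
    # less for disagreeing with a suspect label than under plain cross-entropy.
    pred = np.array([[0.05, 0.9, 0.05]])
    noisy_label = np.array([[1.0, 0.0, 0.0]])     # possibly mislabeled
    plain_ce = -np.sum(noisy_label * np.log(pred))
    print(plain_ce, soft_bootstrap_loss(pred, noisy_label, beta=0.8))
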
An Analysis of Unsupervised Pre-training in Light of Recent Advances | cs.CV cs.LG cs.NE | Convolutional neural networks perform well on object recognition because of a
number of recent advances: rectified linear units (ReLUs), data augmentation,
dropout, and large labelled datasets. Unsupervised data has been proposed as
another way to improve performance. Unfortunately, unsupervised pre-training is
not used by state-of-the-art methods, leading to the following question: Is
unsupervised pre-training still useful given recent advances? If so, when? We
answer this in three parts: we 1) develop an unsupervised method that
incorporates ReLUs and recent unsupervised regularization techniques, 2)
analyze the benefits of unsupervised pre-training compared to data augmentation
and dropout on CIFAR-10 while varying the ratio of unsupervised to supervised
samples, 3) verify our findings on STL-10. We discover unsupervised
pre-training, as expected, helps when the ratio of unsupervised to supervised
samples is high, and surprisingly, hurts when the ratio is low. We also use
unsupervised pre-training with additional color augmentation to achieve near
state-of-the-art performance on STL-10.
| Tom Le Paine, Pooya Khorrami, Wei Han, Thomas S. Huang | null | 1412.6597 | null | null |
Automatic Discovery and Optimization of Parts for Image Classification | cs.CV cs.LG | Part-based representations have been shown to be very useful for image
classification. Learning part-based models is often viewed as a two-stage
problem. First, a collection of informative parts is discovered, using
heuristics that promote part distinctiveness and diversity, and then
classifiers are trained on the vector of part responses. In this paper we unify
the two stages and learn the image classifiers and a set of shared parts
jointly. We generate an initial pool of parts by randomly sampling part
candidates and selecting a good subset using L1/L2 regularization. All steps
are driven "directly" by the same objective namely the classification loss on a
training set. This lets us do away with engineered heuristics. We also
introduce the notion of "negative parts", intended as parts that are negatively
correlated with one or more classes. Negative parts are complementary to the
parts discovered by other methods, which look only for positive correlations.
| Sobhan Naderi Parizi, Andrea Vedaldi, Andrew Zisserman and Pedro
Felzenszwalb | null | 1412.6598 | null | null |
Hot Swapping for Online Adaptation of Optimization Hyperparameters | cs.LG | We describe a general framework for online adaptation of optimization
hyperparameters by `hot swapping' their values during learning. We investigate
this approach in the context of adaptive learning rate selection using an
explore-exploit strategy from the multi-armed bandit literature. Experiments on
a benchmark neural network show that the hot swapping approach leads to
consistently better solutions compared to well-known alternatives such as
AdaDelta and stochastic gradient with exhaustive hyperparameter search.
| Kevin Bache, Dennis DeCoste, Padhraic Smyth | null | 1412.6599 | null | null |
Using Neural Networks for Click Prediction of Sponsored Search | cs.LG cs.NE | Sponsored search is a multi-billion dollar industry and makes up a major
source of revenue for search engines (SE). Click-through rate (CTR) estimation
plays a crucial role for ads selection, and greatly affects the SE revenue,
advertiser traffic and user experience. We propose a novel architecture for
solving CTR prediction problem by combining artificial neural networks (ANN)
with decision trees. First we compare ANN with respect to other popular machine
learning models being used for this task. Then we go on to combine ANN with
MatrixNet (proprietary implementation of boosted trees) and evaluate the
performance of the system as a whole. The results show that our approach
provides significant improvement over existing models.
| Afroze Ibrahim Baqapuri and Ilya Trofimov | null | 1412.6601 | null | null |
Video (language) modeling: a baseline for generative models of natural
videos | cs.LG cs.CV | We propose a strong baseline model for unsupervised feature learning using
video data. By learning to predict missing frames or extrapolate future frames
from an input video sequence, the model discovers both spatial and temporal
correlations which are useful to represent complex deformations and motion
patterns. The models we propose are largely borrowed from the language modeling
literature, and adapted to the vision domain by quantizing the space of image
patches into a large dictionary. We demonstrate the approach on both a filling
and a generation task. For the first time, we show that, after training on
natural videos, such a model can predict non-trivial motions over short video
sequences.
| MarcAurelio Ranzato, Arthur Szlam, Joan Bruna, Michael Mathieu, Ronan
Collobert, Sumit Chopra | null | 1412.6604 | null | null |
Competing with the Empirical Risk Minimizer in a Single Pass | stat.ML cs.LG | In many estimation problems, e.g. linear and logistic regression, we wish to
minimize an unknown objective given only unbiased samples of the objective
function. Furthermore, we aim to achieve this using as few samples as possible.
In the absence of computational constraints, the minimizer of a sample average
of observed data -- commonly referred to as either the empirical risk minimizer
(ERM) or the $M$-estimator -- is widely regarded as the estimation strategy of
choice due to its desirable statistical convergence properties. Our goal in
this work is to perform as well as the ERM, on every problem, while minimizing
the use of computational resources such as running time and space usage.
We provide a simple streaming algorithm which, under standard regularity
assumptions on the underlying problem, enjoys the following properties:
* The algorithm can be implemented in linear time with a single pass of the
observed data, using space linear in the size of a single sample.
* The algorithm achieves the same statistical rate of convergence as the
empirical risk minimizer on every problem, even considering constant factors.
* The algorithm's performance depends on the initial error at a rate that
decreases super-polynomially.
* The algorithm is easily parallelizable.
Moreover, we quantify the (finite-sample) rate at which the algorithm becomes
competitive with the ERM.
| Roy Frostig, Rong Ge, Sham M. Kakade, Aaron Sidford | null | 1412.6606 | null | null |
Scoring and Classifying with Gated Auto-encoders | cs.LG cs.NE | Auto-encoders are perhaps the best-known non-probabilistic methods for
representation learning. They are conceptually simple and easy to train. Recent
theoretical work has shed light on their ability to capture manifold structure,
and drawn connections to density modelling. This has motivated researchers to
seek ways of auto-encoder scoring, which has furthered their use in
classification. Gated auto-encoders (GAEs) are an interesting and flexible
extension of auto-encoders which can learn transformations among different
images or pixel covariances within images. However, they have been much less
studied, theoretically or empirically. In this work, we apply a dynamical
systems view to GAEs, deriving a scoring function, and drawing connections to
Restricted Boltzmann Machines. On a set of deep learning benchmarks, we also
demonstrate their effectiveness for single and multi-label classification.
| Daniel Jiwoong Im and Graham W. Taylor | null | 1412.6610 | null | null |
In Search of the Real Inductive Bias: On the Role of Implicit
Regularization in Deep Learning | cs.LG cs.AI cs.CV stat.ML | We present experiments demonstrating that some other form of capacity
control, different from network size, plays a central role in learning
multilayer feed-forward networks. We argue, partially through analogy to matrix
factorization, that this is an inductive bias that can help shed light on deep
learning.
| Behnam Neyshabur, Ryota Tomioka, Nathan Srebro | null | 1412.6614 | null | null |
Explorations on high dimensional landscapes | stat.ML cs.LG | Finding minima of a real valued non-convex function over a high dimensional
space is a major challenge in science. We provide evidence that some such
functions that are defined on high dimensional domains have a narrow band of
values whose pre-image contains the bulk of its critical points. This is in
contrast with the low dimensional picture in which this band is wide. Our
simulations agree with the previous theoretical work on spin glasses that
proves the existence of such a band when the dimension of the domain tends to
infinity. Furthermore our experiments on teacher-student networks with the
MNIST dataset establish a similar phenomenon in deep networks. We finally
observe that both the gradient descent and the stochastic gradient descent
methods can reach this level within the same number of steps.
| Levent Sagun, V. Ugur Guney, Gerard Ben Arous, Yann LeCun | null | 1412.6615 | null | null |
Outperforming Word2Vec on Analogy Tasks with Random Projections | cs.CL cs.LG | We present a distributed vector representation based on a simplification of
the BEAGLE system, designed in the context of the Sigma cognitive architecture.
Our method does not require gradient-based training of neural networks, matrix
decompositions as with LSA, or convolutions as with BEAGLE. All that is
involved is a sum of random vectors and their pointwise products. Despite the
simplicity of this technique, it gives state-of-the-art results on analogy
problems, in most cases better than Word2Vec. To explain this success, we
interpret it as a dimension reduction via random projection.
| Abram Demski, Volkan Ustun, Paul Rosenbloom, Cody Kommers | null | 1412.6616 | null | null |
Understanding Minimum Probability Flow for RBMs Under Various Kinds of
Dynamics | cs.LG | Energy-based models are popular in machine learning due to the elegance of
their formulation and their relationship to statistical physics. Among these,
the Restricted Boltzmann Machine (RBM), and its staple training algorithm
contrastive divergence (CD), have been the prototype for some recent
advancements in the unsupervised training of deep neural networks. However, CD
has limited theoretical motivation, and can in some cases produce undesirable
behavior. Here, we investigate the performance of Minimum Probability Flow
(MPF) learning for training RBMs. Unlike CD, with its focus on approximating an
intractable partition function via Gibbs sampling, MPF proposes a tractable,
consistent, objective function defined in terms of a Taylor expansion of the KL
divergence with respect to sampling dynamics. Here we propose a more general
form for the sampling dynamics in MPF, and explore the consequences of
different choices for these dynamics for training RBMs. Experimental results
show MPF outperforming CD for various RBM configurations.
| Daniel Jiwoong Im, Ethan Buchman, Graham W. Taylor | null | 1412.6617 | null | null |
Permutohedral Lattice CNNs | cs.CV cs.LG cs.NE | This paper presents a convolutional layer that is able to process sparse
input features. As an example, for image recognition problems this allows an
efficient filtering of signals that do not lie on a dense grid (like pixel
position), but of more general features (such as color values). The presented
algorithm makes use of the permutohedral lattice data structure. The
permutohedral lattice was introduced to efficiently implement a bilateral
filter, a commonly used image processing operation. Its use allows for a
generalization of the convolution type found in current (spatial) convolutional
network architectures.
| Martin Kiefel, Varun Jampani and Peter V. Gehler | null | 1412.6618 | null | null |
Why does Deep Learning work? - A perspective from Group Theory | cs.LG cs.NE stat.ML | Why does Deep Learning work? What representations does it capture? How do
higher-order representations emerge? We study these questions from the
perspective of group theory, thereby opening a new approach towards a theory of
Deep learning.
One factor behind the recent resurgence of the subject is a key algorithmic
step called pre-training: first search for a good generative model for the
input samples, and repeat the process one layer at a time. We show deeper
implications of this simple principle, by establishing a connection with the
interplay of orbits and stabilizers of group actions. Although the neural
networks themselves may not form groups, we show the existence of {\em shadow}
groups whose elements serve as close approximations.
Over the shadow groups, the pre-training step, originally introduced as a
mechanism to better initialize a network, becomes equivalent to a search for
features with minimal orbits. Intuitively, these features are in a way the {\em
simplest}. This explains why a deep learning network learns simple features
first. Next, we show how the same principle, when repeated in the deeper
layers, can capture higher order representations, and why representation
complexity increases as the layers get deeper.
| Arnab Paul, Suresh Venkatasubramanian | null | 1412.6621 | null | null |
Deep metric learning using Triplet network | cs.LG cs.CV stat.ML | Deep learning has proven itself as a successful set of models for learning
useful semantic representations of data. These, however, are mostly implicitly
learned as part of a classification task. In this paper we propose the triplet
network model, which aims to learn useful representations by distance
comparisons. A similar model was defined by Wang et al. (2014), tailor made for
learning a ranking for image information retrieval. Here we demonstrate using
various datasets that our model learns a better representation than that of its
immediate competitor, the Siamese network. We also discuss future possible
usage as a framework for unsupervised learning.
| Elad Hoffer, Nir Ailon | null | 1412.6622 | null | null |
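
A minimal numpy sketch of learning by distance comparisons. A margin-based triplet loss is used here for brevity; the paper's exact comparator built on the two anchor-positive and anchor-negative distances differs, and the stand-in embedding network and margin are illustrative.

    import numpy as np

    def triplet_margin_loss(anchor, positive, negative, margin=1.0):
        """Encourage d(anchor, positive) + margin <= d(anchor, negative)."""
        d_pos = np.sum((anchor - positive) ** 2, axis=1)
        d_neg = np.sum((anchor - negative) ** 2, axis=1)
        return np.mean(np.maximum(0.0, d_pos - d_neg + margin))

    rng = np.random.default_rng(0)
    embed = lambda x, W: np.tanh(x @ W)           # stand-in for the shared network
    W = rng.normal(scale=0.1, size=(32, 8))
    x_a, x_p, x_n = (rng.normal(size=(16, 32)) for _ in range(3))
    print(triplet_margin_loss(embed(x_a, W), embed(x_p, W), embed(x_n, W)))
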
Word Representations via Gaussian Embedding | cs.CL cs.LG | Current work in lexical distributed representations maps each word to a point
vector in low-dimensional space. Mapping instead to a density provides many
interesting advantages, including better capturing uncertainty about a
representation and its relationships, expressing asymmetries more naturally
than dot product or cosine similarity, and enabling more expressive
parameterization of decision boundaries. This paper advocates for density-based
distributed embeddings and presents a method for learning representations in
the space of Gaussian distributions. We compare performance on various word
embedding benchmarks, investigate the ability of these embeddings to model
entailment and other asymmetric relationships, and explore novel properties of
the representation.
| Luke Vilnis, Andrew McCallum | null | 1412.6623 | null | null |
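
A minimal numpy sketch of the kind of asymmetric similarity that densities make available: the KL divergence between two diagonal Gaussians. The toy "animal"/"dog" parameters are illustrative; the paper learns the means and variances from corpora.

    import numpy as np

    def kl_diag_gaussians(mu0, var0, mu1, var1):
        """KL( N(mu0, diag var0) || N(mu1, diag var1) ); asymmetric by construction."""
        return 0.5 * np.sum(var0 / var1 + (mu1 - mu0) ** 2 / var1
                            - 1.0 + np.log(var1 / var0))

    # A broad "animal" density vs. a narrow "dog" density: KL(dog || animal) is
    # small while KL(animal || dog) is large, the kind of asymmetry that can
    # encode entailment-like relations.
    mu_animal, var_animal = np.zeros(5), np.full(5, 4.0)
    mu_dog, var_dog = np.full(5, 0.3), np.full(5, 0.25)
    print(kl_diag_gaussians(mu_dog, var_dog, mu_animal, var_animal))
    print(kl_diag_gaussians(mu_animal, var_animal, mu_dog, var_dog))
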
Neural Network Regularization via Robust Weight Factorization | cs.LG cs.NE stat.ML | Regularization is essential when training large neural networks. As deep
neural networks can be mathematically interpreted as universal function
approximators, they are effective at memorizing sampling noise in the training
data. This results in poor generalization to unseen data. Therefore, it is no
surprise that a new regularization technique, Dropout, was partially
responsible for the now-ubiquitous winning entry to ImageNet 2012 by the
University of Toronto. Currently, Dropout (and related methods such as
DropConnect) are the most effective means of regularizing large neural
networks. These amount to efficiently visiting a large number of related models
at training time, while aggregating them to a single predictor at test time.
The proposed FaMe model aims to apply a similar strategy, yet learns a
factorization of each weight matrix such that the factors are robust to noise.
| Jan Rudy, Weiguang Ding, Daniel Jiwoong Im, Graham W. Taylor | null | 1412.6630 | null | null |
Deep Captioning with Multimodal Recurrent Neural Networks (m-RNN) | cs.CV cs.CL cs.LG | In this paper, we present a multimodal Recurrent Neural Network (m-RNN) model
for generating novel image captions. It directly models the probability
distribution of generating a word given previous words and an image. Image
captions are generated by sampling from this distribution. The model consists
of two sub-networks: a deep recurrent neural network for sentences and a deep
convolutional network for images. These two sub-networks interact with each
other in a multimodal layer to form the whole m-RNN model. The effectiveness of
our model is validated on four benchmark datasets (IAPR TC-12, Flickr 8K,
Flickr 30K and MS COCO). Our model outperforms the state-of-the-art methods. In
addition, we apply the m-RNN model to retrieval tasks for retrieving images or
sentences, and achieve a significant performance improvement over the
state-of-the-art methods which directly optimize the ranking objective function
for retrieval. The project page of this work is:
www.stat.ucla.edu/~junhua.mao/m-RNN.html .
| Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, Zhiheng Huang, Alan Yuille | null | 1412.6632 | null | null |
Weakly Supervised Multi-Embeddings Learning of Acoustic Models | cs.SD cs.CL cs.LG | We trained a Siamese network with multi-task same/different information on a
speech dataset, and found that it was possible to share a network for both
tasks without a loss in performance. The first task was to discriminate between
two same or different words, and the second was to discriminate between two
same or different talkers.
| Gabriel Synnaeve, Emmanuel Dupoux | null | 1412.6645 | null | null |
Incremental Adaptation Strategies for Neural Network Language Models | cs.NE cs.CL cs.LG | It is today acknowledged that neural network language models outperform
backoff language models in applications like speech recognition or statistical
machine translation. However, training these models on large amounts of data
can take several days. We present efficient techniques to adapt a neural
network language model to new data. Instead of training a completely new model
or relying on mixture approaches, we propose two new methods: continued
training on resampled data or insertion of adaptation layers. We present
experimental results in a CAT environment where the post-edits of professional
translators are used to improve an SMT system. Both methods are very fast and
achieve significant improvements without overfitting the small adaptation data.
| Aram Ter-Sarkisov, Holger Schwenk, Loic Barrault and Fethi Bougares | null | 1412.6650 | null | null |
Deep learning with Elastic Averaging SGD | cs.LG stat.ML | We study the problem of stochastic optimization for deep learning in the
parallel computing environment under communication constraints. A new algorithm
is proposed in this setting where the communication and coordination of work
among concurrent processes (local workers), is based on an elastic force which
links the parameters they compute with a center variable stored by the
parameter server (master). The algorithm enables the local workers to perform
more exploration, i.e. the algorithm allows the local variables to fluctuate
further from the center variable by reducing the amount of communication
between local workers and the master. We empirically demonstrate that in the
deep learning setting, due to the existence of many local optima, allowing more
exploration can lead to improved performance. We propose synchronous and
asynchronous variants of the new algorithm. We provide the stability analysis
of the asynchronous variant in the round-robin scheme and compare it with the
more common parallelized method ADMM. We show that the stability of EASGD is
guaranteed when a simple stability condition is satisfied, which is not the
case for ADMM. We additionally propose the momentum-based version of our
algorithm that can be applied in both synchronous and asynchronous settings.
Asynchronous variant of the algorithm is applied to train convolutional neural
networks for image classification on the CIFAR and ImageNet datasets.
Experiments demonstrate that the new algorithm accelerates the training of deep
architectures compared to DOWNPOUR and other common baseline approaches and
furthermore is very communication efficient.
| Sixin Zhang, Anna Choromanska, Yann LeCun | null | 1412.6651 | null | null |
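
A minimal single-process numpy simulation of the elastic-averaging update on a toy quadratic: each local worker takes a noisy gradient step and is pulled toward the center variable, while the center moves toward the workers. The coupling constant, learning rate, and objective are illustrative, and no actual parallelism or parameter server is involved.

    import numpy as np

    rng = np.random.default_rng(0)
    n_workers, dim = 4, 10
    eta, alpha = 0.05, 0.05          # learning rate and elastic coupling strength
    target = rng.normal(size=dim)    # minimizer of the toy quadratic objective

    def grad(x):
        """Gradient of 0.5 * ||x - target||^2 plus noise (stand-in for a minibatch)."""
        return (x - target) + 0.1 * rng.normal(size=dim)

    workers = [rng.normal(size=dim) for _ in range(n_workers)]
    center = np.zeros(dim)

    for step in range(500):
        for i in range(n_workers):
            # Local step: gradient descent plus an elastic pull toward the center.
            workers[i] = workers[i] - eta * grad(workers[i]) - alpha * (workers[i] - center)
        # Center step: move toward the current workers.
        center = center + alpha * sum(w - center for w in workers)

    print(np.linalg.norm(center - target))
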
Implicit Temporal Differences | stat.ML cs.LG | In reinforcement learning, the TD($\lambda$) algorithm is a fundamental
policy evaluation method with an efficient online implementation that is
suitable for large-scale problems. One practical drawback of TD($\lambda$) is
its sensitivity to the choice of the step-size. It is an empirically well-known
fact that a large step-size leads to fast convergence, at the cost of higher
variance and risk of instability. In this work, we introduce the implicit
TD($\lambda$) algorithm which has the same function and computational cost as
TD($\lambda$), but is significantly more stable. We provide a theoretical
explanation of this stability and an empirical evaluation of implicit
TD($\lambda$) on typical benchmark tasks. Our results show that implicit
TD($\lambda$) outperforms standard TD($\lambda$) and a state-of-the-art method
that automatically tunes the step-size, and thus shows promise for wide
applicability.
| Aviv Tamar, Panos Toulis, Shie Mannor, Edoardo M. Airoldi | null | 1412.6734 | null | null |
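
For reference, a minimal numpy sketch of standard online TD(lambda) with accumulating eligibility traces, the baseline the abstract compares against; the implicit variant replaces the parameter update with an implicit step that is less sensitive to the step-size and is not shown here. The feature map, step-size, and toy episode are illustrative.

    import numpy as np

    def td_lambda_evaluation(episodes, phi, alpha=0.05, gamma=0.99, lam=0.8, dim=8):
        """Standard online TD(lambda) policy evaluation with linear features.

        episodes: iterable of [(state, reward, next_state, done), ...] transitions.
        phi: feature map from states to R^dim.
        """
        theta = np.zeros(dim)
        for episode in episodes:
            e = np.zeros(dim)                       # eligibility trace
            for s, r, s_next, done in episode:
                v = theta @ phi(s)
                v_next = 0.0 if done else theta @ phi(s_next)
                delta = r + gamma * v_next - v      # TD error
                e = gamma * lam * e + phi(s)
                theta = theta + alpha * delta * e
        return theta

    # Tiny chain example with one-hot state features.
    phi = lambda s: np.eye(8)[s]
    episode = [(3, 0.0, 4, False), (4, 0.0, 5, False), (5, 1.0, 5, True)]
    print(td_lambda_evaluation([episode] * 200, phi))
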
Locally Weighted Learning for Naive Bayes Classifier | stat.ML cs.LG | As a consequence of the strong and usually violated conditional independence
assumption (CIA) of naive Bayes (NB) classifier, the performance of NB becomes
less and less favorable compared to sophisticated classifiers when the sample
size increases. We learn from this phenomenon that when the size of the
training data is large, we should either relax the assumption or apply NB to a
"reduced" data set, say for example use NB as a local model. The latter
approach trades the ignored information for the robustness to the model
assumption. In this paper, we consider using NB as a model for locally weighted
data. A special weighting function is designed so that if CIA holds for the
unweighted data, it also holds for the weighted data. The new method is
intuitive and capable of handling class imbalance. It is theoretically more
sound than the locally weighted learners of naive Bayes that base
classification only on the $k$ nearest neighbors. Empirical study shows that
the new method with appropriate choice of parameter outperforms seven existing
classifiers of similar nature.
| Kim-Hung Li and Cheuk Ting Li | null | 1412.6741 | null | null |
Correlation of Data Reconstruction Error and Shrinkages in Pair-wise
Distances under Principal Component Analysis (PCA) | cs.LG stat.ML | In this on-going work, I explore certain theoretical and empirical
implications of data transformations under the PCA. In particular, I state and
prove three theorems about PCA, which I paraphrase as follows: 1). PCA without
discarding eigenvector rows is injective, but loses this injectivity when
eigenvector rows are discarded. 2). PCA without discarding eigenvector rows
preserves pair-wise distances, but tends to cause pair-wise distances to shrink
when eigenvector rows are discarded. 3). For any pair of points, the shrinkage
in pair-wise distance is bounded above by an L1 norm reconstruction error
associated with the points. Clearly, 3). suggests that there might exist some
correlation between shrinkages in pair-wise distances and mean square
reconstruction error which is defined as the sum of those eigenvalues
associated with the discarded eigenvectors. I therefore decided to perform
numerical experiments to obtain the correlation between the sum of those
eigenvalues and shrinkages in pair-wise distances. In addition, I have also
performed some experiments to check respectively the effect of the sum of those
eigenvalues and the effect of the shrinkages on classification accuracies under
the PCA map. So far, I have obtained the following results on some publicly
available data from the UCI Machine Learning Repository: 1). There seems to be
a strong cor- relation between the sum of those eigenvalues associated with
discarded eigenvectors and shrinkages in pair-wise distances. 2). Neither the
sum of those eigenvalues nor pair-wise distances have any strong correlations
with classification accuracies.
| Abdulrahman Oladipupo Ibraheem | null | 1412.6752 | null | null |
Principal Sensitivity Analysis | stat.ML cs.LG | We present a novel algorithm (Principal Sensitivity Analysis; PSA) to analyze
the knowledge of the classifier obtained from supervised machine learning
techniques. In particular, we define principal sensitivity map (PSM) as the
direction on the input space to which the trained classifier is most sensitive,
and use analogously defined k-th PSM to define a basis for the input space. We
train neural networks with artificial data and real data, and apply the
algorithm to the obtained supervised classifiers. We then visualize the PSMs to
demonstrate the PSA's ability to decompose the knowledge acquired by the
trained classifiers.
| Sotetsu Koyamada and Masanori Koyama and Ken Nakae and Shin Ishii | 10.1007/978-3-319-18038-0_48 | 1412.6785 | null | null |
Striving for Simplicity: The All Convolutional Net | cs.LG cs.CV cs.NE | Most modern convolutional neural networks (CNNs) used for object recognition
are built using the same principles: Alternating convolution and max-pooling
layers followed by a small number of fully connected layers. We re-evaluate the
state of the art for object recognition from small images with convolutional
networks, questioning the necessity of different components in the pipeline. We
find that max-pooling can simply be replaced by a convolutional layer with
increased stride without loss in accuracy on several image recognition
benchmarks. Following this finding -- and building on other recent work for
finding simple network structures -- we propose a new architecture that
consists solely of convolutional layers and yields competitive or state of the
art performance on several object recognition datasets (CIFAR-10, CIFAR-100,
ImageNet). To analyze the network we introduce a new variant of the
"deconvolution approach" for visualizing features learned by CNNs, which can be
applied to a broader range of network structures than existing approaches.
| Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, Martin
Riedmiller | null | 1412.6806 | null | null |
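
A minimal numpy sketch of the architectural substitution discussed above: a 3x3 convolution with stride 2 produces the same downsampling as a stride-1 convolution followed by 2x2 max-pooling. Only output shapes are compared; the kernels here are random and single-channel, whereas the paper's replacement layers are learned multi-channel convolutions.

    import numpy as np

    def conv2d(x, k, stride=1, pad=1):
        """Single-channel, zero-padded 2D convolution (cross-correlation, CNN-style)."""
        x = np.pad(x, pad)
        kh, kw = k.shape
        out_h = (x.shape[0] - kh) // stride + 1
        out_w = (x.shape[1] - kw) // stride + 1
        out = np.zeros((out_h, out_w))
        for i in range(out_h):
            for j in range(out_w):
                patch = x[i * stride:i * stride + kh, j * stride:j * stride + kw]
                out[i, j] = np.sum(patch * k)
        return out

    def maxpool2x2(x):
        h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
        x = x[:h, :w]
        return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

    rng = np.random.default_rng(0)
    img, kernel = rng.normal(size=(32, 32)), rng.normal(size=(3, 3))
    a = maxpool2x2(conv2d(img, kernel, stride=1))   # conv + max-pool
    b = conv2d(img, kernel, stride=2)               # all-convolutional replacement
    print(a.shape, b.shape)                          # both (16, 16)
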
Learning the nonlinear geometry of high-dimensional data: Models and
algorithms | stat.ML cs.CV cs.LG | Modern information processing relies on the axiom that high-dimensional data
lie near low-dimensional geometric structures. This paper revisits the problem
of data-driven learning of these geometric structures and puts forth two new
nonlinear geometric models for data describing "related" objects/phenomena. The
first one of these models straddles the two extremes of the subspace model and
the union-of-subspaces model, and is termed the metric-constrained
union-of-subspaces (MC-UoS) model. The second one of these models---suited for
data drawn from a mixture of nonlinear manifolds---generalizes the kernel
subspace model, and is termed the metric-constrained kernel union-of-subspaces
(MC-KUoS) model. The main contributions of this paper in this regard include
the following. First, it motivates and formalizes the problems of MC-UoS and
MC-KUoS learning. Second, it presents algorithms that efficiently learn an
MC-UoS or an MC-KUoS underlying data of interest. Third, it extends these
algorithms to the case when parts of the data are missing. Last, but not least,
it reports the outcomes of a series of numerical experiments involving both
synthetic and real data that demonstrate the superiority of the proposed
geometric models and learning algorithms over existing approaches in the
literature. These experiments also help clarify the connections between this
work and the literature on (subspace and kernel k-means) clustering.
| Tong Wu and Waheed U. Bajwa | 10.1109/TSP.2015.2469637 | 1412.6808 | null | null |
Extraction of Salient Sentences from Labelled Documents | cs.CL cs.IR cs.LG | We present a hierarchical convolutional document model with an architecture
designed to support introspection of the document structure. Using this model,
we show how to use visualisation techniques from the computer vision literature
to identify and extract topic-relevant sentences.
We also introduce a new scalable evaluation technique for automatic sentence
extraction systems that avoids the need for time consuming human annotation of
validation data.
| Misha Denil and Alban Demiraj and Nando de Freitas | null | 1412.6815 | null | null |
A Stable Multi-Scale Kernel for Topological Machine Learning | stat.ML cs.CV cs.LG math.AT | Topological data analysis offers a rich source of valuable information to
study vision problems. Yet, so far we lack a theoretically sound connection to
popular kernel-based learning techniques, such as kernel SVMs or kernel PCA. In
this work, we establish such a connection by designing a multi-scale kernel for
persistence diagrams, a stable summary representation of topological features
in data. We show that this kernel is positive definite and prove its stability
with respect to the 1-Wasserstein distance. Experiments on two benchmark
datasets for 3D shape classification/retrieval and texture recognition show
considerable performance gains of the proposed method compared to an
alternative approach that is based on the recently introduced persistence
landscapes.
| Jan Reininghaus, Stefan Huber, Ulrich Bauer, Roland Kwitt | null | 1412.6821 | null | null |
Learning Activation Functions to Improve Deep Neural Networks | cs.NE cs.CV cs.LG stat.ML | Artificial neural networks typically have a fixed, non-linear activation
function at each neuron. We have designed a novel form of piecewise linear
activation function that is learned independently for each neuron using
gradient descent. With this adaptive activation function, we are able to
improve upon deep neural network architectures composed of static rectified
linear units, achieving state-of-the-art performance on CIFAR-10 (7.51%),
CIFAR-100 (30.83%), and a benchmark from high-energy physics involving Higgs
boson decay modes.
| Forest Agostinelli, Matthew Hoffman, Peter Sadowski, Pierre Baldi | null | 1412.6830 | null | null |
Contour Detection Using Cost-Sensitive Convolutional Neural Networks | cs.CV cs.LG cs.NE | We address the problem of contour detection via per-pixel classification of
edge points. To facilitate the process, the proposed approach leverages
DenseNet, an efficient implementation of multiscale convolutional neural
networks (CNNs), to extract an informative feature vector for each pixel and
uses an SVM classifier to accomplish contour detection. The main challenge lies
in adapting a pre-trained per-image CNN model for yielding per-pixel image
features. We propose to base on the DenseNet architecture to achieve pixelwise
fine-tuning and then consider a cost-sensitive strategy to further improve the
learning with a small dataset of edge and non-edge image patches. In the
experiment of contour detection, we look into the effectiveness of combining
per-pixel features from different CNN layers and obtain comparable performances
to the state-of-the-art on BSDS500.
| Jyh-Jing Hwang and Tyng-Luh Liu | null | 1412.6857 | null | null |
On Learning Vector Representations in Hierarchical Label Spaces | cs.LG cs.CL stat.ML | An important problem in multi-label classification is to capture label
patterns or underlying structures that have an impact on such patterns. This
paper addresses one such problem, namely how to exploit hierarchical structures
over labels. We present a novel method to learn vector representations of a
label space given a hierarchy of labels and label co-occurrence patterns. Our
experimental results demonstrate qualitatively that the proposed method is able
to learn regularities among labels by exploiting a label hierarchy as well as
label co-occurrences. It highlights the importance of the hierarchical
information in order to obtain regularities which facilitate analogical
reasoning over a label space. We also experimentally illustrate the dependency
of the learned representations on the label hierarchy.
| Jinseok Nam and Johannes F\"urnkranz | null | 1412.6881 | null | null |
Half-CNN: A General Framework for Whole-Image Regression | cs.CV cs.LG cs.NE | The Convolutional Neural Network (CNN) has achieved great success in image
classification. The classification model can also be utilized at image or patch
level for many other applications, such as object detection and segmentation.
In this paper, we propose a whole-image CNN regression model, by removing the
full connection layer and training the network with continuous feature maps.
This is a generic regression framework that fits many applications. We
demonstrate this method through two tasks: simultaneous face detection &
segmentation, and scene saliency prediction. The result is comparable with
other models in the respective fields, using only a small scale network. Since
the regression model is trained on corresponding image / feature map pairs,
there are no requirements on uniform input size as opposed to the
classification model. Our framework avoids classifier design, a process that
may introduce too much manual intervention in model development. Yet, it is
highly correlated to the classification network and offers some in-deep review
of CNN structures.
| Jun Yuan, Bingbing Ni, Ashraf A.Kassim | null | 1412.6885 | null | null |
Adam: A Method for Stochastic Optimization | cs.LG | We introduce Adam, an algorithm for first-order gradient-based optimization
of stochastic objective functions, based on adaptive estimates of lower-order
moments. The method is straightforward to implement, is computationally
efficient, has little memory requirements, is invariant to diagonal rescaling
of the gradients, and is well suited for problems that are large in terms of
data and/or parameters. The method is also appropriate for non-stationary
objectives and problems with very noisy and/or sparse gradients. The
hyper-parameters have intuitive interpretations and typically require little
tuning. Some connections to related algorithms, on which Adam was inspired, are
discussed. We also analyze the theoretical convergence properties of the
algorithm and provide a regret bound on the convergence rate that is comparable
to the best known results under the online convex optimization framework.
Empirical results demonstrate that Adam works well in practice and compares
favorably to other stochastic optimization methods. Finally, we discuss AdaMax,
a variant of Adam based on the infinity norm.
| Diederik P. Kingma and Jimmy Ba | null | 1412.6980 | null | null |
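
A minimal numpy sketch of the Adam update described above, with the commonly quoted default hyper-parameters; the poorly scaled quadratic used to exercise it is illustrative.

    import numpy as np

    def adam_minimize(grad, x0, steps=1000, alpha=0.001,
                      beta1=0.9, beta2=0.999, eps=1e-8):
        """Adam: adaptive moment estimation with bias-corrected first/second moments."""
        x = np.array(x0, dtype=float)
        m = np.zeros_like(x)   # first moment (running mean of gradients)
        v = np.zeros_like(x)   # second moment (running mean of squared gradients)
        for t in range(1, steps + 1):
            g = grad(x)
            m = beta1 * m + (1 - beta1) * g
            v = beta2 * v + (1 - beta2) * g ** 2
            m_hat = m / (1 - beta1 ** t)       # bias correction
            v_hat = v / (1 - beta2 ** t)
            x -= alpha * m_hat / (np.sqrt(v_hat) + eps)
        return x

    # Minimize a poorly scaled quadratic; Adam's per-coordinate rescaling copes
    # with the very different gradient magnitudes across dimensions.
    scales = np.array([1.0, 100.0, 0.01])
    print(adam_minimize(lambda x: 2 * scales * x, np.ones(3), steps=5000))
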
A Bayesian encourages dropout | cs.LG cs.NE stat.ML | Dropout is one of the key techniques to prevent the learning from
overfitting. It is explained that dropout works as a kind of modified L2
regularization. Here, we shed light on the dropout from Bayesian standpoint.
Bayesian interpretation enables us to optimize the dropout rate, which is
beneficial for learning of weight parameters and prediction after learning. The
experiment result also encourages the optimization of the dropout.
| Shin-ichi Maeda | null | 1412.7003 | null | null |
Tailoring Word Embeddings for Bilexical Predictions: An Experimental
Comparison | cs.CL cs.LG | We investigate the problem of inducing word embeddings that are tailored for
a particular bilexical relation. Our learning algorithm takes an existing
lexical vector space and compresses it such that the resulting word embeddings
are good predictors for a target bilexical relation. In experiments we show
that task-specific embeddings can benefit both the quality and efficiency in
lexical prediction tasks.
| Pranava Swaroop Madhyastha, Xavier Carreras, Ariadna Quattoni | null | 1412.7004 | null | null |