title (string) | categories (string) | abstract (string) | authors (string) | doi (string) | id (string) | year (float64) | venue (string, 13 classes)
---|---|---|---|---|---|---|---|
Classifiers With a Reject Option for Early Time-Series Classification | cs.CV cs.LG | Early classification of time-series data in a dynamic environment is a
challenging problem of great importance in signal processing. This paper
proposes a classifier architecture with a reject option capable of online
decision making without the need to wait for the entire time series signal to
be present. The main idea is to classify an odor/gas signal with an acceptable
accuracy as early as possible. Instead of using the posterior probability of a
classifier, the proposed method uses the "agreement" of an ensemble to decide
whether to accept or reject the candidate label. The introduced algorithm is
applied to the bio-chemistry problem of odor classification to build a novel
Electronic-Nose called Forefront-Nose. Experimental results on a wind tunnel
test-bed facility confirm the robustness of the Forefront-Nose compared to
standard classifiers from both earliness and recognition perspectives.
| Nima Hatami and Camelia Chira | null | 1312.3989 | null | null |
ECOC-Based Training of Neural Networks for Face Recognition | cs.CV cs.LG | Error Correcting Output Codes, ECOC, is an output representation method
capable of discovering some of the errors produced in classification tasks.
This paper describes the application of ECOC to the training of feed forward
neural networks, FFNN, for improving the overall accuracy of classification
systems. Indeed, to improve the generalization of FFNN classifiers, this paper
proposes an ECOC-based training method for neural networks that uses ECOC as the
output representation and adopts the traditional Back-Propagation algorithm,
BP, to adjust the weights of the network. Experimental results for the face
recognition problem on the Yale database demonstrate the effectiveness of our method. With a
rejection scheme defined by a simple robustness rate, high reliability is
achieved in this application.
| Nima Hatami, Reza Ebrahimpour, Reza Ghaderi | 10.1109/ICCIS.2008.4670763 | 1312.3990 | null | null |
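A minimal, hedged sketch of the ECOC idea described above, using scikit-learn's generic ECOC wrapper around a small back-propagation-trained feed-forward network. The dataset (digits instead of Yale faces), the hidden-layer size, and the code length are placeholder choices, not the authors' setup.

```python
# Illustrative ECOC-based training of feed-forward networks (not the paper's code).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OutputCodeClassifier
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Each bit of the ECOC codeword is learned by one small BP-trained network.
ecoc = OutputCodeClassifier(
    estimator=MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
    code_size=1.5,   # codeword length = 1.5 * n_classes
    random_state=0,
)
ecoc.fit(X_tr, y_tr)
print("test accuracy:", ecoc.score(X_te, y_te))
```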
Domain adaptation for sequence labeling using hidden Markov models | cs.CL cs.LG | Most natural language processing systems based on machine learning are not
robust to domain shift. For example, a state-of-the-art syntactic dependency
parser trained on Wall Street Journal sentences has an absolute drop in
performance of more than ten points when tested on textual data from the Web.
An efficient solution to make these methods more robust to domain shift is to
first learn a word representation using large amounts of unlabeled data from
both domains, and then use this representation as features in a supervised
learning algorithm. In this paper, we propose to use hidden Markov models to
learn word representations for part-of-speech tagging. In particular, we study
the influence of using data from the source, the target or both domains to
learn the representation and the different ways to represent words using an
HMM.
| Edouard Grave (LIENS, INRIA Paris - Rocquencourt), Guillaume Obozinski
(LIGM), Francis Bach (LIENS, INRIA Paris - Rocquencourt) | null | 1312.4092 | null | null |
A MapReduce based distributed SVM algorithm for binary classification | cs.LG cs.DC | Although Support Vector Machine (SVM) algorithm has a high generalization
property to classify for unseen examples after training phase and it has small
loss value, the algorithm is not suitable for real-life classification and
regression problems: SVMs cannot handle hundreds of thousands of examples in the
training dataset. In previous studies on distributed machine learning
algorithms, SVM is trained over a costly and preconfigured computer
environment. In this research, we present a MapReduce based distributed
parallel SVM training algorithm for binary classification problems. This work
shows how to distribute optimization problem over cloud computing systems with
MapReduce technique. In the second step of this work, we used statistical
learning theory to find the predictive hypothesis that minimize our empirical
risks from hypothesis spaces that created with reduce function of MapReduce.
The results of this research are important for training of big datasets for SVM
algorithm based classification problems. We show that, with iterative training of
the split dataset using the MapReduce technique, the accuracy of the classifier
function converges to the accuracy of the globally optimal classifier function in a
finite number of iterations. The algorithm performance was measured on samples from the letter
recognition and pen-based recognition of handwritten digits datasets.
| Ferhat \"Ozg\"ur \c{C}atak, Mehmet Erdal Balaban | null | 1312.4108 | null | null |
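A single-machine sketch of the split-train-merge idea behind the MapReduce SVM described above; the real system distributes the "map" step across workers, and all hyperparameters and the synthetic data here are placeholders.

```python
# Toy split/merge SVM training loosely following the abstract above.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=3000, n_features=20, random_state=0)

def map_step(X_part, y_part):
    """Train an SVM on one data split and return its support vectors."""
    svm = SVC(kernel="linear", C=1.0).fit(X_part, y_part)
    return X_part[svm.support_], y_part[svm.support_]

def reduce_step(sv_list):
    """Pool support vectors from all splits and retrain a global SVM."""
    Xs = np.vstack([s[0] for s in sv_list])
    ys = np.concatenate([s[1] for s in sv_list])
    return SVC(kernel="linear", C=1.0).fit(Xs, ys), (Xs, ys)

splits = np.array_split(np.arange(len(y)), 4)
global_svm, pooled = None, None
for _ in range(3):  # a few iterations; the paper argues accuracy converges over iterations
    sv_list = []
    for idx in splits:
        X_part, y_part = X[idx], y[idx]
        if pooled is not None:          # feed previous global support vectors back in
            X_part = np.vstack([X_part, pooled[0]])
            y_part = np.concatenate([y_part, pooled[1]])
        sv_list.append(map_step(X_part, y_part))
    global_svm, pooled = reduce_step(sv_list)

print("training accuracy:", global_svm.score(X, y))
```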
Distributed k-means algorithm | cs.LG cs.DC | In this paper we provide a fully distributed implementation of the k-means
clustering algorithm, intended for wireless sensor networks where each agent is
endowed with a possibly high-dimensional observation (e.g., position, humidity,
temperature, etc.). The proposed algorithm, by means of one-hop communication,
partitions the agents into measure-dependent groups that have small in-group
and large out-group "distances". Since the partitions may not have a relation
with the topology of the network--members of the same clusters may not be
spatially close--the algorithm is provided with a mechanism to compute the
clusters' centroids even when the clusters are disconnected in several
sub-clusters. The results of the proposed distributed algorithm coincide, in
terms of minimization of the objective function, with the centralized k-means
algorithm. Some numerical examples illustrate the capabilities of the proposed
solution.
| Gabriele Oliva, Roberto Setola, and Christoforos N. Hadjicostis | null | 1312.4176 | null | null |
Feature Graph Architectures | cs.LG | In this article we propose feature graph architectures (FGA), which are deep
learning systems employing a structured initialisation and training method
based on a feature graph which facilitates improved generalisation performance
compared with a standard shallow architecture. The goal is to explore
alternative perspectives on the problem of deep network training. We evaluate
FGA performance for deep SVMs on some experimental datasets, and show how
generalisation and stability results may be derived for these models. We
describe the effect of permutations on the model accuracy, and give a criterion
for the optimal permutation in terms of feature correlations. The experimental
results show that the algorithm produces robust and significant test set
improvements over a standard shallow SVM training method for a range of
datasets. These gains are achieved with a moderate increase in time complexity.
| Richard Davis, Sanjay Chawla, Philip Leong | null | 1312.4209 | null | null |
Learning Factored Representations in a Deep Mixture of Experts | cs.LG | Mixtures of Experts combine the outputs of several "expert" networks, each of
which specializes in a different part of the input space. This is achieved by
training a "gating" network that maps each input to a distribution over the
experts. Such models show promise for building larger networks that are still
cheap to compute at test time, and more parallelizable at training time. In
this work, we extend the Mixture of Experts to a stacked model, the Deep
Mixture of Experts, with multiple sets of gating and experts. This
exponentially increases the number of effective experts by associating each
input with a combination of experts at each layer, yet maintains a modest model
size. On a randomly translated version of the MNIST dataset, we find that the
Deep Mixture of Experts automatically learns to develop location-dependent
("where") experts at the first layer, and class-specific ("what") experts at
the second layer. In addition, we see that the different combinations are in
use when the model is applied to a dataset of speech monophones. These
demonstrate effective use of all expert combinations.
| David Eigen, Marc'Aurelio Ranzato, Ilya Sutskever | null | 1312.4314 | null | null |
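A NumPy sketch of the forward pass of a two-layer (deep) mixture of experts as described above: at each layer a softmax gating network produces input-dependent weights that mix the outputs of several experts. Shapes, random weights, and the tanh experts are illustrative only.

```python
# Toy deep mixture-of-experts forward pass.
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def moe_layer(x, expert_ws, gate_w):
    """Mix expert outputs with input-dependent gating weights."""
    gates = softmax(x @ gate_w)                                       # (batch, n_experts)
    experts = np.stack([np.tanh(x @ w) for w in expert_ws], axis=1)   # (batch, n_experts, d_out)
    return (gates[..., None] * experts).sum(axis=1)                   # convex combination per input

d_in, d_hid, d_out, n_experts = 8, 16, 4, 3
x = rng.normal(size=(5, d_in))

layer1 = ([rng.normal(size=(d_in, d_hid)) for _ in range(n_experts)],
          rng.normal(size=(d_in, n_experts)))
layer2 = ([rng.normal(size=(d_hid, d_out)) for _ in range(n_experts)],
          rng.normal(size=(d_hid, n_experts)))

h = moe_layer(x, *layer1)   # first set of gates/experts ("where" in the paper)
y = moe_layer(h, *layer2)   # second set of gates/experts ("what" in the paper)
print(y.shape)              # (5, 4)
```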
Rectifying Self Organizing Maps for Automatic Concept Learning from Web
Images | cs.CV cs.LG cs.NE | We attack the problem of learning concepts automatically from noisy web image
search results. Going beyond low level attributes, such as colour and texture,
we explore weakly-labelled datasets for the learning of higher level concepts,
such as scene categories. The idea is based on discovering common
characteristics shared among subsets of images by posing a method that is able
to organise the data while eliminating irrelevant instances. We propose a novel
clustering and outlier detection method, namely Rectifying Self Organizing Maps
(RSOM). Given an image collection returned for a concept query, RSOM provides
clusters pruned from outliers. Each cluster is used to train a model
representing a different characteristic of the concept. The proposed method
outperforms the state-of-the-art studies on the task of learning low-level
concepts, and it is competitive in learning higher level concepts as well. It
is capable of working at large scale with no supervision by exploiting the
available sources.
| Eren Golge and Pinar Duygulu | null | 1312.4384 | null | null |
Network In Network | cs.NE cs.CV cs.LG | We propose a novel deep network structure called "Network In Network" (NIN)
to enhance model discriminability for local patches within the receptive field.
The conventional convolutional layer uses linear filters followed by a
nonlinear activation function to scan the input. Instead, we build micro neural
networks with more complex structures to abstract the data within the receptive
field. We instantiate the micro neural network with a multilayer perceptron,
which is a potent function approximator. The feature maps are obtained by
sliding the micro networks over the input in a similar manner as CNN; they are
then fed into the next layer. Deep NIN can be implemented by stacking multiple
of the above described structures. With enhanced local modeling via the micro
network, we are able to utilize global average pooling over feature maps in the
classification layer, which is easier to interpret and less prone to
overfitting than traditional fully connected layers. We demonstrated the
state-of-the-art classification performances with NIN on CIFAR-10 and
CIFAR-100, and reasonable performances on SVHN and MNIST datasets.
| Min Lin, Qiang Chen, Shuicheng Yan | null | 1312.4400 | null | null |
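The global average pooling classification layer mentioned above reduces each feature map to a single score; the short NumPy sketch below illustrates that step on placeholder activations (shapes are arbitrary, and this is not the authors' code).

```python
# Global average pooling in place of a fully connected classifier layer.
import numpy as np

feature_maps = np.random.rand(32, 10, 8, 8)    # (batch, n_classes, height, width)
class_scores = feature_maps.mean(axis=(2, 3))  # (batch, n_classes), no extra parameters

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

probs = softmax(class_scores)
print(probs.shape)  # (32, 10); one probability per class
```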
Learning Deep Representations By Distributed Random Samplings | cs.LG | In this paper, we propose an extremely simple deep model for the unsupervised
nonlinear dimensionality reduction -- deep distributed random samplings, which
performs like a stack of unsupervised bootstrap aggregating. First, its network
structure is novel: each layer of the network is a group of mutually
independent $k$-centers clusterings. Second, its learning method is extremely
simple: the $k$ centers of each clustering are only $k$ randomly selected
examples from the training data; for small-scale data sets, the $k$ centers are
further randomly reconstructed by a simple cyclic-shift operation. Experimental
results on nonlinear dimensionality reduction show that the proposed method can
learn abstract representations on both large-scale and small-scale problems,
and meanwhile is much faster than deep neural networks on large-scale problems.
| Xiao-Lei Zhang | null | 1312.4405 | null | null |
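A sketch of one layer of the method described above: a group of mutually independent k-centers clusterings whose centers are k randomly drawn training examples. The one-hot nearest-center output encoding and the concatenation across clusterings are assumptions made for illustration, not details confirmed by the abstract.

```python
# Illustrative "deep distributed random samplings" layer.
import numpy as np

rng = np.random.default_rng(0)

def ddrs_layer(X, n_clusterings=10, k=5):
    reps = []
    for _ in range(n_clusterings):
        centers = X[rng.choice(len(X), size=k, replace=False)]    # k random examples as centers
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # squared distances to centers
        onehot = np.eye(k)[d.argmin(axis=1)]                      # nearest-center indicator (assumed encoding)
        reps.append(onehot)
    return np.hstack(reps)                                        # concatenate the independent clusterings

X = rng.normal(size=(200, 30))
H1 = ddrs_layer(X)         # first-layer representation
H2 = ddrs_layer(H1)        # stack layers to deepen the representation
print(H1.shape, H2.shape)  # (200, 50) (200, 50)
```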
Optimization for Compressed Sensing: the Simplex Method and Kronecker
Sparsification | stat.ML cs.LG | In this paper we present two new approaches to efficiently solve large-scale
compressed sensing problems. These two ideas are independent of each other and
can therefore be used either separately or together. We consider all
possibilities.
For the first approach, we note that the zero vector can be taken as the
initial basic (infeasible) solution for the linear programming problem and
therefore, if the true signal is very sparse, some variants of the simplex
method can be expected to take only a small number of pivots to arrive at a
solution. We implemented one such variant and demonstrate a dramatic
improvement in computation time on very sparse signals.
The second approach requires a redesigned sensing mechanism in which the
vector signal is stacked into a matrix. This allows us to exploit the Kronecker
compressed sensing (KCS) mechanism. We show that the Kronecker sensing requires
stronger conditions for perfect recovery compared to the original vector
problem. However, the Kronecker sensing, modeled correctly, is a much sparser
linear optimization problem. Hence, algorithms that benefit from sparse problem
representation, such as interior-point methods, can solve the Kronecker sensing
problems much faster than the corresponding vector problem. In our numerical
studies, we demonstrate a ten-fold improvement in the computation time.
| Robert Vanderbei and Han Liu and Lie Wang and Kevin Lin | null | 1312.4426 | null | null |
Low-Rank Approximations for Conditional Feedforward Computation in Deep
Neural Networks | cs.LG | Scalability properties of deep neural networks raise key research questions,
particularly as the problems considered become larger and more challenging.
This paper expands on the idea of conditional computation introduced by Bengio,
et. al., where the nodes of a deep network are augmented by a set of gating
units that determine when a node should be calculated. By factorizing the
weight matrix into a low-rank approximation, an estimation of the sign of the
pre-nonlinearity activation can be efficiently obtained. For networks using
rectified-linear hidden units, this implies that the computation of a hidden
unit with an estimated negative pre-nonlinearity can be omitted altogether, as
its value will become zero when nonlinearity is applied. For sparse neural
networks, this can result in considerable speed gains. Experimental results
using the MNIST and SVHN data sets with a fully-connected deep neural network
demonstrate the performance robustness of the proposed scheme with respect to
the error introduced by the conditional computation process.
| Andrew Davis, Itamar Arel | null | 1312.4461 | null | null |
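A NumPy sketch of the conditional-computation idea above: a low-rank factorization of the weight matrix cheaply estimates the sign of each pre-activation, and rectified-linear units predicted to be negative are skipped. The truncated SVD stands in for whatever factorization is learned; sizes and rank are arbitrary.

```python
# Low-rank sign estimation for conditional feedforward computation.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank = 256, 512, 16

W = rng.normal(size=(d_in, d_out))
U_, s, Vt = np.linalg.svd(W, full_matrices=False)   # cheap stand-in for a learned low-rank factor
U, V = U_[:, :rank] * s[:rank], Vt[:rank, :]

x = rng.normal(size=(d_in,))
approx_pre = (x @ U) @ V                 # O(d_in*rank + rank*d_out) instead of O(d_in*d_out)
active = approx_pre > 0                  # units whose ReLU output is predicted to be nonzero

h = np.zeros(d_out)
h[active] = np.maximum(x @ W[:, active], 0.0)   # compute only the predicted-active units

exact = np.maximum(x @ W, 0.0)
print("fraction computed:", active.mean(), "max abs error:", np.abs(h - exact).max())
```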
Parametric Modelling of Multivariate Count Data Using Probabilistic
Graphical Models | stat.ML cs.LG stat.ME | Multivariate count data are defined as the number of items of different
categories issued from sampling within a population whose individuals are
grouped into categories. The analysis of multivariate count data is a recurrent
and crucial issue in numerous modelling problems, particularly in the fields of
biology and ecology (where the data can represent, for example, children counts
associated with multitype branching processes), sociology and econometrics. We
focus on I) Identifying categories that appear simultaneously, or on the
contrary that are mutually exclusive. This is achieved by identifying
conditional independence relationships between the variables; II) Building
parsimonious parametric models consistent with these relationships; III)
Characterising and testing the effects of covariates on the joint distribution
of the counts. To achieve these goals, we propose an approach based on
graphical probabilistic models, and more specifically partially directed
acyclic graphs.
| Pierre Fernique (VP, AGAP), Jean-Baptiste Durand (VP, INRIA Grenoble
Rh\^one-Alpes / LJK Laboratoire Jean Kuntzmann), Yann Gu\'edon (VP, AGAP) | null | 1312.4479 | null | null |
Probable convexity and its application to Correlated Topic Models | cs.LG stat.ML | Non-convex optimization problems often arise from probabilistic modeling,
such as estimation of posterior distributions. Non-convexity makes the problems
intractable, and poses various obstacles for us to design efficient algorithms.
In this work, we attack non-convexity by first introducing the concept of
\emph{probable convexity} for analyzing convexity of real functions in
practice. We then use the new concept to analyze an inference problem in the
\emph{Correlated Topic Model} (CTM) and related nonconjugate models. Contrary
to the existing belief of intractability, we show that this inference problem
is concave under certain conditions. One consequence of our analyses is a novel
algorithm for learning CTM which is significantly more scalable and qualitative
than existing methods. Finally, we highlight that stochastic gradient
algorithms might be a practical choice for efficiently solving non-convex
problems. This finding might be beneficial in many contexts beyond
probabilistic modeling.
| Khoat Than and Tu Bao Ho | null | 1312.4527 | null | null |
Comparative Analysis of Viterbi Training and Maximum Likelihood
Estimation for HMMs | stat.ML cs.LG | We present an asymptotic analysis of Viterbi Training (VT) and contrast it
with a more conventional Maximum Likelihood (ML) approach to parameter
estimation in Hidden Markov Models. While the ML estimator works by (locally)
maximizing the likelihood of the observed data, VT seeks to maximize the
probability of the most likely hidden state sequence. We develop an analytical
framework based on a generating function formalism and illustrate it on an
exactly solvable model of HMM with one unambiguous symbol. For this particular
model the ML objective function is continuously degenerate. VT objective, in
contrast, is shown to have only finite degeneracy. Furthermore, VT converges
faster and results in sparser (simpler) models, thus realizing an automatic
Occam's razor for HMM learning. For more general scenarios, VT can be worse
than ML but is still capable of correctly recovering most of the
parameters.
| Armen E. Allahverdyan and Aram Galstyan | null | 1312.4551 | null | null |
Adaptive Stochastic Alternating Direction Method of Multipliers | stat.ML cs.LG | The Alternating Direction Method of Multipliers (ADMM) has been studied for
years. The traditional ADMM algorithm needs to compute, at each iteration, an
(empirical) expected loss function on all training examples, resulting in a
computational complexity proportional to the number of training examples. To
reduce the time complexity, stochastic ADMM algorithms were proposed to replace
the expected function with a random loss function associated with one uniformly
drawn example plus a Bregman divergence. The Bregman divergence, however, is
derived from a simple second order proximal function, the half squared norm,
which could be a suboptimal choice.
In this paper, we present a new family of stochastic ADMM algorithms with
optimal second order proximal functions, which produce a new family of adaptive
subgradient methods. We theoretically prove that their regret bounds are as
good as the bounds which could be achieved by the best proximal function that
can be chosen in hindsight. Encouraging empirical results on a variety of
real-world datasets confirm the effectiveness and efficiency of the proposed
algorithms.
| Peilin Zhao, Jinwei Yang, Tong Zhang, Ping Li | null | 1312.4564 | null | null |
Dropout improves Recurrent Neural Networks for Handwriting Recognition | cs.CV cs.LG cs.NE | Recurrent neural networks (RNNs) with Long Short-Term memory cells currently
hold the best known results in unconstrained handwriting recognition. We show
that their performance can be greatly improved using dropout - a recently
proposed regularization method for deep architectures. While previous works
showed that dropout gave superior performance in the context of convolutional
networks, it had never been applied to RNNs. In our approach, dropout is
carefully used in the network so that it does not affect the recurrent
connections, hence the power of RNNs in modeling sequences is preserved.
Extensive experiments on a broad range of handwritten databases confirm the
effectiveness of dropout on deep architectures even when the network mainly
consists of recurrent and shared connections.
| Vu Pham, Th\'eodore Bluche, Christopher Kermorvant, J\'er\^ome
Louradour | null | 1312.4569 | null | null |
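A NumPy sketch of the dropout placement described above: dropout is applied to the feed-forward (input-to-hidden) connections of a recurrent layer but not to the hidden-to-hidden recurrence. A vanilla tanh RNN step stands in for the LSTM cells used in the paper; sizes and the dropout rate are placeholders.

```python
# Dropout on non-recurrent connections only.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_h, T, p_drop = 16, 32, 20, 0.5

W_in = rng.normal(scale=0.1, size=(d_in, d_h))
W_rec = rng.normal(scale=0.1, size=(d_h, d_h))

def rnn_forward(x_seq, train=True):
    h = np.zeros(d_h)
    for x_t in x_seq:
        x_ff = x_t @ W_in
        if train:  # dropout only on the feed-forward path
            mask = (rng.random(d_h) > p_drop) / (1.0 - p_drop)  # inverted dropout
            x_ff = x_ff * mask
        h = np.tanh(x_ff + h @ W_rec)   # the recurrence itself is left untouched
    return h

x_seq = rng.normal(size=(T, d_in))
print(rnn_forward(x_seq).shape)  # (32,)
```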
Evolution and Computational Learning Theory: A survey on Valiant's paper | cs.LG | Darwin's theory of evolution is considered to be one of the greatest
scientific gems in modern science. It not only gives us a description of how
living things evolve, but also shows how a population evolves through time and
also, why only the fittest individuals continue the generation forward. The
paper basically gives a high-level analysis of the works of Valiant [1]. Though
we know the mechanisms of evolution, it seems that there does not exist any
strong quantitative and mathematical theory of the evolution of certain
mechanisms. What exactly defines the fitness of an individual, why only
certain individuals in a population tend to mutate, and how computation
is done in finite time when we have exponentially many examples: there seem to
be many questions which need to be answered. [1] basically treats Darwinian
theory as a form of computational learning theory, which calculates the net
fitness of the hypotheses and thus distinguishes functions and their classes
which could be evolvable using polynomial amount of resources. Evolution is
considered as a function of the environment and the previous evolutionary
stages that chooses the best hypothesis using learning techniques that make
mutation possible and hence gives a quantitative idea of why only the
fittest individuals tend to survive and have the power to mutate.
| Arka Bhattacharya | null | 1312.4599 | null | null |
Compact Random Feature Maps | stat.ML cs.LG | Kernel approximation using randomized feature maps has recently gained a lot
of interest. In this work, we identify that previous approaches for polynomial
kernel approximation create maps that are rank deficient, and therefore do not
utilize the capacity of the projected feature space effectively. To address
this challenge, we propose compact random feature maps (CRAFTMaps) to
approximate polynomial kernels more concisely and accurately. We prove the
error bounds of CRAFTMaps demonstrating their superior kernel reconstruction
performance compared to the previous approximation schemes. We show how
structured random matrices can be used to efficiently generate CRAFTMaps, and
present a single-pass algorithm using CRAFTMaps to learn non-linear multi-class
classifiers. We present experiments on multiple standard data-sets with
performance competitive with state-of-the-art results.
| Raffay Hamid and Ying Xiao and Alex Gittens and Dennis DeCoste | null | 1312.4626 | null | null |
Sparse, complex-valued representations of natural sounds learned with
phase and amplitude continuity priors | cs.LG cs.SD q-bio.NC | Complex-valued sparse coding is a data representation which employs a
dictionary of two-dimensional subspaces, while imposing a sparse, factorial
prior on complex amplitudes. When trained on a dataset of natural image
patches, it learns phase invariant features which closely resemble receptive
fields of complex cells in the visual cortex. Features trained on natural
sounds, however, rarely reveal phase invariance and capture other aspects of the
data. This observation is a starting point of the present work. As its first
contribution, it provides an analysis of natural sound statistics by means of
learning sparse, complex representations of short speech intervals. Secondly,
it proposes priors over the basis function set, which bias them towards
phase-invariant solutions. In this way, a dictionary of complex basis functions
can be learned from the data statistics, while preserving the phase invariance
property. Finally, representations trained on speech sounds with and without
priors are compared. Prior-based basis functions reveal performance comparable
to unconstrained sparse coding, while explicitly representing phase as a
temporal shift. Such representations can find applications in many perceptual
and machine learning tasks.
| Wiktor Mlynarski | null | 1312.4695 | null | null |
A Comparative Evaluation of Curriculum Learning with Filtering and
Boosting | cs.LG | Not all instances in a data set are equally beneficial for inferring a model
of the data. Some instances (such as outliers) are detrimental to inferring a
model of the data. Several machine learning techniques treat instances in a
data set differently during training such as curriculum learning, filtering,
and boosting. However, an automated method for determining how beneficial an
instance is for inferring a model of the data does not exist. In this paper, we
present an automated method that orders the instances in a data set by
complexity based on their likelihood of being misclassified (instance
hardness). The underlying assumption of this method is that instances with a
high likelihood of being misclassified represent more complex concepts in a
data set. Ordering the instances in a data set allows a learning algorithm to
focus on the most beneficial instances and ignore the detrimental ones. We
compare ordering the instances in a data set in curriculum learning, filtering
and boosting. We find that ordering the instances significantly increases
classification accuracy and that filtering has the largest impact on
classification accuracy. On a set of 52 data sets, ordering the instances
increases the average accuracy from 81% to 84%.
| Michael R. Smith and Tony Martinez | null | 1312.4986 | null | null |
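A sketch of ordering a data set by instance hardness as described above, estimating each instance's likelihood of misclassification from cross-validated predicted probabilities. Using a single random forest (rather than the paper's set of learners), five folds, and a 90% keep fraction are placeholder choices.

```python
# Instance-hardness ordering for curriculum learning / filtering.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

X, y = load_breast_cancer(return_X_y=True)
proba = cross_val_predict(RandomForestClassifier(random_state=0), X, y,
                          cv=5, method="predict_proba")

hardness = 1.0 - proba[np.arange(len(y)), y]   # low probability of true class = hard instance
order = np.argsort(hardness)                   # easiest first, e.g. for a curriculum

easy_fraction = 0.9
keep = order[: int(easy_fraction * len(y))]    # or filter out the hardest instances entirely
X_filtered, y_filtered = X[keep], y[keep]
print("kept", len(keep), "of", len(y), "instances")
```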
Efficient Online Bootstrapping for Large Scale Learning | cs.LG | Bootstrapping is a useful technique for estimating the uncertainty of a
predictor, for example, confidence intervals for prediction. It is typically
used on small to moderate sized datasets, due to its high computation cost.
This work describes a highly scalable online bootstrapping strategy,
implemented inside Vowpal Wabbit, that is several times faster than traditional
strategies. Our experiments indicate that, in addition to providing a black
box-like method for estimating uncertainty, our implementation of online
bootstrapping may also help to train models with better prediction performance
due to model averaging.
| Zhen Qin, Vaclav Petricek, Nikos Karampatziakis, Lihong Li, John
Langford | null | 1312.5021 | null | null |
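One standard way to approximate bootstrapping in a single streaming pass is to give each example an independent Poisson(1) weight per bootstrap replicate; the sketch below illustrates that idea and is not necessarily the exact scheme implemented inside Vowpal Wabbit. The SGD learner, data, and replicate count are placeholders.

```python
# Online bootstrapping via per-example Poisson weights.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

n_replicates = 10
models = [SGDClassifier(loss="log_loss", random_state=i) for i in range(n_replicates)]
classes = np.unique(y)

for x_i, y_i in zip(X, y):                      # stream over examples once
    for m in models:
        w = rng.poisson(1.0)                    # how many times this replicate "sees" the example
        if w > 0:
            m.partial_fit(x_i[None, :], [y_i], classes=classes, sample_weight=[float(w)])

preds = np.array([m.predict(X[:5]) for m in models])
print("mean prediction:", preds.mean(axis=0))          # model-averaged prediction
print("between-replicate std:", preds.std(axis=0))     # simple uncertainty estimate
```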
Contextually Supervised Source Separation with Application to Energy
Disaggregation | stat.ML cs.LG math.OC | We propose a new framework for single-channel source separation that lies
between the fully supervised and unsupervised setting. Instead of supervision,
we provide input features for each source signal and use convex methods to
estimate the correlations between these features and the unobserved signal
decomposition. We analyze the case of $\ell_2$ loss theoretically and show that
recovery of the signal components depends only on cross-correlation between
features for different signals, not on correlations between features for the
same signal. Contextually supervised source separation is a natural fit for
domains with large amounts of data but no explicit supervision; our motivating
application is energy disaggregation of hourly smart meter data (the separation
of whole-home power signals into different energy uses). Here we apply
contextual supervision to disaggregate the energy usage of thousands of homes over
four years, a significantly larger scale than previously published efforts, and
demonstrate on synthetic data that our method outperforms the unsupervised
approach.
| Matt Wytock and J. Zico Kolter | null | 1312.5023 | null | null |
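A toy version of contextually supervised separation with the $\ell_2$ loss: the observed aggregate signal is regressed on per-source feature blocks, and each source estimate is that block's fitted contribution. The synthetic "HVAC" and "baseline" components and their context features are invented for illustration.

```python
# Least-squares contextually supervised source separation on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
T = 500
t = np.arange(T)

# Two unobserved components plus noise; only their sum is observed.
temp = np.sin(2 * np.pi * t / 96)                               # "temperature"-like context signal
x_hvac = 2.0 * np.clip(temp, 0, None) + 0.1 * rng.normal(size=T)
x_base = 1.0 + 0.05 * t / T + 0.1 * rng.normal(size=T)
y = x_hvac + x_base

# One feature block per source: HVAC depends on temperature, baseline on a constant and a trend.
F_hvac = np.column_stack([np.clip(temp, 0, None)])
F_base = np.column_stack([np.ones(T), t / T])

F = np.hstack([F_hvac, F_base])
theta, *_ = np.linalg.lstsq(F, y, rcond=None)

est_hvac = F_hvac @ theta[:F_hvac.shape[1]]
est_base = F_base @ theta[F_hvac.shape[1]:]
print("correlation with true HVAC component:", np.corrcoef(est_hvac, x_hvac)[0, 1])
```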
Permuted NMF: A Simple Algorithm Intended to Minimize the Volume of the
Score Matrix | stat.AP cs.LG stat.ML | Non-Negative Matrix Factorization, NMF, attempts to find a number of
archetypal response profiles, or parts, such that any sample profile in the
dataset can be approximated by a close profile among these archetypes or a
linear combination of these profiles. The non-negativity constraint is imposed
while estimating archetypal profiles, due to the non-negative nature of the
observed signal. Apart from non-negativity, a volume constraint can be applied
on the Score matrix W to enhance the ability of learning parts of NMF. In this
report, we describe a very simple algorithm, which in effect achieves volume
minimization, although indirectly.
| Paul Fogel | null | 1312.5124 | null | null |
The Total Variation on Hypergraphs - Learning on Hypergraphs Revisited | stat.ML cs.LG math.OC | Hypergraphs allow one to encode higher-order relationships in data and are
thus a very flexible modeling tool. Current learning methods are either based
on approximations of the hypergraphs via graphs or on tensor methods which are
only applicable under special conditions. In this paper, we present a new
learning framework on hypergraphs which fully uses the hypergraph structure.
The key element is a family of regularization functionals based on the total
variation on hypergraphs.
| Matthias Hein, Simon Setzer, Leonardo Jost, Syama Sundar Rangapuram | null | 1312.5179 | null | null |
Nonlinear Eigenproblems in Data Analysis - Balanced Graph Cuts and the
RatioDCA-Prox | stat.ML cs.LG math.OC | It has been recently shown that a large class of balanced graph cuts allows
for an exact relaxation into a nonlinear eigenproblem. We review briefly some
of these results and propose a family of algorithms to compute nonlinear
eigenvectors which encompasses previous work as special cases. We provide a
detailed analysis of the properties and the convergence behavior of these
algorithms and then discuss their application in the area of balanced graph
cuts.
| Leonardo Jost, Simon Setzer, Matthias Hein | null | 1312.5192 | null | null |
Learning Semantic Script Knowledge with Event Embeddings | cs.LG cs.AI cs.CL stat.ML | Induction of common sense knowledge about prototypical sequences of events
has recently received much attention. Instead of inducing this knowledge in the
form of graphs, as in much of the previous work, in our method, distributed
representations of event realizations are computed based on distributed
representations of predicates and their arguments, and then these
representations are used to predict prototypical event orderings. The
parameters of the compositional process for computing the event representations
and the ranking component of the model are jointly estimated from texts. We
show that this approach results in a substantial boost in ordering performance
with respect to previous methods.
| Ashutosh Modi and Ivan Titov | null | 1312.5198 | null | null |
Unsupervised feature learning by augmenting single images | cs.CV cs.LG cs.NE | When deep learning is applied to visual object recognition, data augmentation
is often used to generate additional training data without extra labeling cost.
It helps to reduce overfitting and increase the performance of the algorithm.
In this paper we investigate if it is possible to use data augmentation as the
main component of an unsupervised feature learning architecture. To that end we
sample a set of random image patches and declare each of them to be a separate
single-image surrogate class. We then extend these trivial one-element classes
by applying a variety of transformations to the initial 'seed' patches. Finally
we train a convolutional neural network to discriminate between these surrogate
classes. The feature representation learned by the network can then be used in
various vision tasks. We find that this simple feature learning algorithm is
surprisingly successful, achieving competitive classification results on
several popular vision datasets (STL-10, CIFAR-10, Caltech-101).
| Alexey Dosovitskiy, Jost Tobias Springenberg and Thomas Brox | null | 1312.5242 | null | null |
On the Challenges of Physical Implementations of RBMs | stat.ML cs.LG | Restricted Boltzmann machines (RBMs) are powerful machine learning models,
but learning and some kinds of inference in the model require sampling-based
approximations, which, in classical digital computers, are implemented using
expensive MCMC. Physical computation offers the opportunity to reduce the cost
of sampling by building physical systems whose natural dynamics correspond to
drawing samples from the desired RBM distribution. Such a system avoids the
burn-in and mixing cost of a Markov chain. However, hardware implementations of
this variety usually entail limitations such as low-precision and limited range
of the parameters and restrictions on the size and topology of the RBM. We
conduct software simulations to determine how harmful each of these
restrictions is. Our simulations are designed to reproduce aspects of the
D-Wave quantum computer, but the issues we investigate arise in most forms of
physical computation.
| Vincent Dumoulin, Ian J. Goodfellow, Aaron Courville, Yoshua Bengio | null | 1312.5258 | null | null |
Classification of Human Ventricular Arrhythmia in High Dimensional
Representation Spaces | cs.CE cs.LG | We studied classification of human ECGs labelled as normal sinus rhythm,
ventricular fibrillation and ventricular tachycardia by means of support vector
machines in different representation spaces, using different observation
lengths. ECG waveform segments of duration 0.5-4 s, their Fourier magnitude
spectra, and lower dimensional projections of Fourier magnitude spectra were
used for classification. All considered representations were of much higher
dimension than in published studies. Classification accuracy improved with
segment duration up to 2 s, with 4 s providing little improvement. We found
that it is possible to discriminate between ventricular tachycardia and
ventricular fibrillation by the present approach with much shorter runs of ECG
(2 s, minimum 86% sensitivity per class) than previously imagined. Ensembles of
classifiers acting on 1 s segments taken over 5 s observation windows gave best
results, with sensitivities of detection for all classes exceeding 93%.
| Yaqub Alwan, Zoran Cvetkovic, Michael Curtis | null | 1312.5354 | null | null |
Missing Value Imputation With Unsupervised Backpropagation | cs.NE cs.LG stat.ML | Many data mining and data analysis techniques operate on dense matrices or
complete tables of data. Real-world data sets, however, often contain unknown
values. Even many classification algorithms that are designed to operate with
missing values still exhibit deteriorated accuracy. One approach to handling
missing values is to fill in (impute) the missing values. In this paper, we
present a technique for unsupervised learning called Unsupervised
Backpropagation (UBP), which trains a multi-layer perceptron to fit to the
manifold sampled by a set of observed point-vectors. We evaluate UBP with the
task of imputing missing values in datasets, and show that UBP is able to
predict missing values with significantly lower sum-squared error than other
collaborative filtering and imputation techniques. We also demonstrate with 24
datasets and 9 supervised learning algorithms that classification accuracy is
usually higher when randomly-withheld values are imputed using UBP, rather than
with other methods.
| Michael S. Gashler, Michael R. Smith, Richard Morris, Tony Martinez | null | 1312.5394 | null | null |
Continuous Learning: Engineering Super Features With Feature Algebras | cs.LG stat.ML | In this paper we consider a problem of searching a space of predictive models
for a given training data set. We propose an iterative procedure for deriving a
sequence of improving models and a corresponding sequence of sets of non-linear
features on the original input space. After a finite number of iterations N,
the non-linear features become 2^N -degree polynomials on the original space.
We show that in a limit of an infinite number of iterations derived non-linear
features must form an associative algebra: a product of two features is equal
to a linear combination of features from the same feature space for any given
input point. Because each iteration consists of solving a series of convex
problems that contain all previous solutions, the likelihood of the models in
the sequence is increasing with each iteration while the dimension of the model
parameter space is set to a limited controlled value.
| Michael Tetelman | null | 1312.5398 | null | null |
Approximated Infomax Early Stopping: Revisiting Gaussian RBMs on Natural
Images | stat.ML cs.LG | We pursue an early stopping technique that helps Gaussian Restricted
Boltzmann Machines (GRBMs) to gain good natural image representations in terms
of overcompleteness and data fitting. GRBMs are widely considered as an
unsuitable model for natural images because they gain non-overcomplete
representations which include uniform filters that do not represent useful
image features. We have recently found that GRBMs once gain and subsequently
lose useful filters during their training, contrary to this common perspective.
We attribute this phenomenon to a tradeoff between overcompleteness of GRBM
representations and data fitting. To gain GRBM representations that are
overcomplete and fit data well, we propose a measure for GRBM representation
quality, approximated mutual information, and an early stopping technique based
on this measure. The proposed method boosts performance of classifiers trained
on GRBM representations.
| Taichi Kiwaki, Takaki Makino, Kazuyuki Aihara | null | 1312.5412 | null | null |
Large-scale Multi-label Text Classification - Revisiting Neural Networks | cs.LG | Neural networks have recently been proposed for multi-label classification
because they are able to capture and model label dependencies in the output
layer. In this work, we investigate limitations of BP-MLL, a neural network
(NN) architecture that aims at minimizing pairwise ranking error. Instead, we
propose to use a comparably simple NN approach with recently proposed learning
techniques for large-scale multi-label text classification tasks. In
particular, we show that BP-MLL's ranking loss minimization can be efficiently
and effectively replaced with the commonly used cross entropy error function,
and demonstrate that several advances in neural network training that have been
developed in the realm of deep learning can be effectively employed in this
setting. Our experimental results show that simple NN models equipped with
advanced techniques such as rectified linear units, dropout, and AdaGrad
perform as well as or even outperform state-of-the-art approaches on six
large-scale textual datasets with diverse characteristics.
| Jinseok Nam, Jungi Kim, Eneldo Loza Menc\'ia, Iryna Gurevych, Johannes
F\"urnkranz | 10.1007/978-3-662-44851-9_28 | 1312.5419 | null | null |
Asynchronous Adaptation and Learning over Networks --- Part I: Modeling
and Stability Analysis | cs.SY cs.IT cs.LG math.IT math.OC | In this work and the supporting Parts II [2] and III [3], we provide a rather
detailed analysis of the stability and performance of asynchronous strategies
for solving distributed optimization and adaptation problems over networks. We
examine asynchronous networks that are subject to fairly general sources of
uncertainties, such as changing topologies, random link failures, random data
arrival times, and agents turning on and off randomly. Under this model, agents
in the network may stop updating their solutions or may stop sending or
receiving information in a random manner and without coordination with other
agents. We establish in Part I conditions on the first and second-order moments
of the relevant parameter distributions to ensure mean-square stable behavior.
We derive in Part II expressions that reveal how the various parameters of the
asynchronous behavior influence network performance. We compare in Part III the
performance of asynchronous networks to the performance of both centralized
solutions and synchronous networks. One notable conclusion is that the
mean-square-error performance of asynchronous networks shows a degradation only
of the order of $O(\nu)$, where $\nu$ is a small step-size parameter, while the
convergence rate remains largely unaltered. The results provide a solid
justification for the remarkable resilience of cooperative networks in the face
of random failures at multiple levels: agents, links, data arrivals, and
topology.
| Xiaochuan Zhao and Ali H. Sayed | null | 1312.5434 | null | null |
Asynchronous Adaptation and Learning over Networks - Part II:
Performance Analysis | cs.SY cs.IT cs.LG math.IT math.OC | In Part I \cite{Zhao13TSPasync1}, we introduced a fairly general model for
asynchronous events over adaptive networks including random topologies, random
link failures, random data arrival times, and agents turning on and off
randomly. We performed a stability analysis and established the notable fact
that the network is still able to converge in the mean-square-error sense to
the desired solution. Once stable behavior is guaranteed, it becomes important
to evaluate how fast the iterates converge and how close they get to the
optimal solution. This is a demanding task due to the various asynchronous
events and due to the fact that agents influence each other. In this Part II,
we carry out a detailed analysis of the mean-square-error performance of
asynchronous strategies for solving distributed optimization and adaptation
problems over networks. We derive analytical expressions for the mean-square
convergence rate and the steady-state mean-square-deviation. The expressions
reveal how the various parameters of the asynchronous behavior influence
network performance. In the process, we establish the interesting conclusion
that even under the influence of asynchronous events, all agents in the
adaptive network can still reach an $O(\nu^{1 + \gamma_o'})$ near-agreement
with some $\gamma_o' > 0$ while approaching the desired solution within
$O(\nu)$ accuracy, where $\nu$ is proportional to the small step-size parameter
for adaptation.
| Xiaochuan Zhao and Ali H. Sayed | null | 1312.5438 | null | null |
Asynchronous Adaptation and Learning over Networks - Part III:
Comparison Analysis | cs.SY cs.IT cs.LG math.IT math.OC | In Part II [3] we carried out a detailed mean-square-error analysis of the
performance of asynchronous adaptation and learning over networks under a
fairly general model for asynchronous events including random topologies,
random link failures, random data arrival times, and agents turning on and off
randomly. In this Part III, we compare the performance of synchronous and
asynchronous networks. We also compare the performance of decentralized
adaptation against centralized stochastic-gradient (batch) solutions. Two
interesting conclusions stand out. First, the results establish that the
performance of adaptive networks is largely immune to the effect of
asynchronous events: the mean and mean-square convergence rates and the
asymptotic bias values are not degraded relative to synchronous or centralized
implementations. Only the steady-state mean-square-deviation suffers a
degradation in the order of $\nu$, which represents the small step-size
parameters used for adaptation. Second, the results show that the adaptive
distributed network matches the performance of the centralized solution. These
conclusions highlight another critical benefit of cooperation by networked
agents: cooperation does not only enhance performance in comparison to
stand-alone single-agent processing, but it also endows the network with
remarkable resilience to various forms of random failure events and is able to
deliver performance that is as powerful as batch solutions.
| Xiaochuan Zhao and Ali H. Sayed | null | 1312.5439 | null | null |
Codebook based Audio Feature Representation for Music Information
Retrieval | cs.IR cs.LG cs.MM | Digital music has become prolific in the web in recent decades. Automated
recommendation systems are essential for users to discover music they love and
for artists to reach appropriate audience. When manual annotations and user
preference data is lacking (e.g. for new artists) these systems must rely on
\emph{content based} methods. Besides powerful machine learning tools for
classification and retrieval, a key component for successful recommendation is
the \emph{audio content representation}.
Good representations should capture informative musical patterns in the audio
signal of songs. These representations should be concise, to enable efficient
(low storage, easy indexing, fast search) management of huge music
repositories, and should also be easy and fast to compute, to enable real-time
interaction with a user supplying new songs to the system.
Before designing new audio features, we explore the usage of traditional
local features, while adding a stage of encoding with a pre-computed
\emph{codebook} and a stage of pooling to get compact vectorial
representations. We experiment with different encoding methods, namely
\emph{the LASSO}, \emph{vector quantization (VQ)} and \emph{cosine similarity
(CS)}. We evaluate the representations' quality in two music information
retrieval applications: query-by-tag and query-by-example. Our results show
that concise representations can be used for successful performance in both
applications. We recommend using top-$\tau$ VQ encoding, which consistently
performs well in both applications, and requires much less computation time
than the LASSO.
| Yonatan Vaizman, Brian McFee and Gert Lanckriet | 10.1109/TASLP.2014.2337842 | 1312.5457 | null | null |
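A sketch of the codebook pipeline above: local per-frame audio descriptors (faked here with random data in place of real MFCC-like features) are encoded against a pre-computed codebook with top-$\tau$ vector quantization and then mean-pooled into one compact song-level vector. All sizes are placeholders.

```python
# Top-tau VQ encoding over a codebook, followed by pooling.
import numpy as np

rng = np.random.default_rng(0)
n_frames, d, k, tau = 400, 13, 64, 5       # frames per song, feature dim, codebook size, top-tau

codebook = rng.normal(size=(k, d))         # in practice learned offline (e.g. with k-means)
frames = rng.normal(size=(n_frames, d))    # stand-in for per-frame audio descriptors

def top_tau_vq(frames, codebook, tau):
    d2 = ((frames[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (frames, k) distances
    codes = np.zeros_like(d2)
    nearest = np.argsort(d2, axis=1)[:, :tau]      # tau closest codewords per frame
    np.put_along_axis(codes, nearest, 1.0, axis=1)
    return codes

song_vector = top_tau_vq(frames, codebook, tau).mean(axis=0)   # pooling over frames
print(song_vector.shape)   # (64,): compact representation for retrieval/tagging
```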
Learning rates of $l^q$ coefficient regularization learning with
Gaussian kernel | cs.LG stat.ML | Regularization is a well recognized powerful strategy to improve the
performance of a learning machine and $l^q$ regularization schemes with
$0<q<\infty$ are central in use. It is known that different $q$ leads to
different properties of the deduced estimators, say, $l^2$ regularization leads
to smooth estimators while $l^1$ regularization leads to sparse estimators.
Then, how do the generalization capabilities of $l^q$ regularization learning
vary with $q$? In this paper, we study this problem in the framework of
statistical learning theory and show that implementing $l^q$ coefficient
regularization schemes in the sample dependent hypothesis space associated with
Gaussian kernel can attain the same almost optimal learning rates for all
$0<q<\infty$. That is, the upper and lower bounds of learning rates for $l^q$
regularization learning are asymptotically identical for all $0<q<\infty$. Our
finding tentatively reveals that, in some modeling contexts, the choice of $q$
might not have a strong impact with respect to the generalization capability.
From this perspective, $q$ can be arbitrarily specified, or specified merely by
other non-generalization criteria such as smoothness, computational complexity,
sparsity, etc.
| Shaobo Lin, Jinshan Zeng, Jian Fang and Zongben Xu | null | 1312.5465 | null | null |
Word Embeddings through Hellinger PCA | cs.CL cs.LG | Word embeddings resulting from neural language models have been shown to be
successful for a large variety of NLP tasks. However, such architecture might
be difficult to train and time-consuming. Instead, we propose to drastically
simplify the word embeddings computation through a Hellinger PCA of the word
co-occurrence matrix. We compare these new word embeddings with some well-known
embeddings on NER and movie review tasks and show that we can reach similar or
even better performance. Although deep learning is not really necessary for
generating good word embeddings, we show that it can provide an easy way to
adapt embeddings to specific tasks.
| R\'emi Lebret and Ronan Collobert | null | 1312.5542 | null | null |
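A minimal Hellinger PCA over a toy word co-occurrence matrix: rows are normalized to discrete context distributions, square-rooted (so Euclidean distance becomes Hellinger distance), and reduced with PCA. The tiny corpus, window size of one, and three dimensions are toy choices, not the paper's configuration.

```python
# Toy Hellinger PCA word embeddings.
import numpy as np
from sklearn.decomposition import PCA

corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}

# Symmetric window-1 co-occurrence counts.
C = np.zeros((len(vocab), len(vocab)))
for a, b in zip(corpus, corpus[1:]):
    C[idx[a], idx[b]] += 1
    C[idx[b], idx[a]] += 1

P = C / np.maximum(C.sum(axis=1, keepdims=True), 1)   # row-wise context distributions
H = np.sqrt(P)                                        # Hellinger transform

embeddings = PCA(n_components=3).fit_transform(H)     # low-dimensional word vectors
print({w: embeddings[i].round(2) for w, i in idx.items()})
```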
Multimodal Transitions for Generative Stochastic Networks | cs.LG stat.ML | Generative Stochastic Networks (GSNs) have been recently introduced as an
alternative to traditional probabilistic modeling: instead of parametrizing the
data distribution directly, one parametrizes a transition operator for a Markov
chain whose stationary distribution is an estimator of the data generating
distribution. The result of training is therefore a machine that generates
samples through this Markov chain. However, the previously introduced GSN
consistency theorems suggest that in order to capture a wide class of
distributions, the transition operator in general should be multimodal,
something that has not been done before this paper. We introduce for the first
time multimodal transition distributions for GSNs, in particular using models
in the NADE family (Neural Autoregressive Density Estimator) as output
distributions of the transition operator. A NADE model is related to an RBM
(and can thus model multimodal distributions) but its likelihood (and
likelihood gradient) can be computed easily. The parameters of the NADE are
obtained as a learned function of the previous state of the learned Markov
chain. Experiments clearly illustrate the advantage of such multimodal
transition distributions over unimodal GSNs.
| Sherjil Ozair, Li Yao and Yoshua Bengio | null | 1312.5578 | null | null |
Playing Atari with Deep Reinforcement Learning | cs.LG | We present the first deep learning model to successfully learn control
policies directly from high-dimensional sensory input using reinforcement
learning. The model is a convolutional neural network, trained with a variant
of Q-learning, whose input is raw pixels and whose output is a value function
estimating future rewards. We apply our method to seven Atari 2600 games from
the Arcade Learning Environment, with no adjustment of the architecture or
learning algorithm. We find that it outperforms all previous approaches on six
of the games and surpasses a human expert on three of them.
| Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis
Antonoglou, Daan Wierstra, Martin Riedmiller | null | 1312.5602 | null | null |
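A sketch of the Q-learning target used to train the network described above: the regression target is $r + \gamma \max_{a'} Q(s', a')$, or just $r$ at terminal states. A tiny linear "network" stands in for the convolutional network, and the single hand-made transition stands in for samples from an experience-replay buffer; learning rates and sizes are placeholders.

```python
# One Q-learning update against a bootstrapped target.
import numpy as np

rng = np.random.default_rng(0)
n_actions, d_state, gamma, lr = 4, 8, 0.99, 1e-2
W = rng.normal(scale=0.1, size=(d_state, n_actions))   # Q(s, .) = s @ W

def q_update(s, a, r, s_next, terminal):
    global W
    target = r if terminal else r + gamma * np.max(s_next @ W)  # bootstrap from the next state
    td_error = target - (s @ W)[a]
    W[:, a] += lr * td_error * s                                 # gradient step on the squared TD error
    return td_error

# One simulated transition (in DQN these would be drawn from a replay buffer).
s, s_next = rng.normal(size=d_state), rng.normal(size=d_state)
print(q_update(s, a=2, r=1.0, s_next=s_next, terminal=False))
```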
Learning Transformations for Classification Forests | cs.CV cs.LG stat.ML | This work introduces a transformation-based learner model for classification
forests. The weak learner at each split node plays a crucial role in a
classification tree. We propose to optimize the splitting objective by learning
a linear transformation on subspaces using nuclear norm as the optimization
criteria. The learned linear transformation restores a low-rank structure for
data from the same class, and, at the same time, maximizes the separation
between different classes, thereby improving the performance of the split
function. Theoretical and experimental results support the proposed framework.
| Qiang Qiu, Guillermo Sapiro | null | 1312.5604 | null | null |
Zero-Shot Learning by Convex Combination of Semantic Embeddings | cs.LG | Several recent publications have proposed methods for mapping images into
continuous semantic embedding spaces. In some cases the embedding space is
trained jointly with the image transformation. In other cases the semantic
embedding space is established by an independent natural language processing
task, and then the image transformation into that space is learned in a second
stage. Proponents of these image embedding systems have stressed their
advantages over the traditional \nway{} classification framing of image
understanding, particularly in terms of the promise for zero-shot learning --
the ability to correctly annotate images of previously unseen object
categories. In this paper, we propose a simple method for constructing an image
embedding system from any existing \nway{} image classifier and a semantic word
embedding model, which contains the $\n$ class labels in its vocabulary. Our
method maps images into the semantic embedding space via convex combination of
the class label embedding vectors, and requires no additional training. We show
that this simple and direct method confers many of the advantages associated
with more complex image embedding schemes, and indeed outperforms state of the
art methods on the ImageNet zero-shot learning task.
| Mohammad Norouzi and Tomas Mikolov and Samy Bengio and Yoram Singer
and Jonathon Shlens and Andrea Frome and Greg S. Corrado and Jeffrey Dean | null | 1312.5650 | null | null |
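A sketch of the convex-combination construction described above: an image's embedding is the probability-weighted average of the word embeddings of the classifier's top predicted seen labels, then matched to unseen-class embeddings by cosine similarity. The classifier probabilities, word vectors, and the choice of the top three labels are random placeholders.

```python
# Convex combination of semantic embeddings (ConSE-style), illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_seen, n_unseen, d = 10, 5, 50

label_emb = rng.normal(size=(n_seen, d))          # word embeddings of the seen (training) classes
unseen_emb = rng.normal(size=(n_unseen, d))       # word embeddings of unseen classes

p = rng.random(n_seen); p /= p.sum()              # classifier's softmax over seen classes for one image
top = np.argsort(p)[::-1][:3]                     # keep the most probable seen labels

weights = p[top] / p[top].sum()                   # renormalize: convex combination weights
img_emb = weights @ label_emb[top]                # image embedding, no additional training needed

cos = unseen_emb @ img_emb / (np.linalg.norm(unseen_emb, axis=1) * np.linalg.norm(img_emb))
print("predicted unseen class:", int(np.argmax(cos)))
```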
k-Sparse Autoencoders | cs.LG | Recently, it has been observed that when representations are learnt in a way
that encourages sparsity, improved performance is obtained on classification
tasks. These methods involve combinations of activation functions, sampling
steps and different kinds of penalties. To investigate the effectiveness of
sparsity by itself, we propose the k-sparse autoencoder, which is an
autoencoder with linear activation function, where in hidden layers only the k
highest activities are kept. When applied to the MNIST and NORB datasets, we
find that this method achieves better classification results than denoising
autoencoders, networks trained with dropout, and RBMs. k-sparse autoencoders
are simple to train and the encoding stage is very fast, making them
well-suited to large problem sizes, where conventional sparse coding algorithms
cannot be applied.
| Alireza Makhzani, Brendan Frey | null | 1312.5663 | null | null |
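A NumPy sketch of the k-sparse hidden layer described above: a linear encoding followed by keeping only the k largest activities per example, with decoding through tied weights. Training is omitted and all sizes are illustrative.

```python
# Forward pass of a k-sparse autoencoder hidden layer.
import numpy as np

rng = np.random.default_rng(0)
d, h, k, batch = 64, 100, 10, 8

W = rng.normal(scale=0.1, size=(d, h))
b = np.zeros(h)
x = rng.normal(size=(batch, d))

z = x @ W + b                                        # linear activations
thresh = np.sort(z, axis=1)[:, -k][:, None]          # k-th largest value per example
z_sparse = np.where(z >= thresh, z, 0.0)             # keep only the k highest activities

x_hat = z_sparse @ W.T                               # reconstruction with tied weights
print("nonzeros per example:", (z_sparse != 0).sum(axis=1))  # == k (barring ties)
```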
Using Web Co-occurrence Statistics for Improving Image Categorization | cs.CV cs.LG | Object recognition and localization are important tasks in computer vision.
The focus of this work is the incorporation of contextual information in order
to improve object recognition and localization. For instance, it is natural to
expect not to see an elephant appear in the middle of an ocean. We consider
a simple approach to encapsulate such common sense knowledge using
co-occurrence statistics from web documents. By merely counting the number of
times nouns (such as elephants, sharks, oceans, etc.) co-occur in web
documents, we obtain a good estimate of expected co-occurrences in visual data.
We then cast the problem of combining textual co-occurrence statistics with the
predictions of image-based classifiers as an optimization problem. The
resulting optimization problem serves as a surrogate for our inference
procedure. Albeit the simplicity of the resulting optimization problem, it is
effective in improving both recognition and localization accuracy. Concretely,
we observe significant improvements in recognition and localization rates for
both ImageNet Detection 2012 and Sun 2012 datasets.
| Samy Bengio, Jeff Dean, Dumitru Erhan, Eugene Ie, Quoc Le, Andrew
Rabinovich, Jonathon Shlens, Yoram Singer | null | 1312.5697 | null | null |
Time-varying Learning and Content Analytics via Sparse Factor Analysis | stat.ML cs.LG math.OC stat.AP | We propose SPARFA-Trace, a new machine learning-based framework for
time-varying learning and content analytics for education applications. We
develop a novel message passing-based, blind, approximate Kalman filter for
sparse factor analysis (SPARFA), that jointly (i) traces learner concept
knowledge over time, (ii) analyzes learner concept knowledge state transitions
(induced by interacting with learning resources, such as textbook sections,
lecture videos, etc, or the forgetting effect), and (iii) estimates the content
organization and intrinsic difficulty of the assessment questions. These
quantities are estimated solely from binary-valued (correct/incorrect) graded
learner response data and a summary of the specific actions each learner
performs (e.g., answering a question or studying a learning resource) at each
time instance. Experimental results on two online course datasets demonstrate
that SPARFA-Trace is capable of tracing each learner's concept knowledge
evolution over time, as well as analyzing the quality and content organization
of learning resources, the question-concept associations, and the question
intrinsic difficulties. Moreover, we show that SPARFA-Trace achieves comparable
or better performance in predicting unobserved learner responses than existing
collaborative filtering and knowledge tracing approaches for personalized
education.
| Andrew S. Lan, Christoph Studer and Richard G. Baraniuk | null | 1312.5734 | null | null |
SOMz: photometric redshift PDFs with self organizing maps and random
atlas | astro-ph.IM astro-ph.CO cs.LG stat.ML | In this paper we explore the applicability of the unsupervised machine
learning technique of Self Organizing Maps (SOM) to estimate galaxy photometric
redshift probability density functions (PDFs). This technique takes a
spectroscopic training set, and maps the photometric attributes, but not the
redshifts, to a two dimensional surface by using a process of competitive
learning where neurons compete to more closely resemble the training data
multidimensional space. The key feature of a SOM is that it retains the
topology of the input set, revealing correlations between the attributes that
are not easily identified. We test three different 2D topological mappings:
rectangular, hexagonal, and spherical, using data from the DEEP2 survey. We
also explore different implementations and boundary conditions on the map and
introduce the idea of a random atlas, where a large number of different
maps are created and their individual predictions are aggregated to produce a
more robust photometric redshift PDF. We also introduce a new metric, the
$I$-score, which efficiently incorporates different metrics, making it easier
to compare different results (from different parameters or different
photometric redshift codes). We find that by using a spherical topology mapping
we obtain a better representation of the underlying multidimensional topology,
which provides more accurate results that are comparable to other,
state-of-the-art machine learning algorithms. Our results illustrate that
unsupervised approaches have great potential for many astronomical problems,
and in particular for the computation of photometric redshifts.
| M. Carrasco Kind and R. J. Brunner (Department of Astronomy,
University of Illinois at Urbana-Champaign) | 10.1093/mnras/stt2456 | 1312.5753 | null | null |
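Since the abstract rests on the competitive-learning update of a SOM, here is a minimal NumPy sketch of that update, assuming a rectangular grid, a Gaussian neighbourhood, and arbitrary hyperparameters rather than the SOMz configuration.

```python
# Minimal SOM: each training vector is assigned to its best-matching neuron,
# and that neuron and its grid neighbours are pulled towards the input.
import numpy as np

rng = np.random.default_rng(0)
n_features, grid = 4, (6, 6)                      # e.g. 4 photometric colours
weights = rng.normal(size=grid + (n_features,))   # one prototype per map cell
coords = np.stack(np.meshgrid(np.arange(grid[0]), np.arange(grid[1]),
                              indexing="ij"), axis=-1).astype(float)

def train(data, epochs=20, lr0=0.5, sigma0=2.0):
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)
        sigma = sigma0 * (1 - epoch / epochs) + 0.5
        for x in data:
            # best-matching unit: neuron whose prototype is closest to x
            d = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), grid)
            # Gaussian neighbourhood on the 2D grid (this is what preserves topology)
            h = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=-1)
                       / (2 * sigma ** 2))
            weights[...] += lr * h[..., None] * (x - weights)

train(rng.normal(size=(200, n_features)))
```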
Structure-Aware Dynamic Scheduler for Parallel Machine Learning | stat.ML cs.LG | Training large machine learning (ML) models with many variables or parameters
can take a long time if one employs sequential procedures even with stochastic
updates. A natural solution is to turn to distributed computing on a cluster;
however, naive, unstructured parallelization of ML algorithms does not usually
lead to a proportional speedup and can even result in divergence, because
dependencies between model elements can attenuate the computational gains from
parallelization and compromise correctness of inference. Recent efforts toward
this issue have benefited from exploiting the static, a priori block structures
residing in ML algorithms. In this paper, we take this path further by
exploring the dynamic block structures and workloads that are present during ML
program execution, which offers new opportunities for improving convergence,
correctness, and load balancing in distributed ML. We propose and showcase a
general-purpose scheduler, STRADS, for coordinating distributed updates in ML
algorithms, which harnesses the aforementioned opportunities in a systematic
way. We provide theoretical guarantees for our scheduler, and demonstrate its
efficacy versus static block structures on Lasso and Matrix Factorization.
| Seunghak Lee, Jin Kyu Kim, Qirong Ho, Garth A. Gibson, Eric P. Xing | null | 1312.5766 | null | null |
Consistency of Causal Inference under the Additive Noise Model | cs.LG stat.ML | We analyze a family of methods for statistical causal inference from samples
under the so-called Additive Noise Model. While most work on the subject has
concentrated on establishing the soundness of the Additive Noise Model, the
statistical consistency of the resulting inference methods has received little
attention. We derive general conditions under which the given family of
inference methods consistently infers the causal direction in a nonparametric
setting.
| Samory Kpotufe, Eleni Sgouritsa, Dominik Janzing, and Bernhard
Sch\"olkopf | null | 1312.5770 | null | null |
Unsupervised Feature Learning by Deep Sparse Coding | cs.LG cs.CV cs.NE | In this paper, we propose a new unsupervised feature learning framework,
namely Deep Sparse Coding (DeepSC), that extends sparse coding to a multi-layer
architecture for visual object recognition tasks. The main innovation of the
framework is that it connects the sparse-encoders from different layers by a
sparse-to-dense module. The sparse-to-dense module is a composition of a local
spatial pooling step and a low-dimensional embedding process, which takes
advantage of the spatial smoothness information in the image. As a result, the
new method is able to learn several levels of sparse representation of the
image which capture features at a variety of abstraction levels and
simultaneously preserve the spatial smoothness between the neighboring image
patches. Combining the feature representations from multiple layers, DeepSC
achieves the state-of-the-art performance on multiple object recognition tasks.
| Yunlong He, Koray Kavukcuoglu, Yun Wang, Arthur Szlam, Yanjun Qi | null | 1312.5783 | null | null |
Unsupervised Pretraining Encourages Moderate-Sparseness | cs.LG cs.NE | It is well known that direct training of deep neural networks will generally
lead to poor results. A major progress in recent years is the invention of
various pretraining methods to initialize network parameters and it was shown
that such methods lead to good prediction performance. However, the reason for
the success of pretraining has not been fully understood, although it was
argued that regularization and better optimization play certain roles. This
paper provides another explanation for the effectiveness of pretraining, where
we show pretraining leads to a sparseness of hidden unit activation in the
resulting neural networks. The main reason is that the pretraining models can
be interpreted as an adaptive sparse coding. Compared to deep neural network
with sigmoid function, our experimental results on MNIST and Birdsong further
support this sparseness observation.
| Jun Li, Wei Luo, Jian Yang, Xiaotong Yuan | null | 1312.5813 | null | null |
Competitive Learning with Feedforward Supervisory Signal for Pre-trained
Multilayered Networks | cs.NE cs.CV cs.LG stat.ML | We propose a novel learning method for multilayered neural networks which
uses feedforward supervisory signal and associates classification of a new
input with that of pre-trained input. The proposed method effectively uses rich
input information in the earlier layer for robust learning and revising internal
representation in a multilayer neural network.
| Takashi Shinozaki and Yasushi Naruse | null | 1312.5845 | null | null |
Deep learning for neuroimaging: a validation study | cs.NE cs.LG stat.ML | Deep learning methods have recently made notable advances in the tasks of
classification and representation learning. These tasks are important for brain
imaging and neuroscience discovery, making the methods attractive for porting
to a neuroimager's toolbox. Success of these methods is, in part, explained by
the flexibility of deep learning models. However, this flexibility makes the
process of porting to new areas a difficult parameter optimization problem. In
this work we demonstrate our results (and feasible parameter ranges) in
application of deep learning methods to structural and functional brain imaging
data. We also describe a novel constraint-based approach to visualizing high
dimensional data. We use it to analyze the effect of parameter choices on data
transformations. Our results show that deep learning methods are able to learn
physiologically important representations and detect latent relations in
neuroimaging data.
| Sergey M. Plis and Devon R. Hjelm and Ruslan Salakhutdinov and Vince
D. Calhoun | null | 1312.5847 | null | null |
Fast Training of Convolutional Networks through FFTs | cs.CV cs.LG cs.NE | Convolutional networks are one of the most widely employed architectures in
computer vision and machine learning. In order to leverage their ability to
learn complex functions, large amounts of data are required for training.
Training a large convolutional network to produce state-of-the-art results can
take weeks, even when using modern GPUs. Producing labels using a trained
network can also be costly when dealing with web-scale datasets. In this work,
we present a simple algorithm which accelerates training and inference by a
significant factor, and can yield improvements of over an order of magnitude
compared to existing state-of-the-art implementations. This is done by
computing convolutions as pointwise products in the Fourier domain while
reusing the same transformed feature map many times. The algorithm is
implemented on a GPU architecture and addresses a number of related challenges.
| Michael Mathieu, Mikael Henaff, Yann LeCun | null | 1312.5851 | null | null |
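The core trick, convolution as a pointwise product in the Fourier domain with each transformed feature map reused for every filter, can be sketched in a few lines of NumPy. This is only an illustrative CPU sketch under assumed "valid" correlation semantics, not the paper's GPU implementation.

```python
# FFT-based 2D "valid" correlation: transform each image once, reuse it for all kernels.
import numpy as np

def fft_conv2d(images, kernels):
    """images: (N, H, W), kernels: (K, kh, kw) -> (N, K, H-kh+1, W-kw+1)."""
    N, H, W = images.shape
    K, kh, kw = kernels.shape
    F_img = np.fft.rfft2(images, s=(H, W))                   # transform each image once...
    F_ker = np.fft.rfft2(kernels[:, ::-1, ::-1], s=(H, W))   # ...and reuse it for every kernel
    out = np.fft.irfft2(F_img[:, None] * F_ker[None], s=(H, W))
    return out[:, :, kh - 1:, kw - 1:]                       # keep the 'valid' part

# quick check against a direct implementation
rng = np.random.default_rng(1)
x = rng.normal(size=(2, 16, 16))
w = rng.normal(size=(3, 5, 5))
direct = np.array([[[[np.sum(xi[i:i + 5, j:j + 5] * wk) for j in range(12)]
                     for i in range(12)] for wk in w] for xi in x])
assert np.allclose(fft_conv2d(x, w), direct, atol=1e-6)
```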
Multi-GPU Training of ConvNets | cs.LG cs.NE | In this work we evaluate different approaches to parallelize computation of
convolutional neural networks across several GPUs.
| Omry Yadan, Keith Adams, Yaniv Taigman, Marc'Aurelio Ranzato | null | 1312.5853 | null | null |
A Generative Product-of-Filters Model of Audio | stat.ML cs.LG | We propose the product-of-filters (PoF) model, a generative model that
decomposes audio spectra as sparse linear combinations of "filters" in the
log-spectral domain. PoF makes similar assumptions to those used in the classic
homomorphic filtering approach to signal processing, but replaces hand-designed
decompositions built of basic signal processing operations with a learned
decomposition based on statistical inference. This paper formulates the PoF
model and derives a mean-field method for posterior inference and a variational
EM algorithm to estimate the model's free parameters. We demonstrate PoF's
potential for audio processing on a bandwidth expansion task, and show that PoF
can serve as an effective unsupervised feature extractor for a speaker
identification task.
| Dawen Liang, Matthew D. Hoffman, Gautham J. Mysore | null | 1312.5857 | null | null |
Principled Non-Linear Feature Selection | cs.LG | Recent non-linear feature selection approaches employing greedy optimisation
of Centred Kernel Target Alignment (KTA) exhibit strong results in terms of
generalisation accuracy and sparsity. However, they are computationally
prohibitive for large datasets. We propose randSel, a randomised feature
selection algorithm, with attractive scaling properties. Our theoretical
analysis of randSel provides strong probabilistic guarantees for correct
identification of relevant features. RandSel's characteristics make it an ideal
candidate for identifying informative learned representations. We have conducted
experiments to establish the performance of this approach, and present
encouraging results, including a 3rd position result in the recent ICML black
box learning challenge as well as competitive results for signal peptide
prediction, an important problem in bioinformatics.
| Dimitrios Athanasakis, John Shawe-Taylor, Delmiro Fernandez-Reyes | null | 1312.5869 | null | null |
Group-sparse Embeddings in Collective Matrix Factorization | stat.ML cs.LG | CMF is a technique for simultaneously learning low-rank representations based
on a collection of matrices with shared entities. A typical example is the
joint modeling of user-item, item-property, and user-feature matrices in a
recommender system. The key idea in CMF is that the embeddings are shared
across the matrices, which enables transferring information between them. The
existing solutions, however, break down when the individual matrices have
low-rank structure not shared with others. In this work we present a novel CMF
solution that allows each of the matrices to have a separate low-rank structure
that is independent of the other matrices, as well as structures that are
shared only by a subset of them. We compare MAP and variational Bayesian
solutions based on alternating optimization algorithms and show that the model
automatically infers the nature of each factor using group-wise sparsity. Our
approach supports in a principled way continuous, binary and count observations
and is efficient for sparse matrices involving missing data. We illustrate the
solution on a number of examples, focusing in particular on an interesting
use-case of augmented multi-view learning.
| Arto Klami, Guillaume Bouchard and Abhishek Tripathi | null | 1312.5921 | null | null |
Adaptive Seeding for Gaussian Mixture Models | cs.LG | We present new initialization methods for the expectation-maximization
algorithm for multivariate Gaussian mixture models. Our methods are adaptations
of the well-known $K$-means++ initialization and the Gonzalez algorithm.
Thereby we aim to close the gap between simple random methods, e.g. uniform
seeding, and complex methods that crucially depend on the right choice of hyperparameters.
Our extensive experiments indicate the usefulness of our methods compared to
common techniques and methods, which e.g. apply the original $K$-means++ and
Gonzalez directly, with respect to artificial as well as real-world data sets.
| Johannes Bl\"omer and Kathrin Bujna | 10.1007/978-3-319-31750-2_24 | 1312.5946 | null | null |
Learning Type-Driven Tensor-Based Meaning Representations | cs.CL cs.LG | This paper investigates the learning of 3rd-order tensors representing the
semantics of transitive verbs. The meaning representations are part of a
type-driven tensor-based semantic framework, from the newly emerging field of
compositional distributional semantics. Standard techniques from the neural
networks literature are used to learn the tensors, which are tested on a
selectional preference-style task with a simple 2-dimensional sentence space.
Promising results are obtained against a competitive corpus-based baseline. We
argue that extending this work beyond transitive verbs, and to
higher-dimensional sentence spaces, is an interesting and challenging problem
for the machine learning community to consider.
| Tamara Polajnar and Luana Fagarasan and Stephen Clark | null | 1312.5985 | null | null |
Stochastic Gradient Estimate Variance in Contrastive Divergence and
Persistent Contrastive Divergence | cs.NE cs.LG stat.ML | Contrastive Divergence (CD) and Persistent Contrastive Divergence (PCD) are
popular methods for training the weights of Restricted Boltzmann Machines.
However, both methods use an approximate method for sampling from the model
distribution. As a side effect, these approximations yield significantly
different biases and variances for stochastic gradient estimates of individual
data points. It is well known that CD yields a biased gradient estimate. In
this paper, however, we show empirically that CD has a lower stochastic gradient
estimate variance than exact sampling, while the mean of subsequent PCD
estimates has a higher variance than exact sampling. The results give one
explanation to the finding that CD can be used with smaller minibatches or
higher learning rates than PCD.
| Mathias Berglund, Tapani Raiko | null | 1312.6002 | null | null |
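To make the sampling approximation under discussion concrete, here is a bare-bones CD-1 gradient estimate for a binary RBM (biases omitted, sizes arbitrary); PCD would differ only in keeping the negative-phase particles across updates instead of restarting from the data.

```python
# CD-1 gradient estimate for a small binary RBM (NumPy, no biases).
import numpy as np

rng = np.random.default_rng(0)
n_vis, n_hid = 6, 4
W = 0.01 * rng.normal(size=(n_vis, n_hid))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_grad(v_data, W):
    """Return the CD-1 estimate of d log p / dW for a batch of visible vectors."""
    h_prob = sigmoid(v_data @ W)                       # positive phase
    h_samp = (rng.random(h_prob.shape) < h_prob) * 1.0
    v_neg = (rng.random(v_data.shape) < sigmoid(h_samp @ W.T)) * 1.0
    h_neg = sigmoid(v_neg @ W)                         # negative phase (one Gibbs step)
    return (v_data.T @ h_prob - v_neg.T @ h_neg) / len(v_data)

v = (rng.random((10, n_vis)) < 0.5) * 1.0
W += 0.1 * cd1_grad(v, W)                              # one stochastic update
```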
How to Construct Deep Recurrent Neural Networks | cs.NE cs.LG stat.ML | In this paper, we explore different ways to extend a recurrent neural network
(RNN) to a \textit{deep} RNN. We start by arguing that the concept of depth in
an RNN is not as clear as it is in feedforward neural networks. By carefully
analyzing and understanding the architecture of an RNN, however, we find three
points of an RNN which may be made deeper; (1) input-to-hidden function, (2)
hidden-to-hidden transition and (3) hidden-to-output function. Based on this
observation, we propose two novel architectures of a deep RNN which are
orthogonal to an earlier attempt of stacking multiple recurrent layers to build
a deep RNN (Schmidhuber, 1992; El Hihi and Bengio, 1996). We provide an
alternative interpretation of these deep RNNs using a novel framework based on
neural operators. The proposed deep RNNs are empirically evaluated on the tasks
of polyphonic music prediction and language modeling. The experimental result
supports our claim that the proposed deep RNNs benefit from the depth and
outperform the conventional, shallow RNNs.
| Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Yoshua Bengio | null | 1312.6026 | null | null |
Learning States Representations in POMDP | cs.LG | We propose to deal with sequential processes where only partial observations
are available by learning a latent representation space on which policies may
be accurately learned.
| Gabriella Contardo and Ludovic Denoyer and Thierry Artieres and
Patrick Gallinari | null | 1312.6042 | null | null |
Unit Tests for Stochastic Optimization | cs.LG | Optimization by stochastic gradient descent is an important component of many
large-scale machine learning algorithms. A wide variety of such optimization
algorithms have been devised; however, it is unclear whether these algorithms
are robust and widely applicable across many different optimization landscapes.
In this paper we develop a collection of unit tests for stochastic
optimization. Each unit test rapidly evaluates an optimization algorithm on a
small-scale, isolated, and well-understood difficulty, rather than in
real-world scenarios where many such issues are entangled. Passing these unit
tests is not sufficient, but absolutely necessary for any algorithms with
claims to generality or robustness. We give initial quantitative and
qualitative results on numerous established algorithms. The testing framework
is open-source, extensible, and easy to apply to new algorithms.
| Tom Schaul, Ioannis Antonoglou, David Silver | null | 1312.6055 | null | null |
Stopping Criteria in Contrastive Divergence: Alternatives to the
Reconstruction Error | cs.LG | Restricted Boltzmann Machines (RBMs) are general unsupervised learning
devices to ascertain generative models of data distributions. RBMs are often
trained using the Contrastive Divergence learning algorithm (CD), an
approximation to the gradient of the data log-likelihood. A simple
reconstruction error is often used to decide whether the approximation provided
by the CD algorithm is good enough, though several authors (Schulz et al.,
2010; Fischer & Igel, 2010) have raised doubts concerning the feasibility of
this procedure. However, not many alternatives to the reconstruction error have
been used in the literature. In this manuscript we investigate simple
alternatives to the reconstruction error in order to detect as soon as possible
the decrease in the log-likelihood during learning.
| David Buchaca, Enrique Romero, Ferran Mazzanti, Jordi Delgado | null | 1312.6062 | null | null |
The return of AdaBoost.MH: multi-class Hamming trees | cs.LG | Within the framework of AdaBoost.MH, we propose to train vector-valued
decision trees to optimize the multi-class edge without reducing the
multi-class problem to $K$ binary one-against-all classifications. The key
element of the method is a vector-valued decision stump, factorized into an
input-independent vector of length $K$ and label-independent scalar classifier.
At inner tree nodes, the label-dependent vector is discarded and the binary
classifier can be used for partitioning the input space into two regions. The
algorithm retains the conceptual elegance, power, and computational efficiency
of binary AdaBoost. In experiments it is on par with support vector machines
and with the best existing multi-class boosting algorithm AOSOLogitBoost, and
it is significantly better than other known implementations of AdaBoost.MH.
| Bal\'azs K\'egl | null | 1312.6086 | null | null |
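The factorized stump described above can be sketched as follows: a single threshold cut shared by all classes, with the per-class vote vector read off the signs of the per-class weighted edges. This is an illustrative sketch of stump selection only; the tree growing and the AdaBoost.MH weight updates are not shown.

```python
# Vector-valued stump: scalar cut phi(x) = sign(x[j] - theta) times a per-class
# vote vector v in {-1,+1}^K chosen from the weighted per-class edges.
import numpy as np

def best_hamming_stump(X, Y, w):
    """X: (n, d), Y: (n, K) in {-1,+1}, w: (n, K) nonnegative weights."""
    best = (-np.inf, None, None, None)                  # (edge, feature, threshold, v)
    for j in range(X.shape[1]):
        for theta in np.unique(X[:, j]):
            phi = np.where(X[:, j] > theta, 1.0, -1.0)  # label-independent scalar classifier
            gamma = (w * Y * phi[:, None]).sum(axis=0)  # per-class edges, shape (K,)
            edge = np.abs(gamma).sum()                  # edge of the stump with v = sign(gamma)
            if edge > best[0]:
                best = (edge, j, theta, np.sign(gamma))
    return best

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
Y = np.where(rng.random((50, 4)) < 0.5, 1.0, -1.0)
w = np.full(Y.shape, 1.0 / Y.size)
edge, j, theta, v = best_hamming_stump(X, Y, w)
# prediction for a new point x: v * sign(x[j] - theta), a vector of K votes
```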
On the number of response regions of deep feed forward networks with
piece-wise linear activations | cs.LG cs.NE | This paper explores the complexity of deep feedforward networks with linear
pre-synaptic couplings and rectified linear activations. This is a contribution
to the growing body of work contrasting the representational power of deep and
shallow network architectures. In particular, we offer a framework for
comparing deep and shallow models that belong to the family of piecewise linear
functions based on computational geometry. We look at a deep rectifier
multi-layer perceptron (MLP) with linear outputs units and compare it with a
single layer version of the model. In the asymptotic regime, when the number of
inputs stays constant, if the shallow model has $kn$ hidden units and $n_0$
inputs, then the number of linear regions is $O(k^{n_0}n^{n_0})$. For a $k$
layer model with $n$ hidden units on each layer it is $\Omega(\left\lfloor
{n}/{n_0}\right\rfloor^{k-1}n^{n_0})$. The number
$\left\lfloor{n}/{n_0}\right\rfloor^{k-1}$ grows faster than $k^{n_0}$ when $n$
tends to infinity or when $k$ tends to infinity and $n \geq 2n_0$.
Additionally, even when $k$ is small, if we restrict $n$ to be $2n_0$, we can
show that a deep model has considerably more linear regions than a shallow one.
We consider this as a first step towards understanding the complexity of these
models and specifically towards providing suitable mathematical tools for
future analysis.
| Razvan Pascanu and Guido Montufar and Yoshua Bengio | null | 1312.6098 | null | null |
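A quick way to get intuition for the quoted bounds is simply to evaluate them for a few settings of $k$, $n$ and $n_0$; the snippet below is plain numerical evaluation of the stated formulas, not part of the paper's analysis.

```python
# Evaluate the shallow bound k^n0 * n^n0 against the deep bound
# floor(n/n0)^(k-1) * n^n0 for a few (k, n, n0) settings.
def shallow_bound(k, n, n0):
    return (k ** n0) * (n ** n0)

def deep_bound(k, n, n0):
    return ((n // n0) ** (k - 1)) * (n ** n0)

for k, n, n0 in [(2, 8, 2), (4, 8, 2), (8, 16, 2)]:
    print(k, n, n0, shallow_bound(k, n, n0), deep_bound(k, n, n0))
```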
Modeling correlations in spontaneous activity of visual cortex with
centered Gaussian-binary deep Boltzmann machines | cs.NE cs.LG q-bio.NC | Spontaneous cortical activity -- the ongoing cortical activities in absence
of intentional sensory input -- is considered to play a vital role in many
aspects of both normal brain functions and mental dysfunctions. We present a
centered Gaussian-binary Deep Boltzmann Machine (GDBM) for modeling the
activity in early cortical visual areas and relate the random sampling in GDBMs
to the spontaneous cortical activity. After training the proposed model on
natural image patches, we show that the samples collected from the model's
probability distribution encompass similar activity patterns as found in the
spontaneous activity. Specifically, filters having the same orientation
preference tend to be active together during random sampling. Our work
demonstrates that the centered GDBM is a meaningful approach for modeling basic
receptive field properties and the emergence of spontaneous activity patterns
in early cortical visual areas. In addition, we show empirically that centered
GDBMs do not suffer from the training difficulties that standard GDBMs do and can
be properly trained without the layer-wise pretraining.
| Nan Wang, Dirk Jancke, Laurenz Wiskott | null | 1312.6108 | null | null |
Auto-Encoding Variational Bayes | stat.ML cs.LG | How can we perform efficient inference and learning in directed probabilistic
models, in the presence of continuous latent variables with intractable
posterior distributions, and large datasets? We introduce a stochastic
variational inference and learning algorithm that scales to large datasets and,
under some mild differentiability conditions, even works in the intractable
case. Our contributions are two-fold. First, we show that a reparameterization
of the variational lower bound yields a lower bound estimator that can be
straightforwardly optimized using standard stochastic gradient methods. Second,
we show that for i.i.d. datasets with continuous latent variables per
datapoint, posterior inference can be made especially efficient by fitting an
approximate inference model (also called a recognition model) to the
intractable posterior using the proposed lower bound estimator. Theoretical
advantages are reflected in experimental results.
| Diederik P Kingma, Max Welling | null | 1312.6114 | null | null |
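The reparameterization mentioned in the abstract can be sketched very compactly: sampling $z = \mu + \sigma \epsilon$ with $\epsilon \sim \mathcal{N}(0, I)$ makes a Monte Carlo estimate of the lower bound differentiable in the encoder parameters. The toy decoder and Gaussian likelihood below are assumptions, not the paper's architecture.

```python
# Reparameterized Monte Carlo estimate of the variational lower bound for a
# diagonal-Gaussian encoder q(z|x) = N(mu, diag(exp(log_var))) and prior N(0, I).
import numpy as np

rng = np.random.default_rng(0)

def elbo_estimate(x, mu, log_var, decode, n_samples=8):
    sigma = np.exp(0.5 * log_var)
    recon = 0.0
    for _ in range(n_samples):
        eps = rng.normal(size=mu.shape)
        z = mu + sigma * eps                              # reparameterized sample
        recon += -0.5 * np.sum((x - decode(z)) ** 2)      # Gaussian log-likelihood (up to const.)
    recon /= n_samples
    # KL(q(z|x) || N(0, I)) in closed form for diagonal Gaussians
    kl = -0.5 * np.sum(1 + log_var - mu ** 2 - np.exp(log_var))
    return recon - kl

W_dec = rng.normal(size=(2, 5))                           # toy decoder: latent dim 2 -> data dim 5
decode = lambda z: np.tanh(z @ W_dec)
x = rng.normal(size=5)
print(elbo_estimate(x, mu=np.zeros(2), log_var=np.zeros(2), decode=decode))
```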
Neuronal Synchrony in Complex-Valued Deep Networks | stat.ML cs.LG cs.NE q-bio.NC | Deep learning has recently led to great successes in tasks such as image
recognition (e.g Krizhevsky et al., 2012). However, deep networks are still
outmatched by the power and versatility of the brain, perhaps in part due to
the richer neuronal computations available to cortical circuits. The challenge
is to identify which neuronal mechanisms are relevant, and to find suitable
abstractions to model them. Here, we show how aspects of spike timing, long
hypothesized to play a crucial role in cortical information processing, could
be incorporated into deep networks to build richer, versatile representations.
We introduce a neural network formulation based on complex-valued neuronal
units that is not only biologically meaningful but also amenable to a variety
of deep learning frameworks. Here, units are attributed both a firing rate and
a phase, the latter indicating properties of spike timing. We show how this
formulation qualitatively captures several aspects thought to be related to
neuronal synchrony, including gating of information processing and dynamic
binding of distributed object representations. Focusing on the latter, we
demonstrate the potential of the approach in several simple experiments. Thus,
neuronal synchrony could be a flexible mechanism that fulfills multiple
functional roles in deep networks.
| David P. Reichert, Thomas Serre | null | 1312.6115 | null | null |
Improving Deep Neural Networks with Probabilistic Maxout Units | stat.ML cs.LG cs.NE | We present a probabilistic variant of the recently introduced maxout unit.
The success of deep neural networks utilizing maxout can partly be attributed
to favorable performance under dropout, when compared to rectified linear
units. It however also depends on the fact that each maxout unit performs a
pooling operation over a group of linear transformations and is thus partially
invariant to changes in its input. Starting from this observation we ask the
question: Can the desirable properties of maxout units be preserved while
improving their invariance properties ? We argue that our probabilistic maxout
(probout) units successfully achieve this balance. We quantitatively verify
this claim and report classification performance matching or exceeding the
current state of the art on three challenging image classification benchmarks
(CIFAR-10, CIFAR-100 and SVHN).
| Jost Tobias Springenberg, Martin Riedmiller | null | 1312.6116 | null | null |
Comparison three methods of clustering: k-means, spectral clustering and
hierarchical clustering | cs.LG | We compare three kinds of clustering, define their cost and loss
functions, and compute them. The error rate of a clustering method, and how to
calculate that error percentage, is always an important factor when evaluating
clustering methods, so this paper introduces one way to calculate the error
rate of clustering methods. Clustering algorithms can be divided into several
categories, including partitioning algorithms, hierarchical algorithms and
density-based algorithms. Generally speaking, clustering algorithms should be
compared on scalability, the ability to work with different attribute types,
the shapes of the clusters they form, how much prior knowledge is needed to
set the input parameters, how well they deal with noise and outliers, and how
insensitive they are to the ordering and dimensionality of the input data.
K-means is one of the simplest approaches to clustering, and clustering itself
is an unsupervised problem.
| Kamran Kowsari | null | 1312.6117 | null | null |
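One way to realize the "error rate of a clustering" idea is to match cluster ids to classes by the best one-to-one assignment and count mismatches; the sketch below does this for $k$-means, with the permutation-based matching being an illustrative choice rather than necessarily the paper's definition.

```python
# k-means (Lloyd's iterations) plus an error rate computed by the best
# one-to-one matching between cluster ids and ground-truth classes.
import numpy as np
from itertools import permutations

def kmeans(X, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        assign = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(k):
            if np.any(assign == c):
                centers[c] = X[assign == c].mean(axis=0)
    return assign

def clustering_error(assign, labels, k):
    best_correct = max(
        sum(np.sum(labels[assign == c] == perm[c]) for c in range(k))
        for perm in permutations(range(k)))
    return 1.0 - best_correct / len(labels)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 0.3, size=(50, 2)) for m in [(0, 0), (3, 0), (0, 3)]])
labels = np.repeat([0, 1, 2], 50)
print(clustering_error(kmeans(X, 3), labels, 3))
```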
Exact solutions to the nonlinear dynamics of learning in deep linear
neural networks | cs.NE cond-mat.dis-nn cs.CV cs.LG q-bio.NC stat.ML | Despite the widespread practical success of deep learning methods, our
theoretical understanding of the dynamics of learning in deep neural networks
remains quite sparse. We attempt to bridge the gap between the theory and
practice of deep learning by systematically analyzing learning dynamics for the
restricted case of deep linear neural networks. Despite the linearity of their
input-output map, such networks have nonlinear gradient descent dynamics on
weights that change with the addition of each new hidden layer. We show that
deep linear networks exhibit nonlinear learning phenomena similar to those seen
in simulations of nonlinear networks, including long plateaus followed by rapid
transitions to lower error solutions, and faster convergence from greedy
unsupervised pretraining initial conditions than from random initial
conditions. We provide an analytical description of these phenomena by finding
new exact solutions to the nonlinear dynamics of deep learning. Our theoretical
analysis also reveals the surprising finding that as the depth of a network
approaches infinity, learning speed can nevertheless remain finite: for a
special class of initial conditions on the weights, very deep networks incur
only a finite, depth independent, delay in learning speed relative to shallow
networks. We show that, under certain conditions on the training data,
unsupervised pretraining can find this special class of initial conditions,
while scaled random Gaussian initializations cannot. We further exhibit a new
class of random orthogonal initial conditions on weights that, like
unsupervised pre-training, enjoys depth independent learning times. We further
show that these initial conditions also lead to faithful propagation of
gradients even in deep nonlinear networks, as long as they operate in a special
regime known as the edge of chaos.
| Andrew M. Saxe, James L. McClelland, Surya Ganguli | null | 1312.6120 | null | null |
Distinction between features extracted using deep belief networks | cs.LG cs.NE | Data representation is an important pre-processing step in many machine
learning algorithms. There are a number of methods used for this task such as
Deep Belief Networks (DBNs) and Discrete Fourier Transforms (DFTs). Since some
of the features extracted using automated feature extraction methods may not
always be related to a specific machine learning task, in this paper we propose
two methods in order to make a distinction between extracted features based on
their relevancy to the task. We applied these two methods to a Deep Belief
Network trained for a face recognition task.
| Mohammad Pezeshki, Sajjad Gholami, Ahmad Nickabadi | null | 1312.6157 | null | null |
Deep Belief Networks for Image Denoising | cs.LG cs.CV cs.NE | Deep Belief Networks, which are hierarchical generative models, are effective
tools for feature representation and extraction. Furthermore, DBNs can be used
in numerous aspects of Machine Learning such as image denoising. In this paper,
we propose a novel method for image denoising which relies on the DBNs' ability
in feature representation. This work is based upon learning of the noise
behavior. Generally, features which are extracted using DBNs are presented as
the values of the last layer nodes. We train a DBN a way that the network
totally distinguishes between nodes presenting noise and nodes presenting image
content in the last later of DBN, i.e. the nodes in the last layer of trained
DBN are divided into two distinct groups of nodes. After detecting the nodes
which are presenting the noise, we are able to make the noise nodes inactive
and reconstruct a noiseless image. In section 4 we explore the results of
applying this method on the MNIST dataset of handwritten digits which is
corrupted with additive white Gaussian noise (AWGN). A reduction of 65.9% in
average mean square error (MSE) was achieved when the proposed method was used
for the reconstruction of the noisy images.
| Mohammad Ali Keyvanrad, Mohammad Pezeshki, and Mohammad Ali
Homayounpour | null | 1312.6158 | null | null |
Factorial Hidden Markov Models for Learning Representations of Natural
Language | cs.LG cs.CL | Most representation learning algorithms for language and image processing are
local, in that they identify features for a data point based on surrounding
points. Yet in language processing, the correct meaning of a word often depends
on its global context. As a step toward incorporating global context into
representation learning, we develop a representation learning algorithm that
incorporates joint prediction into its technique for producing features for a
word. We develop efficient variational methods for learning Factorial Hidden
Markov Models from large texts, and use variational distributions to produce
features for each word that are sensitive to the entire input sequence, not
just to a local context window. Experiments on part-of-speech tagging and
chunking indicate that the features are competitive with or better than
existing state-of-the-art representation learning methods.
| Anjan Nepal and Alexander Yates | null | 1312.6168 | null | null |
Learning Information Spread in Content Networks | cs.LG cs.SI physics.soc-ph | We introduce a model for predicting the diffusion of content information on
social media. While propagation is usually modeled on discrete graph structures,
we introduce here a continuous diffusion model, where nodes in a diffusion
cascade are projected onto a latent space with the property that their
proximity in this space reflects the temporal diffusion process. We focus on
the task of predicting contaminated users for an initial information
source and provide preliminary results on different datasets.
| C\'edric Lagnier, Simon Bourigault, Sylvain Lamprier, Ludovic Denoyer
and Patrick Gallinari | null | 1312.6169 | null | null |
Learning Paired-associate Images with An Unsupervised Deep Learning
Architecture | cs.NE cs.CV cs.LG | This paper presents an unsupervised multi-modal learning system that learns
associative representation from two input modalities, or channels, such that
input on one channel will correctly generate the associated response at the
other and vice versa. In this way, the system develops a kind of supervised
classification model meant to simulate aspects of human associative memory. The
system uses a deep learning architecture (DLA) composed of two input/output
channels formed from stacked Restricted Boltzmann Machines (RBM) and an
associative memory network that combines the two channels. The DLA is trained
on pairs of MNIST handwritten digit images to develop hierarchical features and
associative representations that are able to reconstruct one image given its
paired-associate. Experiments show that the multi-modal learning system
generates models that are as accurate as back-propagation networks but with the
advantage of a bi-directional network and unsupervised learning from either
paired or non-paired training examples.
| Ti Wang and Daniel L. Silver | null | 1312.6171 | null | null |
Manifold regularized kernel logistic regression for web image annotation | cs.LG cs.MM | With the rapid advance of Internet technology and smart devices, users often
need to manage large amounts of multimedia information using smart devices,
such as personal image and video accessing and browsing. These requirements
heavily rely on the success of image (video) annotation, and thus large scale
image annotation through innovative machine learning methods has attracted
intensive attention in recent years. One representative work is support vector
machine (SVM). Although it works well in binary classification, SVM has a
non-smooth loss function and cannot naturally cover the multi-class case. In this
paper, we propose manifold regularized kernel logistic regression (KLR) for web
image annotation. Compared to SVM, KLR has the following advantages: (1) the
KLR has a smooth loss function; (2) the KLR produces an explicit estimate of
the probability instead of class label; and (3) the KLR can naturally be
generalized to the multi-class case. We carefully conduct experiments on MIR
FLICKR dataset and demonstrate the effectiveness of manifold regularized kernel
logistic regression for image annotation.
| W. Liu, H. Liu, D. Tao, Y. Wang, K. Lu | null | 1312.6180 | null | null |
Large-Scale Paralleled Sparse Principal Component Analysis | cs.MS cs.LG cs.NA stat.ML | Principal component analysis (PCA) is a statistical technique commonly used
in multivariate data analysis. However, PCA can be difficult to interpret and
explain since the principal components (PCs) are linear combinations of the
original variables. Sparse PCA (SPCA) aims to balance statistical fidelity and
interpretability by approximating sparse PCs whose projections capture the
maximal variance of original data. In this paper we present an efficient and
paralleled method of SPCA using graphics processing units (GPUs), which can
process large blocks of data in parallel. Specifically, we construct parallel
implementations of the four optimization formulations of the generalized power
method of SPCA (GP-SPCA), one of the most efficient and effective SPCA
approaches, on a GPU. The parallel GPU implementation of GP-SPCA (using CUBLAS)
is up to eleven times faster than the corresponding CPU implementation (using
CBLAS), and up to 107 times faster than a MatLab implementation. Extensive
comparative experiments in several real-world datasets confirm that SPCA offers
a practical advantage.
| W. Liu, H. Zhang, D. Tao, Y. Wang, K. Lu | null | 1312.6182 | null | null |
Do Deep Nets Really Need to be Deep? | cs.LG cs.NE | Currently, deep neural networks are the state of the art on problems such as
speech recognition and computer vision. In this extended abstract, we show that
shallow feed-forward networks can learn the complex functions previously
learned by deep nets and achieve accuracies previously only achievable with
deep models. Moreover, in some cases the shallow neural nets can learn these
deep functions using a total number of parameters similar to the original deep
model. We evaluate our method on the TIMIT phoneme recognition task and are
able to train shallow fully-connected nets that perform similarly to complex,
well-engineered, deep convolutional architectures. Our success in training
shallow neural nets to mimic deeper models suggests that there probably exist
better algorithms for training shallow feed-forward nets than those currently
available.
| Lei Jimmy Ba, Rich Caruana | null | 1312.6184 | null | null |
GPU Asynchronous Stochastic Gradient Descent to Speed Up Neural Network
Training | cs.CV cs.DC cs.LG cs.NE | The ability to train large-scale neural networks has resulted in
state-of-the-art performance in many areas of computer vision. These results
have largely come from computational breakthroughs of two forms: model
parallelism, e.g. GPU accelerated training, which has seen quick adoption in
computer vision circles, and data parallelism, e.g. A-SGD, whose large scale
has been used mostly in industry. We report early experiments with a system
that makes use of both model parallelism and data parallelism, we call GPU
A-SGD. We show using GPU A-SGD it is possible to speed up training of large
convolutional neural networks useful for computer vision. We believe GPU A-SGD
will make it possible to train larger networks on larger training sets in a
reasonable amount of time.
| Thomas Paine, Hailin Jin, Jianchao Yang, Zhe Lin, Thomas Huang | null | 1312.6186 | null | null |
Adaptive Feature Ranking for Unsupervised Transfer Learning | cs.LG | Transfer Learning is concerned with the application of knowledge gained from
solving a problem to a different but related problem domain. In this paper, we
propose a method and efficient algorithm for ranking and selecting
representations from a Restricted Boltzmann Machine trained on a source domain
to be transferred onto a target domain. Experiments carried out using the
MNIST, ICDAR and TiCC image datasets show that the proposed adaptive feature
ranking and transfer learning method offers statistically significant
improvements on the training of RBMs. Our method is general in that the
knowledge chosen by the ranking function does not depend on its relation to any
specific target domain, and it works with unsupervised learning and
knowledge-based transfer.
| Son N. Tran, Artur d'Avila Garcez | null | 1312.6190 | null | null |
Can recursive neural tensor networks learn logical reasoning? | cs.CL cs.LG | Recursive neural network models and their accompanying vector representations
for words have seen success in an array of increasingly semantically
sophisticated tasks, but almost nothing is known about their ability to
accurately capture the aspects of linguistic meaning that are necessary for
interpretation or reasoning. To evaluate this, I train a recursive model on a
new corpus of constructed examples of logical reasoning in short sentences,
like the inference of "some animal walks" from "some dog walks" or "some cat
walks," given that dogs and cats are animals. This model learns representations
that generalize well to new types of reasoning pattern in all but a few cases,
a result which is promising for the ability of learned representation models to
capture logical reasoning.
| Samuel R. Bowman | null | 1312.6192 | null | null |
An empirical analysis of dropout in piecewise linear networks | stat.ML cs.LG cs.NE | The recently introduced dropout training criterion for neural networks has
been the subject of much attention due to its simplicity and remarkable
effectiveness as a regularizer, as well as its interpretation as a training
procedure for an exponentially large ensemble of networks that share
parameters. In this work we empirically investigate several questions related
to the efficacy of dropout, specifically as it concerns networks employing the
popular rectified linear activation function. We investigate the quality of the
test time weight-scaling inference procedure by evaluating the geometric
average exactly in small models, as well as compare the performance of the
geometric mean to the arithmetic mean more commonly employed by ensemble
techniques. We explore the effect of tied weights on the ensemble
interpretation by training ensembles of masked networks without tied weights.
Finally, we investigate an alternative criterion based on a biased estimator of
the maximum likelihood ensemble gradient.
| David Warde-Farley, Ian J. Goodfellow, Aaron Courville and Yoshua
Bengio | null | 1312.6197 | null | null |
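The weight-scaling versus geometric-mean comparison can be reproduced exactly for a network small enough to enumerate every dropout mask. The following sketch does this for an arbitrary two-layer ReLU network; the sizes and weights are illustrative, not taken from the paper's experiments.

```python
# Enumerate all dropout masks on the hidden layer and compare the renormalized
# geometric mean and arithmetic mean of the sub-networks' output probabilities
# with the usual weight-scaling approximation.
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 5)), rng.normal(size=5)   # input 3 -> hidden 5
W2, b2 = rng.normal(size=(5, 2)), rng.normal(size=2)   # hidden 5 -> 2 classes

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

def forward(x, mask, scale=1.0):
    h = np.maximum(0.0, x @ W1 + b1) * mask * scale
    return softmax(h @ W2 + b2)

x = rng.normal(size=3)
masks = [np.array(m, dtype=float) for m in product([0, 1], repeat=5)]
probs = np.array([forward(x, m) for m in masks])        # all 2^5 sub-networks

arith = probs.mean(axis=0)
geo = np.exp(np.log(probs).mean(axis=0)); geo /= geo.sum()
scaled = forward(x, np.ones(5), scale=0.5)              # weight-scaling inference

print("arithmetic mean:", arith)
print("geometric mean :", geo)
print("weight scaling :", scaled)
```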
Intriguing properties of neural networks | cs.CV cs.LG cs.NE | Deep neural networks are highly expressive models that have recently achieved
state of the art performance on speech and visual recognition tasks. While
their expressiveness is the reason they succeed, it also causes them to learn
uninterpretable solutions that could have counter-intuitive properties. In this
paper we report two such properties.
First, we find that there is no distinction between individual high level
units and random linear combinations of high level units, according to various
methods of unit analysis. It suggests that it is the space, rather than the
individual units, that contains the semantic information in the high layers
of neural networks.
Second, we find that deep neural networks learn input-output mappings that
are fairly discontinuous to a significant extent. We can cause the network to
misclassify an image by applying a certain imperceptible perturbation, which is
found by maximizing the network's prediction error. In addition, the specific
nature of these perturbations is not a random artifact of learning: the same
perturbation can cause a different network, that was trained on a different
subset of the dataset, to misclassify the same input.
| Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna,
Dumitru Erhan, Ian Goodfellow, Rob Fergus | null | 1312.6199 | null | null |
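A toy version of the perturbation procedure is easy to write down for a model whose input gradient is available in closed form; the sketch below uses plain logistic regression rather than a deep network, and the step size and stopping rule are arbitrary choices rather than the paper's box-constrained optimization.

```python
# Gradient ascent on the prediction error w.r.t. the input until the model's
# own label flips, for a logistic-regression "network" with analytic gradient.
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=20), 0.0
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

x = rng.normal(size=20)
y = 1.0 if sigmoid(w @ x + b) > 0.5 else 0.0      # the model's own predicted label

x_adv, step = x.copy(), 0.05
for _ in range(200):
    p = sigmoid(w @ x_adv + b)
    if (p > 0.5) != (y > 0.5):                    # stop once the label flips
        break
    g = (p - y) * w                               # gradient of the cross-entropy w.r.t. x
    x_adv += step * g / np.linalg.norm(g)         # fixed-size step up the loss surface

print("prediction on x      :", sigmoid(w @ x + b))
print("prediction on x_adv  :", sigmoid(w @ x_adv + b))
print("||x_adv - x|| / ||x||:", np.linalg.norm(x_adv - x) / np.linalg.norm(x))
```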
Spectral Networks and Locally Connected Networks on Graphs | cs.LG cs.CV cs.NE | Convolutional Neural Networks are extremely efficient architectures in image
and audio recognition tasks, thanks to their ability to exploit the local
translational invariance of signal classes over their domain. In this paper we
consider possible generalizations of CNNs to signals defined on more general
domains without the action of a translation group. In particular, we propose
two constructions, one based upon a hierarchical clustering of the domain, and
another based on the spectrum of the graph Laplacian. We show through
experiments that for low-dimensional graphs it is possible to learn
convolutional layers with a number of parameters independent of the input size,
resulting in efficient deep architectures.
| Joan Bruna, Wojciech Zaremba, Arthur Szlam and Yann LeCun | null | 1312.6203 | null | null |
One-Shot Adaptation of Supervised Deep Convolutional Models | cs.CV cs.LG cs.NE | Dataset bias remains a significant barrier towards solving real world
computer vision tasks. Though deep convolutional networks have proven to be a
competitive approach for image classification, a question remains: have these
models have solved the dataset bias problem? In general, training or
fine-tuning a state-of-the-art deep model on a new domain requires a
significant amount of data, which for many applications is simply not
available. Transfer of models directly to new domains without adaptation has
historically led to poor recognition performance. In this paper, we pose the
following question: is a single image dataset, much larger than previously
explored for adaptation, comprehensive enough to learn general deep models that
may be effectively applied to new image domains? In other words, are deep CNNs
trained on large amounts of labeled data as susceptible to dataset bias as
previous methods have been shown to be? We show that a generic supervised deep
CNN model trained on a large dataset reduces, but does not remove, dataset
bias. Furthermore, we propose several methods for adaptation with deep models
that are able to operate with little (one example per category) or no labeled
domain specific data. Our experiments show that adaptation of deep models on
benchmark visual domain adaptation datasets can provide a significant
performance boost.
| Judy Hoffman, Eric Tzeng, Jeff Donahue, Yangqing Jia, Kate Saenko,
Trevor Darrell | null | 1312.6204 | null | null |
Relaxations for inference in restricted Boltzmann machines | stat.ML cs.LG | We propose a relaxation-based approximate inference algorithm that samples
near-MAP configurations of a binary pairwise Markov random field. We experiment
on MAP inference tasks in several restricted Boltzmann machines. We also use
our underlying sampler to estimate the log-partition function of restricted
Boltzmann machines and compare against other sampling-based methods.
| Sida I. Wang, Roy Frostig, Percy Liang, Christopher D. Manning | null | 1312.6205 | null | null |
An Empirical Investigation of Catastrophic Forgetting in Gradient-Based
Neural Networks | stat.ML cs.LG cs.NE | Catastrophic forgetting is a problem faced by many machine learning models
and algorithms. When trained on one task, then trained on a second task, many
machine learning models "forget" how to perform the first task. This is widely
believed to be a serious problem for neural networks. Here, we investigate the
extent to which the catastrophic forgetting problem occurs for modern neural
networks, comparing both established and recent gradient-based training
algorithms and activation functions. We also examine the effect of the
relationship between the first task and the second task on catastrophic
forgetting. We find that it is always best to train using the dropout
algorithm--the dropout algorithm is consistently best at adapting to the new
task, remembering the old task, and has the best tradeoff curve between these
two extremes. We find that different tasks and relationships between tasks
result in very different rankings of activation function performance. This
suggests the choice of activation function should always be cross-validated.
| Ian J. Goodfellow, Mehdi Mirza, Da Xiao, Aaron Courville, Yoshua
Bengio | null | 1312.6211 | null | null |
Volumetric Spanners: an Efficient Exploration Basis for Learning | cs.LG cs.AI cs.DS | Numerous machine learning problems require an exploration basis - a mechanism
to explore the action space. We define a novel geometric notion of exploration
basis with low variance, called volumetric spanners, and give efficient
algorithms to construct such a basis.
We show how efficient volumetric spanners give rise to the first efficient
and optimal regret algorithm for bandit linear optimization over general convex
sets. Previously such results were known only for specific convex sets, or
under special conditions such as the existence of an efficient self-concordant
barrier for the underlying set.
| Elad Hazan and Zohar Karnin and Raghu Mehka | null | 1312.6214 | null | null |
Parallel architectures for fuzzy triadic similarity learning | cs.DC cs.LG stat.ML | In a context of document co-clustering, we define a new similarity measure
which iteratively computes similarity while combining fuzzy sets in a
three-partite graph. The fuzzy triadic similarity (FT-Sim) model can deal with
uncertainty offered by the fuzzy sets. Moreover, with the development of the Web
and the high availability of storage spaces, more and more documents become
accessible. Documents can be provided from multiple sites and make similarity
computation an expensive processing. This problem motivated us to use parallel
computing. In this paper, we introduce parallel architectures which are able to
treat large and multi-source data sets by a sequential, a merging or a
splitting-based process. Then, we proceed to a local and a central (or global)
computing using the basic FT-Sim measure. The idea behind these architectures
is to reduce both time and space complexities thanks to parallel computation.
| Sonia Alouane-Ksouri, Minyar Sassi-Hidri, Kamel Barkaoui | null | 1312.6273 | null | null |
Dimension-free Concentration Bounds on Hankel Matrices for Spectral
Learning | cs.LG | Learning probabilistic models over strings is an important issue for many
applications. Spectral methods propose elegant solutions to the problem of
inferring weighted automata from finite samples of variable-length strings
drawn from an unknown target distribution. These methods rely on a singular
value decomposition of a matrix $H_S$, called the Hankel matrix, that records
the frequencies of (some of) the observed strings. The accuracy of the learned
distribution depends both on the quantity of information embedded in $H_S$ and
on the distance between $H_S$ and its mean $H_r$. Existing concentration bounds
seem to indicate that the concentration over $H_r$ gets looser with the size of
$H_r$, suggesting a trade-off between the quantity of information used
and the size of $H_r$. We propose new dimension-free concentration bounds for
several variants of Hankel matrices. Experiments demonstrate that these bounds
are tight and that they significantly improve existing bounds. These results
suggest that the concentration rate of the Hankel matrix around its mean does
not constitute an argument for limiting its size.
| Fran\c{c}ois Denis, Mattias Gybels and Amaury Habrard | null | 1312.6282 | null | null |
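For readers unfamiliar with the object these bounds concern, the empirical Hankel matrix can be built directly from string frequencies and decomposed by SVD, as in this small sketch; the prefix/suffix basis and the sample are toy assumptions, and spectral learning would continue by truncating the SVD to recover a weighted automaton.

```python
# Build the empirical Hankel matrix H_S whose (prefix, suffix) entry is the
# empirical frequency of the concatenated string, then inspect its SVD.
import numpy as np
from collections import Counter

sample = ["ab", "a", "abb", "b", "ab", "aab", "a", "ab"]   # strings over {a, b}
emp = {s: c / len(sample) for s, c in Counter(sample).items()}

prefixes = ["", "a", "b", "ab"]
suffixes = ["", "a", "b", "ab", "bb"]
H = np.array([[emp.get(p + s, 0.0) for s in suffixes] for p in prefixes])

U, svals, Vt = np.linalg.svd(H)
print(H)
print("singular values:", np.round(svals, 3))
```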
Growing Regression Forests by Classification: Applications to Object
Pose Estimation | cs.CV cs.LG stat.ML | In this work, we propose a novel node splitting method for regression trees
and incorporate it into the regression forest framework. Unlike traditional
binary splitting, where the splitting rule is selected from a predefined set of
binary splitting rules via trial-and-error, the proposed node splitting method
first finds clusters of the training data which at least locally minimize the
empirical loss without considering the input space. Then splitting rules which
preserve the found clusters as much as possible are determined by casting the
problem into a classification problem. Consequently, our new node splitting
method enjoys more freedom in choosing the splitting rules, resulting in more
efficient tree structures. In addition to the Euclidean target space, we
present a variant which can naturally deal with a circular target space by the
proper use of circular statistics. We apply the regression forest employing our
node splitting to head pose estimation (Euclidean target space) and car
direction estimation (circular target space) and demonstrate that the proposed
method significantly outperforms state-of-the-art methods (38.5% and 22.5%
error reduction respectively).
| Kota Hara and Rama Chellappa | null | 1312.6430 | null | null |
Nonparametric Weight Initialization of Neural Networks via Integral
Representation | cs.LG cs.NE | A new initialization method for hidden parameters in a neural network is
proposed. Derived from the integral representation of the neural network, a
nonparametric probability distribution of hidden parameters is introduced. In
this proposal, hidden parameters are initialized by samples drawn from this
distribution, and output parameters are fitted by ordinary linear regression.
Numerical experiments show that backpropagation with proposed initialization
converges faster than uniformly random initialization. Also it is shown that
the proposed method achieves enough accuracy by itself without backpropagation
in some cases.
| Sho Sonoda, Noboru Murata | null | 1312.6461 | null | null |
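A stripped-down version of the proposed pipeline, sample hidden parameters and then fit only the output weights by ordinary linear regression, looks as follows; note that the Gaussian and uniform sampling distributions here are placeholders, not the integral-representation-derived distribution of the paper.

```python
# Random hidden parameters + least-squares output weights: the network already
# fits the target reasonably well before any backpropagation.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0])                                     # toy regression target

n_hidden = 50
W = rng.normal(scale=2.0, size=(1, n_hidden))           # "initialized" hidden weights
b = rng.uniform(-3, 3, size=n_hidden)                   # and biases (assumed distribution)
H = np.tanh(X @ W + b)                                  # hidden activations, (200, n_hidden)

beta, *_ = np.linalg.lstsq(H, y, rcond=None)            # output weights by linear regression
print("training RMSE:", np.sqrt(np.mean((H @ beta - y) ** 2)))
```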
Sequentially Generated Instance-Dependent Image Representations for
Classification | cs.CV cs.LG | In this paper, we investigate a new framework for image classification that
adaptively generates spatial representations. Our strategy is based on a
sequential process that learns to explore the different regions of any image in
order to infer its category. In particular, the choice of regions is specific
to each image, directed by the actual content of previously selected
regions. The capacity of the system to handle incomplete image information as
well as its adaptive region selection allows the system to perform well in
budgeted classification tasks by exploiting a dynamically generated
representation of each image. We demonstrate the system's abilities in a series
of image-based exploration and classification tasks that highlight its learned
exploration and inference abilities.
| Gabriel Dulac-Arnold and Ludovic Denoyer and Nicolas Thome and
Matthieu Cord and Patrick Gallinari | null | 1312.6594 | null | null |
Co-Multistage of Multiple Classifiers for Imbalanced Multiclass Learning | cs.LG cs.IR | In this work, we propose two stochastic architectural models (CMC and CMC-M)
with two layers of classifiers applicable to datasets with one and multiple
skewed classes. This distinction becomes important when the datasets have a
large number of classes. Therefore, we present a novel solution to imbalanced
multiclass learning with several skewed majority classes, which improves
minority classes identification. This fact is particularly important for text
classification tasks, such as event detection. Our models combined with
pre-processing sampling techniques improved the classification results on six
well-known datasets. Finally, we have also introduced a new metric SG-Mean to
overcome the multiplication by zero limitation of G-Mean.
| Luis Marujo, Anatole Gershman, Jaime Carbonell, David Martins de
Matos, Jo\~ao P. Neto | null | 1312.6597 | null | null |
Using Latent Binary Variables for Online Reconstruction of Large Scale
Systems | math.PR cs.LG stat.ML | We propose a probabilistic graphical model realizing a minimal encoding of
dependencies between real variables, based on possibly incomplete observations and an
empirical cumulative distribution function per variable. The target application
is a large-scale, partially observed system, e.g. a traffic network, where
a small proportion of real valued variables are observed, and the other
variables have to be predicted. Our design objective is therefore to have good
scalability in a real-time setting. Instead of attempting to encode the
dependencies of the system directly in the description space, we propose a way
to encode them in a latent space of binary variables, reflecting a rough
perception of the observable (congested/non-congested for a traffic road). The
method relies in part on message passing algorithms, i.e. belief propagation,
but the core of the work concerns the definition of meaningful latent variables
associated to the variables of interest and their pairwise dependencies.
Numerical experiments demonstrate the applicability of the method in practice.
| Victorin Martin, Jean-Marc Lasgouttes, Cyril Furtlehner | 10.1007/s10472-015-9470-x | 1312.6607 | null | null |
Rounding Sum-of-Squares Relaxations | cs.DS cs.LG quant-ph | We present a general approach to rounding semidefinite programming
relaxations obtained by the Sum-of-Squares method (Lasserre hierarchy). Our
approach is based on using the connection between these relaxations and the
Sum-of-Squares proof system to transform a *combining algorithm* -- an
algorithm that maps a distribution over solutions into a (possibly weaker)
solution -- into a *rounding algorithm* that maps a solution of the relaxation
to a solution of the original problem.
Using this approach, we obtain algorithms that yield improved results for
natural variants of three well-known problems:
1) We give a quasipolynomial-time algorithm that approximates the maximum of
a low degree multivariate polynomial with non-negative coefficients over the
Euclidean unit sphere. Beyond being of interest in its own right, this is
related to an open question in quantum information theory, and our techniques
have already led to improved results in this area (Brand\~{a}o and Harrow, STOC
'13).
2) We give a polynomial-time algorithm that, given a d dimensional subspace
of R^n that (almost) contains the characteristic function of a set of size n/k,
finds a vector $v$ in the subspace satisfying $|v|_4^4 > c(k/d^{1/3}) |v|_2^2$,
where $|v|_p = (E_i v_i^p)^{1/p}$. Aside from being a natural relaxation, this
is also motivated by a connection to the Small Set Expansion problem shown by
Barak et al. (STOC 2012) and our results yield a certain improvement for that
problem.
3) We use this notion of L_4 vs. L_2 sparsity to obtain a polynomial-time
algorithm with substantially improved guarantees for recovering a planted
$\mu$-sparse vector v in a random d-dimensional subspace of R^n. If v has mu n
nonzero coordinates, we can recover it with high probability whenever $\mu <
O(\min(1,n/d^2))$, improving for $d < n^{2/3}$ prior methods which
intrinsically required $\mu < O(1/\sqrt(d))$.
| Boaz Barak, Jonathan Kelner, David Steurer | null | 1312.6652 | null | null |