title (string, 5-246 chars) | categories (string, 5-94 chars, nullable) | abstract (string, 54-5.03k chars) | authors (string, 0-6.72k chars) | doi (string, 12-54 chars, nullable) | id (string, 6-10 chars, nullable) | year (float64, nullable) | venue (string, 13 classes) |
---|---|---|---|---|---|---|---|
Inferring Algorithmic Patterns with Stack-Augmented Recurrent Nets | cs.NE cs.LG | Despite the recent achievements in machine learning, we are still very far
from achieving real artificial intelligence. In this paper, we discuss the
limitations of standard deep learning approaches and show that some of these
limitations can be overcome by learning how to grow the complexity of a model
in a structured way. Specifically, we study the simplest sequence prediction
problems that are beyond the scope of what is learnable with standard recurrent
networks, algorithmically generated sequences which can only be learned by
models which have the capacity to count and to memorize sequences. We show that
some basic algorithms can be learned from sequential data using a recurrent
network associated with a trainable memory.
| Armand Joulin, Tomas Mikolov | null | 1503.01007 | null | null |
Kernel Interpolation for Scalable Structured Gaussian Processes
(KISS-GP) | cs.LG stat.ML | We introduce a new structured kernel interpolation (SKI) framework, which
generalises and unifies inducing point methods for scalable Gaussian processes
(GPs). SKI methods produce kernel approximations for fast computations through
kernel interpolation. The SKI framework clarifies how the quality of an
inducing point approach depends on the number of inducing (aka interpolation)
points, interpolation strategy, and GP covariance kernel. SKI also provides a
mechanism to create new scalable kernel methods, through choosing different
kernel interpolation strategies. Using SKI, with local cubic kernel
interpolation, we introduce KISS-GP, which is 1) more scalable than inducing
point alternatives, 2) naturally enables Kronecker and Toeplitz algebra for
substantial additional gains in scalability, without requiring any grid data,
and 3) can be used for fast and expressive kernel learning. KISS-GP costs O(n)
time and storage for GP inference. We evaluate KISS-GP for kernel matrix
approximation, kernel learning, and natural sound modelling.
| Andrew Gordon Wilson, Hannes Nickisch | null | 1503.01057 | null | null |
A Meta-Analysis of the Anomaly Detection Problem | cs.AI cs.LG stat.ML | This article provides a thorough meta-analysis of the anomaly detection
problem. To accomplish this we first identify approaches to benchmarking
anomaly detection algorithms across the literature and produce a large corpus
of anomaly detection benchmarks that vary in their construction across several
dimensions we deem important to real-world applications: (a) point difficulty,
(b) relative frequency of anomalies, (c) clusteredness of anomalies, and (d)
relevance of features. We apply a representative set of anomaly detection
algorithms to this corpus, yielding a very large collection of experimental
results. We analyze these results to understand many phenomena observed in
previous work. First we observe the effects of experimental design on
experimental results. Second, results are evaluated with two metrics, ROC Area
Under the Curve and Average Precision. We employ statistical hypothesis testing
to demonstrate the value (or lack thereof) of our benchmarks. We then offer
several approaches to summarizing our experimental results, drawing several
conclusions about the impact of our methodology as well as the strengths and
weaknesses of some algorithms. Last, we compare results against a trivial
solution as an alternate means of normalizing the reported performance of
algorithms. The intended contributions of this article are many; in addition to
providing a large publicly-available corpus of anomaly detection benchmarks, we
provide an ontology for describing anomaly detection contexts, a methodology
for controlling various aspects of benchmark creation, guidelines for future
experimental design and a discussion of the many potential pitfalls of trying
to measure success in this field.
| Andrew Emmott, Shubhomoy Das, Thomas Dietterich, Alan Fern and
Weng-Keen Wong | null | 1503.01158 | null | null |
The Bayesian Case Model: A Generative Approach for Case-Based Reasoning
and Prototype Classification | stat.ML cs.LG | We present the Bayesian Case Model (BCM), a general framework for Bayesian
case-based reasoning (CBR) and prototype classification and clustering. BCM
brings the intuitive power of CBR to a Bayesian generative framework. The BCM
learns prototypes, the "quintessential" observations that best represent
clusters in a dataset, by performing joint inference on cluster labels,
prototypes and important features. Simultaneously, BCM pursues sparsity by
learning subspaces, the sets of features that play important roles in the
characterization of the prototypes. The prototype and subspace representation
provides quantitative benefits in interpretability while preserving
classification accuracy. Human subject experiments verify statistically
significant improvements to participants' understanding when using explanations
produced by BCM, compared to those given by prior art.
| Been Kim, Cynthia Rudin and Julie Shah | null | 1503.01161 | null | null |
A General Hybrid Clustering Technique | stat.ML cs.LG | Here, we propose a clustering technique for general clustering problems
including those that have non-convex clusters. For a given desired number of
clusters $K$, we use three stages to find a clustering. The first stage uses a
hybrid clustering technique to produce a series of clusterings of various sizes
(randomly selected). The key steps are to find a $K$-means clustering using
$K_\ell$ clusters, where $K_\ell \gg K$, and then to join these small clusters
using single linkage clustering. The second stage stabilizes the result of
stage one by reclustering via the `membership matrix' under Hamming distance to
generate a dendrogram. The third stage is to cut the dendrogram to get $K^*$
clusters where $K^* \geq K$ and then prune back to $K$ to give a final
clustering. A variant on our technique also gives a reasonable estimate for
$K_T$, the true number of clusters.
We provide a series of arguments to justify the steps in the stages of our
methods and we provide numerous examples involving real and simulated data to
compare our technique with other related techniques.
| Saeid Amiri, Bertrand Clarke, Jennifer Clarke and Hoyt A. Koepke | null | 1503.01183 | null | null |
Statistical modality tagging from rule-based annotations and
crowdsourcing | cs.CL cs.LG stat.ML | We explore training an automatic modality tagger. Modality is the attitude
that a speaker might have toward an event or state. One of the main hurdles for
training a linguistic tagger is gathering training data. This is particularly
problematic for training a tagger for modality because modality triggers are
sparse for the overwhelming majority of sentences. We investigate an approach
to automatically training a modality tagger where we first gathered sentences
based on a high-recall simple rule-based modality tagger and then provided
these sentences to Mechanical Turk annotators for further annotation. We used
the resulting set of training data to train a precise modality tagger using a
multi-class SVM that delivers good performance.
| Vinodkumar Prabhakaran, Michael Bloodgood, Mona Diab, Bonnie Dorr,
Lori Levin, Christine D. Piatko, Owen Rambow and Benjamin Van Durme | null | 1503.01190 | null | null |
Hierarchies of Relaxations for Online Prediction Problems with Evolving
Constraints | cs.LG cs.DS stat.ML | We study online prediction where regret of the algorithm is measured against
a benchmark defined via evolving constraints. This framework captures online
prediction on graphs, as well as other prediction problems with combinatorial
structure. A key aspect here is that finding the optimal benchmark predictor
(even in hindsight, given all the data) might be computationally hard due to
the combinatorial nature of the constraints. Despite this, we provide
polynomial-time \emph{prediction} algorithms that achieve low regret against
combinatorial benchmark sets. We do so by building improper learning algorithms
based on two ideas that work together. The first is to alleviate part of the
computational burden through random playout, and the second is to employ
Lasserre semidefinite hierarchies to approximate the resulting integer program.
Interestingly, for our prediction algorithms, we only need to compute the
values of the semidefinite programs and not the rounded solutions. However, the
integrality gap for Lasserre hierarchy \emph{does} enter the generic regret
bound in terms of Rademacher complexity of the benchmark set. This establishes
a trade-off between the computation time and the regret bound of the algorithm.
| Alexander Rakhlin, Karthik Sridharan | null | 1503.01212 | null | null |
Bethe Learning of Conditional Random Fields via MAP Decoding | cs.LG cs.CV stat.ML | Many machine learning tasks can be formulated in terms of predicting
structured outputs. In frameworks such as the structured support vector machine
(SVM-Struct) and the structured perceptron, discriminative functions are
learned by iteratively applying efficient maximum a posteriori (MAP) decoding.
However, maximum likelihood estimation (MLE) of probabilistic models over these
same structured spaces requires computing partition functions, which is
generally intractable. This paper presents a method for learning discrete
exponential family models using the Bethe approximation to the MLE. Remarkably,
this problem also reduces to iterative (MAP) decoding. This connection emerges
by combining the Bethe approximation with a Frank-Wolfe (FW) algorithm on a
convex dual objective which circumvents the intractable partition function. The
result is a new single loop algorithm MLE-Struct, which is substantially more
efficient than previous double-loop methods for approximate maximum likelihood
estimation. Our algorithm outperforms existing methods in experiments involving
image segmentation, matching problems from vision, and a new dataset of
university roommate assignments.
| Kui Tang, Nicholas Ruozzi, David Belanger, Tony Jebara | null | 1503.01228 | null | null |
Joint Active Learning with Feature Selection via CUR Matrix
Decomposition | cs.LG | This paper presents an unsupervised learning approach for simultaneous sample
and feature selection, which is in contrast to existing works which mainly
tackle these two problems separately. In fact, the two tasks are often
interleaved with each other: noisy and high-dimensional features will have an
adverse effect on sample selection, while informative or representative samples
will be beneficial to feature selection. Specifically, we propose a framework
to jointly conduct active learning and feature selection based on the CUR
matrix decomposition. From the data reconstruction perspective, both the
selected samples and features can best approximate the original dataset
respectively, such that the selected samples characterized by the features are
highly representative. In particular, our method runs in one shot without the
procedure of iterative sample selection for progressive labeling. Thus, our
model is especially suitable when there are few labeled samples or even in the
absence of supervision, which is a particular challenge for existing methods.
As the joint learning problem is NP-hard, the proposed formulation involves a
convex but non-smooth optimization problem. We solve it efficiently by an
iterative algorithm, and prove its global convergence. Experimental results on
publicly available datasets corroborate the efficacy of our method compared
with the state-of-the-art.
| Changsheng Li and Xiangfeng Wang and Weishan Dong and Junchi Yan and
Qingshan Liu and Hongyuan Zha | null | 1503.01239 | null | null |
Bethe Projections for Non-Local Inference | stat.ML cs.CL cs.LG | Many inference problems in structured prediction are naturally solved by
augmenting a tractable dependency structure with complex, non-local auxiliary
objectives. This includes the mean field family of variational inference
algorithms, soft- or hard-constrained inference using Lagrangian relaxation or
linear programming, collective graphical models, and forms of semi-supervised
learning such as posterior regularization. We present a method to
discriminatively learn broad families of inference objectives, capturing
powerful non-local statistics of the latent variables, while maintaining
tractable and provably fast inference using non-Euclidean projected gradient
descent with a distance-generating function given by the Bethe entropy. We
demonstrate the performance and flexibility of our method by (1) extracting
structured citations from research papers by learning soft global constraints,
(2) achieving state-of-the-art results on a widely-used handwriting recognition
task using a novel learned non-convex inference procedure, and (3) providing a
fast and highly scalable algorithm for the challenging problem of inference in
a collective graphical model applied to bird migration.
| Luke Vilnis and David Belanger and Daniel Sheldon and Andrew McCallum | null | 1503.01397 | null | null |
Probabilistic Label Relation Graphs with Ising Models | cs.LG | We consider classification problems in which the label space has structure. A
common example is hierarchical label spaces, corresponding to the case where
one label subsumes another (e.g., animal subsumes dog). But labels can also be
mutually exclusive (e.g., dog vs cat) or unrelated (e.g., furry, carnivore). To
jointly model hierarchy and exclusion relations, the notion of a HEX (hierarchy
and exclusion) graph was introduced in [7]. This combined a conditional random
field (CRF) with a deep neural network (DNN), resulting in state of the art
results when applied to visual object classification problems where the
training labels were drawn from different levels of the ImageNet hierarchy
(e.g., an image might be labeled with the basic level category "dog", rather
than the more specific label "husky"). In this paper, we extend the HEX model
to allow for soft or probabilistic relations between labels, which is useful
when there is uncertainty about the relationship between two labels (e.g., an
antelope is "sort of" furry, but not to the same degree as a grizzly bear). We
call our new model pHEX, for probabilistic HEX. We show that the pHEX graph can
be converted to an Ising model, which allows us to use existing off-the-shelf
inference methods (in contrast to the HEX method, which needed specialized
inference algorithms). Experimental results show significant improvements in a
number of large-scale visual object classification tasks, outperforming the
previous HEX model.
| Nan Ding and Jia Deng and Kevin Murphy and Hartmut Neven | null | 1503.01428 | null | null |
Class Probability Estimation via Differential Geometric Regularization | cs.LG cs.CG stat.ML | We study the problem of supervised learning for both binary and multiclass
classification from a unified geometric perspective. In particular, we propose
a geometric regularization technique to find the submanifold corresponding to a
robust estimator of the class probability $P(y|\pmb{x})$. The regularization
term measures the volume of this submanifold, based on the intuition that
overfitting produces rapid local oscillations and hence large volume of the
estimator. This technique can be applied to regularize any classification
function that satisfies two requirements: firstly, an estimator of the class
probability can be obtained; secondly, first and second derivatives of the
class probability estimator can be calculated. In experiments, we apply our
regularization technique to standard loss functions for classification; our
RBF-based implementation compares favorably to widely used regularization
methods for both binary and multiclass classification.
| Qinxun Bai, Steven Rosenberg, Zheng Wu, Stan Sclaroff | null | 1503.01436 | null | null |
Partial Sum Minimization of Singular Values in Robust PCA: Algorithm and
Applications | cs.CV cs.AI cs.LG | Robust Principal Component Analysis (RPCA) via rank minimization is a
powerful tool for recovering underlying low-rank structure of clean data
corrupted with sparse noise/outliers. In many low-level vision problems, not
only is it known that the underlying structure of clean data is low-rank, but
the exact rank of clean data is also known. Yet, when applying conventional
rank minimization for those problems, the objective function is formulated in a
way that does not fully utilize a priori target rank information about the
problems. This observation motivates us to investigate whether there is a
better alternative solution when using rank minimization. In this paper,
instead of minimizing the nuclear norm, we propose to minimize the partial sum
of singular values, which implicitly encourages the target rank constraint. Our
experimental analyses show that, when the number of samples is deficient, our
approach leads to a higher success rate than conventional rank minimization,
while the solutions obtained by the two approaches are almost identical when
the number of samples is more than sufficient. We apply our approach to various
low-level vision problems, e.g. high dynamic range imaging, motion edge
detection, photometric stereo, image alignment and recovery, and show that our
results outperform those obtained by the conventional nuclear norm rank
minimization method.
| Tae-Hyun Oh, Yu-Wing Tai, Jean-Charles Bazin, Hyeongwoo Kim, In So
Kweon | null | 1503.01444 | null | null |
Toxicity Prediction using Deep Learning | stat.ML cs.LG cs.NE q-bio.BM | Everyday we are exposed to various chemicals via food additives, cleaning and
cosmetic products and medicines -- and some of them might be toxic. However,
testing the toxicity of all existing compounds by biological experiments is
neither financially nor logistically feasible. Therefore, the government
agencies NIH, EPA and FDA launched the Tox21 Data Challenge within the
"Toxicology in the 21st Century" (Tox21) initiative. The goal of this challenge
was to assess the performance of computational methods in predicting the
toxicity of chemical compounds. State of the art toxicity prediction methods
build upon specifically-designed chemical descriptors developed over decades.
Though Deep Learning is new to the field and was never applied to toxicity
prediction before, it clearly outperformed all other participating methods. In
this application paper we show that deep nets automatically learn features
resembling well-established toxicophores. In total, our Deep Learning approach
won both of the panel-challenges (nuclear receptors and stress response) as
well as the overall Grand Challenge, and thereby sets a new standard in tox
prediction.
| Thomas Unterthiner, Andreas Mayr, G\"unter Klambauer, Sepp Hochreiter | null | 1503.01445 | null | null |
Jointly Learning Multiple Measures of Similarities from Triplet
Comparisons | stat.ML cs.AI cs.CV cs.LG | Similarity between objects is multi-faceted and it can be easier for human
annotators to measure it when the focus is on a specific aspect. We consider
the problem of mapping objects into view-specific embeddings where the distance
between them is consistent with the similarity comparisons of the form "from
the t-th view, object A is more similar to B than to C". Our framework jointly
learns view-specific embeddings exploiting correlations between views.
Experiments on a number of datasets, including one of multi-view crowdsourced
comparison on bird images, show the proposed method achieves lower triplet
generalization error when compared to both learning embeddings independently
for each view and all views pooled into one view. Our method can also be used
to learn multiple measures of similarity over input features taking class
labels into account and compares favorably to existing approaches for
multi-task metric learning on the ISOLET dataset.
| Liwen Zhang, Subhransu Maji, Ryota Tomioka | null | 1503.01521 | null | null |
Scalable Iterative Algorithm for Robust Subspace Clustering | cs.DS cs.LG | Subspace clustering (SC) is a popular method for dimensionality reduction of
high-dimensional data, where it generalizes Principal Component Analysis (PCA).
Recently, several methods have been proposed to enhance the robustness of PCA
and SC, while most of them are computationally very expensive, in particular,
for high dimensional large-scale data. In this paper, we develop much faster
iterative algorithms for SC, incorporating robustness using a {\em non-squared}
$\ell_2$-norm objective. The known implementations for optimizing the objective
would be costly due to the alternative optimization of two separate objectives:
optimal cluster-membership assignment and robust subspace selection, while the
substitution of one process with a faster surrogate can cause failure in
convergence. To address the issue, we use a simplified procedure requiring
efficient matrix-vector multiplications for subspace update instead of solving
an expensive eigenvector problem at each iteration, in addition to releasing
nested robust PCA loops. We prove that the proposed algorithm monotonically
converges to a local minimum with approximation guarantees, e.g., it achieves
2-approximation for the robust PCA objective. In our experiments, the proposed
algorithm is shown to converge an order of magnitude faster than known
algorithms optimizing the same objective, and outperforms prior subspace
clustering methods in accuracy and running time on the MNIST dataset.
| Sanghyuk Chun, Yung-Kyun Noh, Jinwoo Shin | null | 1503.01578 | null | null |
Large-Scale Distributed Bayesian Matrix Factorization using Stochastic
Gradient MCMC | cs.LG stat.ML | Despite having various attractive qualities such as high prediction accuracy
and the ability to quantify uncertainty and avoid over-fitting, Bayesian Matrix
Factorization has not been widely adopted because of the prohibitive cost of
inference. In this paper, we propose a scalable distributed Bayesian matrix
factorization algorithm using stochastic gradient MCMC. Our algorithm, based on
Distributed Stochastic Gradient Langevin Dynamics, can not only match the
prediction accuracy of standard MCMC methods like Gibbs sampling, but at the
same time is as fast and simple as stochastic gradient descent. In our
experiments, we show that our algorithm can achieve the same level of
prediction accuracy as Gibbs sampling an order of magnitude faster. We also
show that our method reduces the prediction error as fast as distributed
stochastic gradient descent, achieving a 4.1% improvement in RMSE for the
Netflix dataset and a 1.8% improvement for the Yahoo music dataset.
| Sungjin Ahn, Anoop Korattikara, Nathan Liu, Suju Rajan, Max Welling | null | 1503.01596 | null | null |
High Dimensional Bayesian Optimisation and Bandits via Additive Models | stat.ML cs.LG | Bayesian Optimisation (BO) is a technique used in optimising a
$D$-dimensional function which is typically expensive to evaluate. While there
have been many successes for BO in low dimensions, scaling it to high
dimensions has been notoriously difficult. Existing literature on the topic are
under very restrictive settings. In this paper, we identify two key challenges
in this endeavour. We tackle these challenges by assuming an additive structure
for the function. This setting is substantially more expressive and contains a
richer class of functions than previous work. We prove that, for additive
functions, the regret has only linear dependence on $D$ even though the function
depends on all $D$ dimensions. We also demonstrate several other statistical
and computational benefits in our framework. Via synthetic examples, a
scientific simulation and a face detection problem we demonstrate that our
method outperforms naive BO on additive functions and on several examples where
the function is not additive.
| Kirthevasan Kandasamy, Jeff Schneider, Barnabas Poczos | null | 1503.01673 | null | null |
Min-Max Kernels | stat.ML cs.LG stat.CO | The min-max kernel is a generalization of the popular resemblance kernel
(which is designed for binary data). In this paper, we demonstrate, through an
extensive classification study using kernel machines, that the min-max kernel
often provides an effective measure of similarity for nonnegative data. As the
min-max kernel is nonlinear and might be difficult to use for industrial
applications with massive data, we show that the min-max kernel can be
linearized via hashing techniques. This allows practitioners to apply the
min-max kernel to large-scale applications using well-matured linear
algorithms such as
linear SVM or logistic regression.
The previous remarkable work on consistent weighted sampling (CWS) produces
samples in the form of ($i^*, t^*$) where the $i^*$ records the location (and
in fact also the weights) information analogous to the samples produced by
classical minwise hashing on binary data. Because the $t^*$ is theoretically
unbounded, it was not immediately clear how to effectively implement CWS for
building large-scale linear classifiers. In this paper, we provide a simple
solution by discarding $t^*$ (which we refer to as the "0-bit" scheme). Via an
extensive empirical study, we show that this 0-bit scheme does not lose
essential information. We then apply the "0-bit" CWS for building linear
classifiers to approximate min-max kernel classifiers, as extensively validated
on a wide range of publicly available classification datasets. We expect this
work will generate interest among data mining practitioners who would like to
efficiently utilize the nonlinear information of non-binary and nonnegative
data.
| Ping Li | null | 1503.01737 | null | null |
Correct-by-synthesis reinforcement learning with temporal logic
constraints | cs.LO cs.GT cs.LG cs.SY | We consider a problem on the synthesis of reactive controllers that optimize
some a priori unknown performance criterion while interacting with an
uncontrolled environment such that the system satisfies a given temporal logic
specification. We decouple the problem into two subproblems. First, we extract
a (maximally) permissive strategy for the system, which encodes multiple
(possibly all) ways in which the system can react to the adversarial
environment and satisfy the specifications. Then, we quantify the a priori
unknown performance criterion as a (still unknown) reward function and compute
an optimal strategy for the system within the operating envelope allowed by the
permissive strategy by using the so-called maximin-Q learning algorithm. We
establish both correctness (with respect to the temporal logic specifications)
and optimality (with respect to the a priori unknown performance criterion) of
this two-step technique for a fragment of temporal logic specifications. For
specifications beyond this fragment, correctness can still be preserved, but
the learned strategy may be sub-optimal. We present an algorithm for the overall
problem, and demonstrate its use and computational requirements on a set of
robot motion planning examples.
| Min Wen, Ruediger Ehlers, Ufuk Topcu | null | 1503.01793 | null | null |
EmoNets: Multimodal deep learning approaches for emotion recognition in
video | cs.LG cs.CV | The task of the emotion recognition in the wild (EmotiW) Challenge is to
assign one of seven emotions to short video clips extracted from Hollywood
style movies. The videos depict acted-out emotions under realistic conditions
with a large degree of variation in attributes such as pose and illumination,
making it worthwhile to explore approaches which consider combinations of
features from multiple modalities for label assignment. In this paper we
present our approach to learning several specialist models using deep learning
techniques, each focusing on one modality. Among these are a convolutional
neural network, focusing on capturing visual information in detected faces, a
deep belief net focusing on the representation of the audio stream, a K-Means
based "bag-of-mouths" model, which extracts visual features around the mouth
region and a relational autoencoder, which addresses spatio-temporal aspects of
videos. We explore multiple methods for the combination of cues from these
modalities into one common classifier. This achieves a considerably greater
accuracy than predictions from our strongest single-modality classifier. Our
method was the winning submission in the 2013 EmotiW challenge and achieved a
test set accuracy of 47.67% on the 2014 dataset.
| Samira Ebrahimi Kahou, Xavier Bouthillier, Pascal Lamblin, Caglar
Gulcehre, Vincent Michalski, Kishore Konda, S\'ebastien Jean, Pierre
Froumenty, Yann Dauphin, Nicolas Boulanger-Lewandowski, Raul Chandias
Ferrari, Mehdi Mirza, David Warde-Farley, Aaron Courville, Pascal Vincent,
Roland Memisevic, Christopher Pal, Yoshua Bengio | null | 1503.01800 | null | null |
Optimally Combining Classifiers Using Unlabeled Data | cs.LG stat.ML | We develop a worst-case analysis of aggregation of classifier ensembles for
binary classification. The task of predicting to minimize error is formulated
as a game played over a given set of unlabeled data (a transductive setting),
where prior label information is encoded as constraints on the game. The
minimax solution of this game identifies cases where a weighted combination of
the classifiers can perform significantly better than any single classifier.
| Akshay Balsubramani, Yoav Freund | null | 1503.01811 | null | null |
Latent Hierarchical Model for Activity Recognition | cs.RO cs.AI cs.CV cs.LG | We present a novel hierarchical model for human activity recognition. In
contrast to approaches that successively recognize actions and activities, our
approach jointly models actions and activities in a unified framework, and
their labels are simultaneously predicted. The model is embedded with a latent
layer that is able to capture a richer class of contextual information in both
state-state and observation-state pairs. Although loops are present in the
model, the model has an overall linear-chain structure, where the exact
inference is tractable. Therefore, the model is very efficient in both
inference and learning. The parameters of the graphical model are learned with
a Structured Support Vector Machine (Structured-SVM). A data-driven approach is
used to initialize the latent variables; therefore, no manual labeling for the
latent states is required. The experimental results from using two benchmark
datasets show that our model outperforms the state-of-the-art approach, and our
model is computationally more efficient.
| Ninghang Hu, Gwenn Englebienne, Zhongyu Lou, and Ben Kr\"ose | null | 1503.01820 | null | null |
Deep Clustered Convolutional Kernels | cs.LG cs.NE | Deep neural networks have recently achieved state of the art performance
thanks to new training algorithms for rapid parameter estimation and new
regularization methods to reduce overfitting. However, in practice the network
architecture has to be manually set by domain experts, generally by a costly
trial and error procedure, which often accounts for a large portion of the
final system performance. We view this as a limitation and propose a novel
training algorithm that automatically optimizes network architecture, by
progressively increasing model complexity and then eliminating model redundancy
by selectively removing parameters at training time. For convolutional neural
networks, our method relies on iterative split/merge clustering of
convolutional kernels interleaved by stochastic gradient descent. We present a
training algorithm and experimental results on three different vision tasks,
showing improved performance compared to similarly sized hand-crafted
architectures.
| Minyoung Kim, Luca Rigazio | null | 1503.01824 | null | null |
Encoding Source Language with Convolutional Neural Network for Machine
Translation | cs.CL cs.LG cs.NE | The recently proposed neural network joint model (NNJM) (Devlin et al., 2014)
augments the n-gram target language model with a heuristically chosen source
context window, achieving state-of-the-art performance in SMT. In this paper,
we give a more systematic treatment by summarizing the relevant source
information through a convolutional architecture guided by the target
information. With different guiding signals during decoding, our specifically
designed convolution+gating architectures can pinpoint the parts of a source
sentence that are relevant to predicting a target word, and fuse them with the
context of the entire source sentence to form a unified representation. This
representation, together with target language words, is fed to a deep neural
network (DNN) to form a stronger NNJM. Experiments on two NIST Chinese-English
translation tasks show that the proposed model can achieve significant
improvements over the previous NNJM by up to +1.08 BLEU points on average.
| Fandong Meng and Zhengdong Lu and Mingxuan Wang and Hang Li and Wenbin
Jiang and Qun Liu | null | 1503.01838 | null | null |
Ranking and significance of variable-length similarity-based time series
motifs | cs.LG | The detection of very similar patterns in a time series, commonly called
motifs, has received continuous and increasing attention from diverse
scientific communities. In particular, recent approaches for discovering
similar motifs of different lengths have been proposed. In this work, we show
that such variable-length similarity-based motifs cannot be directly compared,
and hence ranked, by their normalized dissimilarities. Specifically, we find
that length-normalized motif dissimilarities still have intrinsic dependencies
on the motif length, and that lowest dissimilarities are particularly affected
by this dependency. Moreover, we find that such dependencies are generally
non-linear and change with the considered data set and dissimilarity measure.
Based on these findings, we propose a solution to rank those motifs and measure
their significance. This solution relies on a compact but accurate model of the
dissimilarity space, using a beta distribution with three parameters that
depend on the motif length in a non-linear way. We believe the incomparability
of variable-length dissimilarities could go beyond the field of time series,
and that similar modeling strategies as the one used here could be of help in a
more broad context.
| Joan Serr\`a, Isabel Serra, \'Alvaro Corral and Josep Lluis Arcos | 10.1016/j.eswa.2016.02.026 | 1503.01883 | null | null |
Sequential Relevance Maximization with Binary Feedback | cs.LG cs.AI | Motivated by online settings where users can provide explicit feedback about
the relevance of products that are sequentially presented to them, we look at
the recommendation process as a problem of dynamically optimizing this
relevance feedback. Such an algorithm optimizes the fine tradeoff between
presenting the products that are most likely to be relevant, and learning the
preferences of the user so that more relevant recommendations can be made in
the future.
We assume a standard predictive model inspired by collaborative filtering, in
which a user is sampled from a distribution over a set of possible types. For
every product category, each type has an associated relevance feedback that is
assumed to be binary: the category is either relevant or irrelevant. Assuming
that the user stays for each additional recommendation opportunity with
probability $\beta$ independent of the past, the problem is to find a policy
that maximizes the expected number of recommendations that are deemed relevant
in a session.
We analyze this problem and prove key structural properties of the optimal
policy. Based on these properties, we first present an algorithm that strikes a
balance between recursion and dynamic programming to compute this policy. We
further propose and analyze two heuristic policies: a `farsighted' greedy
policy that attains at least a $1-\beta$ factor of the optimal payoff, and a
naive greedy policy that attains at least a $\frac{1-\beta}{1+\beta}$ factor of
the optimal payoff in the worst case. Extensive simulations show that these
heuristics are very close to optimal in practice.
| Vijay Kamble, Nadia Fawaz, Fernando Silveira | null | 1503.01910 | null | null |
Hamiltonian ABC | stat.ML cs.LG q-bio.QM | Approximate Bayesian computation (ABC) is a powerful and elegant framework
for performing inference in simulation-based models. However, due to the
difficulty in scaling likelihood estimates, ABC remains useful for relatively
low-dimensional problems. We introduce Hamiltonian ABC (HABC), a set of
likelihood-free algorithms that apply recent advances in scaling Bayesian
learning using Hamiltonian Monte Carlo (HMC) and stochastic gradients. We find
that a small number of forward simulations can effectively approximate the ABC
gradient, allowing Hamiltonian dynamics to efficiently traverse parameter
spaces. We also describe a new simple yet general approach of incorporating
random seeds into the state of the Markov chain, further reducing the random
walk behavior of HABC. We demonstrate HABC on several typical ABC problems, and
show that HABC samples comparably to regular Bayesian inference using true
gradients on a high-dimensional problem from machine learning.
| Edward Meeds, Robert Leenders, and Max Welling | null | 1503.01916 | null | null |
To Drop or Not to Drop: Robustness, Consistency and Differential Privacy
Properties of Dropout | cs.LG cs.NE stat.ML | Training deep belief networks (DBNs) requires optimizing a non-convex
function with an extremely large number of parameters. Naturally, existing
gradient descent (GD) based methods are prone to arbitrarily poor local minima.
In this paper, we rigorously show that such local minima can be avoided (up to
an approximation error) by using the dropout technique, a widely used heuristic
in this domain. In particular, we show that by randomly dropping a few nodes of
a one-hidden layer neural network, the training objective function, up to a
certain approximation error, decreases by a multiplicative factor.
On the flip side, we show that for training convex empirical risk minimizers
(ERM), dropout in fact acts as a "stabilizer" or regularizer. That is, a simple
dropout based GD method for convex ERMs is stable in the face of arbitrary
changes to any one of the training points. Using the above assertion, we show
that dropout provides fast rates for generalization error in learning (convex)
generalized linear models (GLM). Moreover, using the above mentioned stability
properties of dropout, we design dropout based differentially private
algorithms for solving ERMs. The learned GLM thus, preserves privacy of each of
the individual training points while providing accurate predictions for new
test points. Finally, we empirically validate our stability assertions for
dropout in the context of convex ERMs and show that surprisingly, dropout
significantly outperforms (in terms of prediction accuracy) the L2
regularization based methods for several benchmark datasets.
| Prateek Jain, Vivek Kulkarni, Abhradeep Thakurta, Oliver Williams | null | 1503.02031 | null | null |
Escaping From Saddle Points --- Online Stochastic Gradient for Tensor
Decomposition | cs.LG math.OC stat.ML | We analyze stochastic gradient descent for optimizing non-convex functions.
In many cases for non-convex functions the goal is to find a reasonable local
minimum, and the main concern is that gradient updates are trapped in saddle
points. In this paper we identify a strict saddle property for non-convex
problems that allows for efficient optimization. Using this property we show that
stochastic gradient descent converges to a local minimum in a polynomial number
of iterations. To the best of our knowledge this is the first work that gives
global convergence guarantees for stochastic gradient descent on non-convex
functions with exponentially many local minima and saddle points. Our analysis
can be applied to orthogonal tensor decomposition, which is widely used in
learning a rich class of latent variable models. We propose a new optimization
formulation for the tensor decomposition problem that has strict saddle
property. As a result we get the first online algorithm for orthogonal tensor
decomposition with global convergence guarantee.
| Rong Ge, Furong Huang, Chi Jin, Yang Yuan | null | 1503.02101 | null | null |
Maximum a Posteriori Adaptation of Network Parameters in Deep Models | cs.LG cs.CL cs.NE | We present a Bayesian approach to adapting parameters of a well-trained
context-dependent, deep-neural-network, hidden Markov model (CD-DNN-HMM) to
improve automatic speech recognition performance. Given an abundance of DNN
parameters but with only a limited amount of data, the effectiveness of the
adapted DNN model can often be compromised. We formulate maximum a posteriori
(MAP) adaptation of parameters of a specially designed CD-DNN-HMM with an
augmented linear hidden network connected to the output tied states, or
senones, and compare it to feature space MAP linear regression previously
proposed. Experimental evidence on the 20,000-word open vocabulary Wall Street
Journal task demonstrate the feasibility of the proposed framework. In
supervised adaptation, the proposed MAP adaptation approach provides more than
10% relative error reduction and consistently outperforms the conventional
transformation based methods. Furthermore, we present an initial attempt to
generate hierarchical priors to improve adaptation efficiency and effectiveness
with limited adaptation data by exploiting similarities among senones.
| Zhen Huang, Sabato Marco Siniscalchi, I-Fan Chen, Jiadong Wu, and
Chin-Hui Lee | null | 1503.02108 | null | null |
Exact Hybrid Covariance Thresholding for Joint Graphical Lasso | cs.LG cs.AI stat.ML | This paper considers the problem of estimating multiple related Gaussian
graphical models from a $p$-dimensional dataset consisting of different
classes. Our work is based upon the formulation of this problem as group
graphical lasso. This paper proposes a novel hybrid covariance thresholding
algorithm that can effectively identify zero entries in the precision matrices
and split a large joint graphical lasso problem into small subproblems. Our
hybrid covariance thresholding method is superior to existing uniform
thresholding methods in that our method can split the precision matrix of each
individual class using different partition schemes and thus split group
graphical lasso into much smaller subproblems, each of which can be solved very
fast. In addition, this paper establishes necessary and sufficient conditions
for our hybrid covariance thresholding algorithm. The superior performance of
our thresholding method is thoroughly analyzed and illustrated by a few
experiments on simulated data and real gene expression data.
| Qingming Tang, Chao Yang, Jian Peng and Jinbo Xu | null | 1503.02128 | null | null |
Learning Scale-Free Networks by Dynamic Node-Specific Degree Prior | cs.LG cs.AI stat.ML | Learning the network structure underlying data is an important problem in
machine learning. This paper introduces a novel prior to study the inference of
scale-free networks, which are widely used to model social and biological
networks. The prior not only favors a desirable global node degree
distribution, but also takes into consideration the relative strength of all
the possible edges adjacent to the same node and the estimated degree of each
individual node.
To fulfill this, ranking is incorporated into the prior, which makes the
problem challenging to solve. We employ an ADMM (alternating direction method
of multipliers) framework to solve the Gaussian Graphical model regularized by
this prior. Our experiments on both synthetic and real data show that our prior
not only yields a scale-free network, but also produces many more correctly
predicted edges than the others such as the scale-free inducing prior, the
hub-inducing prior and the $l_1$ norm.
| Qingming Tang, Siqi Sun, and Jinbo Xu | null | 1503.02129 | null | null |
Model selection of polynomial kernel regression | cs.LG | Polynomial kernel regression is one of the standard and state-of-the-art
learning strategies. However, as is well known, the choices of the degree of
polynomial kernel and the regularization parameter are still open in the realm
of model selection. The first aim of this paper is to develop a strategy to
select these parameters. On one hand, based on the worst-case learning rate
analysis, we show that the regularization term in polynomial kernel regression
is not necessary. In other words, the regularization parameter can decrease
arbitrarily fast when the degree of the polynomial kernel is suitably tuned. On
the other hand, taking account of the implementation of the algorithm, the
regularization term is required. In summary, the effect of the regularization
term in polynomial kernel regression is only to circumvent the "ill-conditioning"
of the kernel matrix. Based on this, the second purpose of this paper is to
propose a new model selection strategy, and then design an efficient learning
algorithm. Both theoretical and experimental analysis show that the new
strategy outperforms the previous one. Theoretically, we prove that the new
learning strategy is almost optimal if the regression function is smooth.
Experimentally, it is shown that the new strategy can significantly reduce the
computational burden without loss of generalization capability.
| Shaobo Lin, Xingping Sun, Zongben Xu, Jinshan Zeng | null | 1503.02143 | null | null |
Sparse Bayesian Dictionary Learning with a Gaussian Hierarchical Model | cs.LG cs.IT math.IT | We consider a dictionary learning problem whose objective is to design a
dictionary such that the signals admit a sparse or an approximately sparse
representation over the learned dictionary. Such a problem finds a variety of
applications such as image denoising, feature extraction, etc. In this paper,
we propose a new hierarchical Bayesian model for dictionary learning, in which
a Gaussian-inverse Gamma hierarchical prior is used to promote the sparsity of
the representation. Suitable priors are also placed on the dictionary and the
noise variance such that they can be reasonably inferred from the data. Based
on the hierarchical model, a variational Bayesian method and a Gibbs sampling
method are developed for Bayesian inference. The proposed methods have the
advantage that they do not require the knowledge of the noise variance \emph{a
priori}. Numerical results show that the proposed methods are able to learn the
dictionary with an accuracy better than existing methods, particularly for the
case where there is a limited number of training signals.
| Linxiao Yang, Jun Fang, Hong Cheng, and Hongbin Li | null | 1503.02144 | null | null |
A Nonconvex Approach for Structured Sparse Learning | cs.IT cs.LG math.IT | Sparse learning is an important topic in many areas such as machine learning,
statistical estimation, signal processing, etc. Recently, there has been
growing interest in structured sparse learning. In this paper we focus on the
$\ell_q$-analysis optimization problem for structured sparse learning ($0< q
\leq 1$). Compared to previous work, we establish weaker conditions for exact
recovery in noiseless case and a tighter non-asymptotic upper bound of estimate
error in noisy case. We further prove that the nonconvex $\ell_q$-analysis
optimization can do recovery with a lower sample complexity and in a wider
range of cosparsity than its convex counterpart. In addition, we develop an
iteratively reweighted method to solve the optimization problem under the
variational framework. Theoretical analysis shows that our method is capable of
pursuing a local minimum close to the global minimum. Also, empirical results of
preliminary computational experiments illustrate that our nonconvex method
outperforms both its convex counterpart and other state-of-the-art methods.
| Shubao Zhang and Hui Qian and Zhihua Zhang | null | 1503.02164 | null | null |
Label optimal regret bounds for online local learning | cs.LG | We resolve an open question from (Christiano, 2014b) posed in COLT'14
regarding the optimal dependency of the regret achievable for online local
learning on the size of the label set. In this framework the algorithm is shown
a pair of items at each step, chosen from a set of $n$ items. The learner then
predicts a label for each item, from a label set of size $L$ and receives a
real valued payoff. This is a natural framework which captures many interesting
scenarios such as collaborative filtering, online gambling, and online max cut
among others. (Christiano, 2014a) designed an efficient online learning
algorithm for this problem achieving a regret of $O(\sqrt{nL^3T})$, where $T$
is the number of rounds. Information theoretically, one can achieve a regret of
$O(\sqrt{n \log L T})$. One of the main open questions left in this framework
concerns closing the above gap.
In this work, we provide a complete answer to the question above via two main
results. We show, via a tighter analysis, that the semi-definite programming
based algorithm of (Christiano, 2014a), in fact achieves a regret of
$O(\sqrt{nLT})$. Second, we show a matching computational lower bound. Namely,
we show that a polynomial time algorithm for online local learning with lower
regret would imply a polynomial time algorithm for the planted clique problem
which is widely believed to be hard. We prove a similar hardness result under a
related conjecture concerning planted dense subgraphs that we put forth. Unlike
planted clique, the planted dense subgraph problem does not have any known
quasi-polynomial time algorithms.
Computational lower bounds for online learning are relatively rare, and we
hope that the ideas developed in this work will lead to lower bounds for other
online learning scenarios as well.
| Pranjal Awasthi, Moses Charikar, Kevin A. Lai, Andrej Risteski | null | 1503.02193 | null | null |
Higher order Matching Pursuit for Low Rank Tensor Learning | stat.ML cs.LG math.OC | Low rank tensor learning, such as tensor completion and multilinear multitask
learning, has received much attention in recent years. In this paper, we
propose higher order matching pursuit for low rank tensor learning problems
with a convex or a nonconvex cost function, which is a generalization of the
matching pursuit type methods. At each iteration, the main cost of the proposed
methods is only to compute a rank-one tensor, which can be done efficiently,
making the proposed methods scalable to large-scale problems. Moreover, storing
the resulting rank-one tensors requires little memory, which can help to
break the curse of dimensionality. The linear convergence rate of the proposed
methods is established in various circumstances. Along with the main methods,
we also provide a method of low computational complexity for approximately
computing the rank-one tensors, with provable approximation ratio, which helps
to improve the efficiency of the main methods and to analyze the convergence
rate. Experimental results on synthetic as well as real datasets verify the
efficiency and effectiveness of the proposed methods.
| Yuning Yang, Siamak Mehrkanoon and Johan A.K. Suykens | null | 1503.02216 | null | null |
Financial Market Prediction | cs.CE cs.LG | Given financial data from popular sites like Yahoo and the London Exchange,
the presented paper attempts to model and predict stocks that can be considered
"good investments". Stocks are characterized by 125 features ranging from gross
domestic product to EBITDA, and are labeled by discrepancies between stock and
market price returns. An artificial neural network (Self-Organizing Map) is
trained on more than a million data points to predict "good
investments" given testing stocks from 2013 and after.
| Mike Wu | null | 1503.02328 | null | null |
One Scan 1-Bit Compressed Sensing | stat.ME cs.IT cs.LG math.IT | Based on $\alpha$-stable random projections with small $\alpha$, we develop a
simple algorithm for compressed sensing (sparse signal recovery) by utilizing
only the signs (i.e., 1-bit) of the measurements. Using only 1-bit information
of the measurements results in substantial cost reduction in collection,
storage, communication, and decoding for compressed sensing. The proposed
algorithm is efficient in that the decoding procedure requires only one scan of
the coordinates. Our analysis can precisely show that, for a $K$-sparse signal
of length $N$, $12.3K\log N/\delta$ measurements (where $\delta$ is the
confidence) would be sufficient for recovering the support and the signs of the
signal. While the method is very robust against typical measurement noises, we
also provide the analysis of the scheme under random flipping of the signs of
the measurements.
\noindent Compared to the well-known work on 1-bit marginal regression (which
can also be viewed as a one-scan method), the proposed algorithm requires
orders of magnitude fewer measurements. Compared to 1-bit Iterative Hard
Thresholding (IHT) (which is not a one-scan algorithm), our method is still
significantly more accurate. Furthermore, the proposed method is reasonably
robust against random sign flipping while IHT is known to be very sensitive to
this type of noise.
| Ping Li | null | 1503.02346 | null | null |
Fully Connected Deep Structured Networks | cs.CV cs.LG | Convolutional neural networks with many layers have recently been shown to
achieve excellent results on many high-level tasks such as image
classification, object detection and more recently also semantic segmentation.
Particularly for semantic segmentation, a two-stage procedure is often
employed. Hereby, convolutional networks are trained to provide good local
pixel-wise features for the second step being traditionally a more global
graphical model. In this work we unify this two-stage process into a single
joint training algorithm. We demonstrate our method on the semantic image
segmentation task and show encouraging results on the challenging PASCAL VOC
2012 dataset.
| Alexander G. Schwing and Raquel Urtasun | null | 1503.02351 | null | null |
Context-Dependent Translation Selection Using Convolutional Neural
Network | cs.CL cs.LG cs.NE | We propose a novel method for translation selection in statistical machine
translation, in which a convolutional neural network is employed to judge the
similarity between a phrase pair in two languages. The specifically designed
convolutional architecture encodes not only the semantic similarity of the
translation pair, but also the context containing the phrase in the source
language. Therefore, our approach is able to capture context-dependent semantic
similarities of translation pairs. We adopt a curriculum learning strategy to
train the model: we classify the training examples into easy, medium, and
difficult categories, and gradually build the ability of representing phrase
and sentence level context by using training examples from easy to difficult.
Experimental results show that our approach significantly outperforms the
baseline system by up to 1.4 BLEU points.
| Zhaopeng Tu, Baotian Hu, Zhengdong Lu, and Hang Li | null | 1503.02357 | null | null |
Learning Co-Sparse Analysis Operators with Separable Structures | cs.LG stat.ML | In the co-sparse analysis model a set of filters is applied to a signal out
of the signal class of interest yielding sparse filter responses. As such, it
may serve as a prior in inverse problems, or for structural analysis of signals
that are known to belong to the signal class. The more the model is adapted to
the class, the more reliable it is for these purposes. The task of learning
such operators for a given class is therefore a crucial problem. In many
applications, it is also required that the filter responses are obtained in a
timely manner, which can be achieved by filters with a separable structure. Not
only can operators of this sort be efficiently used for computing the filter
responses, but they also have the advantage that less training samples are
required to obtain a reliable estimate of the operator. The first contribution
of this work is to give theoretical evidence for this claim by providing an
upper bound for the sample complexity of the learning process. The second is a
stochastic gradient descent (SGD) method designed to learn an analysis operator
with separable structures, which includes a novel and efficient step size
selection rule. Numerical experiments are provided that link the sample
complexity to the convergence speed of the SGD algorithm.
| Matthias Seibert, Julian W\"ormann, R\'emi Gribonval, Martin
Kleinsteuber | 10.1109/TSP.2015.2481875 | 1503.02398 | null | null |
Deep Learning and the Information Bottleneck Principle | cs.LG | Deep Neural Networks (DNNs) are analyzed via the theoretical framework of the
information bottleneck (IB) principle. We first show that any DNN can be
quantified by the mutual information between the layers and the input and
output variables. Using this representation we can calculate the optimal
information theoretic limits of the DNN and obtain finite sample generalization
bounds. The advantage of getting closer to the theoretical limit is
quantifiable both by the generalization bound and by the network's simplicity.
We argue that both the optimal architecture, number of layers and
features/connections at each layer, are related to the bifurcation points of
the information bottleneck tradeoff, namely, relevant compression of the input
layer with respect to the output layer. The hierarchical representations of the
layered network naturally correspond to the structural phase transitions along
the information curve. We believe that this new insight can lead to new
optimality bounds and deep learning algorithms.
| Naftali Tishby and Noga Zaslavsky | null | 1503.02406 | null | null |
Structured Prediction of Sequences and Trees using Infinite Contexts | cs.LG cs.CL | Linguistic structures exhibit a rich array of global phenomena; however,
commonly used Markov models are unable to adequately describe these phenomena
due to their strong locality assumptions. We propose a novel hierarchical model
for structured prediction over sequences and trees which exploits global
context by conditioning each generation decision on an unbounded context of
prior decisions. This builds on the success of Markov models but without
imposing a fixed bound in order to better represent global phenomena. To
facilitate learning of this large and unbounded model, we use a hierarchical
Pitman-Yor process prior which provides a recursive form of smoothing. We
propose prediction algorithms based on A* and Markov Chain Monte Carlo
sampling. Empirical results demonstrate the potential of our model compared to
baseline finite-context Markov models on part-of-speech tagging and syntactic
parsing.
| Ehsan Shareghi, Gholamreza Haffari, Trevor Cohn, Ann Nicholson | null | 1503.02417 | null | null |
Syntax-based Deep Matching of Short Texts | cs.CL cs.LG cs.NE | Many tasks in natural language processing, ranging from machine translation
to question answering, can be reduced to the problem of matching two sentences
or more generally two short texts. We propose a new approach to the problem,
called Deep Match Tree (DeepMatch$_{tree}$), under a general setting. The
approach consists of two components, 1) a mining algorithm to discover patterns
for matching two short-texts, defined in the product space of dependency trees,
and 2) a deep neural network for matching short texts using the mined patterns,
as well as a learning algorithm to build the network having a sparse structure.
We test our algorithm on the problem of matching a tweet and a response in
social media, a hard matching problem proposed in [Wang et al., 2013], and show
that DeepMatch$_{tree}$ can outperform a number of competitor models including
one without using dependency trees and one based on word-embedding, all with
large margins.
| Mingxuan Wang and Zhengdong Lu and Hang Li and Qun Liu | null | 1503.02427 | null | null |
Compositional Distributional Semantics with Long Short Term Memory | cs.CL cs.AI cs.LG | We propose an extension of the recursive neural network that makes use
of a variant of the long short-term memory architecture. The extension allows
information low in parse trees to be stored in a memory register (the `memory
cell') and used much later higher up in the parse tree. This provides a
solution to the vanishing gradient problem and allows the network to capture
long range dependencies. Experimental results show that our composition
outperformed the traditional neural-network composition on the Stanford
Sentiment Treebank.
| Phong Le and Willem Zuidema | null | 1503.02510 | null | null |
Distilling the Knowledge in a Neural Network | stat.ML cs.LG cs.NE | A very simple way to improve the performance of almost any machine learning
algorithm is to train many different models on the same data and then to
average their predictions. Unfortunately, making predictions using a whole
ensemble of models is cumbersome and may be too computationally expensive to
allow deployment to a large number of users, especially if the individual
models are large neural nets. Caruana and his collaborators have shown that it
is possible to compress the knowledge in an ensemble into a single model which
is much easier to deploy and we develop this approach further using a different
compression technique. We achieve some surprising results on MNIST and we show
that we can significantly improve the acoustic model of a heavily used
commercial system by distilling the knowledge in an ensemble of models into a
single model. We also introduce a new type of ensemble composed of one or more
full models and many specialist models which learn to distinguish fine-grained
classes that the full models confuse. Unlike a mixture of experts, these
specialist models can be trained rapidly and in parallel.
| Geoffrey Hinton, Oriol Vinyals, Jeff Dean | null | 1503.02531 | null | null |
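A minimal NumPy sketch of the temperature-softened distillation objective commonly associated with this approach is given below; the temperature T, the weighting alpha, and the toy logits are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; T > 1 softens the distribution.
    z = logits / T
    z -= z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Weighted sum of (a) cross-entropy with the hard labels and
    (b) cross-entropy with the teacher's temperature-softened outputs."""
    p_teacher = softmax(teacher_logits, T)
    p_student_T = softmax(student_logits, T)
    p_student = softmax(student_logits, 1.0)
    n = student_logits.shape[0]
    hard = -np.log(p_student[np.arange(n), labels] + 1e-12).mean()
    soft = -(p_teacher * np.log(p_student_T + 1e-12)).sum(axis=1).mean()
    # The T**2 factor keeps the soft-target gradients comparable across temperatures.
    return alpha * hard + (1 - alpha) * (T ** 2) * soft

# Toy usage with random logits for a 10-class problem.
rng = np.random.default_rng(0)
loss = distillation_loss(rng.normal(size=(8, 10)), rng.normal(size=(8, 10)),
                         rng.integers(0, 10, size=8))
```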
Kernel-Based Just-In-Time Learning for Passing Expectation Propagation
Messages | stat.ML cs.LG | We propose an efficient nonparametric strategy for learning a message
operator in expectation propagation (EP), which takes as input the set of
incoming messages to a factor node, and produces an outgoing message as output.
This learned operator replaces the multivariate integral required in classical
EP, which may not have an analytic expression. We use kernel-based regression,
which is trained on a set of probability distributions representing the
incoming messages, and the associated outgoing messages. The kernel approach
has two main advantages: first, it is fast, as it is implemented using a novel
two-layer random feature representation of the input message distributions;
second, it has principled uncertainty estimates, and can be cheaply updated
online, meaning it can request and incorporate new training data when it
encounters inputs on which it is uncertain. In experiments, our approach is
able to solve learning problems where a single message operator is required for
multiple, substantially different data sets (logistic regression for a variety
of classification problems), where it is essential to accurately assess
uncertainty and to efficiently and robustly update the message operator.
| Wittawat Jitkrittum, Arthur Gretton, Nicolas Heess, S. M. Ali Eslami,
Balaji Lakshminarayanan, Dino Sejdinovic, Zolt\'an Szab\'o | null | 1503.02551 | null | null |
Modeling State-Conditional Observation Distribution using Weighted
Stereo Samples for Factorial Speech Processing Models | cs.LG cs.AI cs.SD | This paper investigates the effectiveness of factorial speech processing
models in noise-robust automatic speech recognition tasks. For this purpose,
the paper proposes an idealistic approach for modeling state-conditional
observation distribution of factorial models based on weighted stereo samples.
This approach extends previous single-pass retraining for ideal model
compensation to support multiple audio sources.
Non-stationary noises can be considered as one of these audio sources with
multiple states. Experiments of this paper over the set A of the Aurora 2
dataset show that recognition performance can be improved by this
consideration. The improvement is significant in low signal to noise energy
conditions, up to 4% absolute word recognition accuracy. In addition to the
power of the proposed method in accurate representation of state-conditional
observation distribution, it has an important advantage over previous methods
by providing the opportunity to independently select feature spaces for both
source and corrupted features. This opens a new window for seeking better
feature spaces appropriate for noisy speech, independent from clean speech
features.
| Mahdi Khademian, Mohammad Mehdi Homayounpour | 10.1007/s00034-016-0310-y | 1503.02578 | null | null |
A Characterization of Deterministic Sampling Patterns for Low-Rank
Matrix Completion | stat.ML cs.LG math.AG | Low-rank matrix completion (LRMC) problems arise in a wide variety of
applications. Previous theory mainly provides conditions for completion under
missing-at-random samplings. This paper studies deterministic conditions for
completion. An incomplete $d \times N$ matrix is finitely rank-$r$ completable
if there are at most finitely many rank-$r$ matrices that agree with all its
observed entries. Finite completability is the tipping point in LRMC, as a few
additional samples of a finitely completable matrix guarantee its unique
completability. The main contribution of this paper is a deterministic sampling
condition for finite completability. We use this to also derive deterministic
sampling conditions for unique completability that can be efficiently verified.
We also show that under uniform random sampling schemes, these conditions are
satisfied with high probability if $O(\max\{r,\log d\})$ entries per column are
observed. These findings have several implications on LRMC regarding lower
bounds, sample and computational complexity, the role of coherence, adaptive
settings and the validation of any completion algorithm. We complement our
theoretical results with experiments that support our findings and motivate
future analysis of uncharted sampling regimes.
| Daniel L. Pimentel-Alarc\'on, Nigel Boston, Robert D. Nowak | 10.1109/JSTSP.2016.2537145 | 1503.02596 | null | null |
An Adaptive Online HDP-HMM for Segmentation and Classification of
Sequential Data | stat.ML cs.LG | In the recent years, the desire and need to understand sequential data has
been increasing, with particular interest in sequential contexts such as
patient monitoring, understanding daily activities, video surveillance, stock
market and the like. Along with the constant flow of data, it is critical to
classify and segment the observations on-the-fly, without being limited to a
rigid number of classes. In addition, the model needs to be capable of updating
its parameters to comply with possible evolutions. This interesting problem,
however, is not adequately addressed in the literature since many studies focus
on offline classification over a pre-defined class set. In this paper, we
propose a principled solution to this gap by introducing an adaptive online
system based on Markov switching models with hierarchical Dirichlet process
priors. This infinite adaptive online approach is capable of segmenting and
classifying the sequential data over unlimited number of classes, while meeting
the memory and delay constraints of streaming contexts. The model is further
enhanced by introducing a learning rate, responsible for balancing the extent
to which the model sustains its previous learning (parameters) or adapts to the
new streaming observations. Experimental results on several variants of
stationary and evolving synthetic data and two video datasets, TUM Assistive
Kitchen and collated Weizmann, show remarkable performance in segmentation and
classification, particularly for evolutionary sequences with changing
distributions and/or containing new, unseen classes.
| Ava Bargi, Richard Yi Da Xu, Massimo Piccardi | null | 1503.02761 | null | null |
Scalable Nuclear-norm Minimization by Subspace Pursuit Proximal
Riemannian Gradient | cs.LG cs.NA | Nuclear-norm regularization plays a vital role in many learning tasks, such
as low-rank matrix recovery (MR), and low-rank representation (LRR). Solving
this problem directly can be computationally expensive due to the unknown rank
of variables or large-rank singular value decompositions (SVDs). To address
this, we propose a proximal Riemannian gradient (PRG) scheme which can
efficiently solve trace-norm regularized problems defined on real-algebraic
variety $\mathcal{M}_{\le r}$ of real matrices of rank at most $r$. Based on PRG, we further
present a simple and novel subspace pursuit (SP) paradigm for general
trace-norm regularized problems without the explicit rank constraint $\mathcal{M}_{\le r}$.
The proposed paradigm is very scalable by avoiding large-rank SVDs. Empirical
studies on several tasks, such as matrix completion and LRR based subspace
clustering, demonstrate the superiority of the proposed paradigms over existing
methods.
| Mingkui Tan and Shijie Xiao and Junbin Gao and Dong Xu and Anton Van
Den Hengel and Qinfeng Shi | null | 1503.02828 | null | null |
Single stream parallelization of generalized LSTM-like RNNs on a GPU | cs.NE cs.LG | Recurrent neural networks (RNNs) have shown outstanding performance on
processing sequence data. However, they suffer from long training time, which
demands parallel implementations of the training procedure. Parallelization of
the training algorithms for RNNs is very challenging because internal
recurrent paths form dependencies between two different time frames. In this
paper, we first propose a generalized graph-based RNN structure that covers the
most popular long short-term memory (LSTM) network. Then, we present a
parallelization approach that automatically explores parallelisms of arbitrary
RNNs by analyzing the graph structure. The experimental results show that the
proposed approach achieves great speed-up even with a single training stream, and
further accelerates the training when combined with multiple parallel training
streams.
| Kyuyeon Hwang and Wonyong Sung | 10.1109/ICASSP.2015.7178129 | 1503.02852 | null | null |
apsis - Framework for Automated Optimization of Machine Learning Hyper
Parameters | cs.LG | The apsis toolkit presented in this paper provides a flexible framework for
hyperparameter optimization and includes both random search and a Bayesian
optimizer. It is implemented in Python and its architecture features
adaptability to any desired machine learning code. It can easily be used with
common Python ML frameworks such as scikit-learn. It is published under the MIT
License, and other researchers are encouraged to check out the code, contribute,
or raise suggestions. The code can be found at
github.com/FrederikDiehl/apsis.
| Frederik Diehl, Andreas Jauch | null | 1503.02946 | null | null |
L_1-regularized Boltzmann machine learning using majorizer minimization | stat.ML cond-mat.dis-nn cs.LG | We propose an inference method to estimate sparse interactions and biases
according to Boltzmann machine learning. The basis of this method is $L_1$
regularization, which is often used in compressed sensing, a technique for
reconstructing sparse input signals from undersampled outputs. $L_1$
regularization impedes the simple application of the gradient method, which
optimizes the cost function that leads to accurate estimations, owing to the
cost function's lack of smoothness. In this study, we utilize the majorizer
minimization method, which is a well-known technique implemented in
optimization problems, to avoid the non-smoothness of the cost function. By
using the majorizer minimization method, we elucidate essentially relevant
biases and interactions from given data with seemingly strongly-correlated
components.
| Masayuki Ohzeki | 10.7566/JPSJ.84.054801 | 1503.03132 | null | null |
A Neurodynamical System for finding a Minimal VC Dimension Classifier | cs.LG stat.ML | The recently proposed Minimal Complexity Machine (MCM) finds a hyperplane
classifier by minimizing an exact bound on the Vapnik-Chervonenkis (VC)
dimension. The VC dimension measures the capacity of a learning machine, and a
smaller VC dimension leads to improved generalization. On many benchmark
datasets, the MCM generalizes better than SVMs and uses far fewer support
vectors than the number used by SVMs. In this paper, we describe a neural
network based on a linear dynamical system, that converges to the MCM solution.
The proposed MCM dynamical system is conducive to an analogue circuit
implementation on a chip or simulation using Ordinary Differential Equation
(ODE) solvers. Numerical experiments on benchmark datasets from the UCI
repository show that the proposed approach is scalable and accurate, as we
obtain improved accuracies and fewer support vectors (up to 74.3%
reduction) with the MCM dynamical system.
| Jayadeva, Sumit Soman, Amit Bhaya | 10.1016/j.neunet.2020.08.013 | 1503.03148 | null | null |
Learning Classifiers from Synthetic Data Using a Multichannel
Autoencoder | cs.CV cs.LG | We propose a method for using synthetic data to help learning classifiers.
Synthetic data, even if generated based on real data, normally results in a
shift from the distribution of real data in feature space. To bridge the gap
between the real and synthetic data, and jointly learn from synthetic and real
data, this paper proposes a Multichannel Autoencoder (MCAE). We show that by
using MCAE, it is possible to learn a better feature representation for
classification. To evaluate the proposed approach, we conduct experiments on
two types of datasets. Experimental results on two datasets validate the
efficiency of our MCAE model and our methodology of generating synthetic data.
| Xi Zhang, Yanwei Fu, Andi Zang, Leonid Sigal, Gady Agam | null | 1503.03163 | null | null |
Deep Convolutional Inverse Graphics Network | cs.CV cs.GR cs.LG cs.NE | This paper presents the Deep Convolution Inverse Graphics Network (DC-IGN), a
model that learns an interpretable representation of images. This
representation is disentangled with respect to transformations such as
out-of-plane rotations and lighting variations. The DC-IGN model is composed of
multiple layers of convolution and de-convolution operators and is trained
using the Stochastic Gradient Variational Bayes (SGVB) algorithm. We propose a
training procedure to encourage neurons in the graphics code layer to represent
a specific transformation (e.g. pose or light). Given a single input image, our
model can generate new images of the same object with variations in pose and
lighting. We present qualitative and quantitative results of the model's
efficacy at learning a 3D rendering engine.
| Tejas D. Kulkarni, Will Whitney, Pushmeet Kohli, Joshua B. Tenenbaum | null | 1503.03167 | null | null |
Scalable Discovery of Time-Series Shapelets | cs.LG | Time-series classification is an important problem for the data mining
community due to the wide range of application domains involving time-series
data. A recent paradigm, called shapelets, represents patterns that are highly
predictive for the target variable. Shapelets are discovered by measuring the
prediction accuracy of a set of potential (shapelet) candidates. The candidates
typically consist of all the segments of a dataset, therefore, the discovery of
shapelets is computationally expensive. This paper proposes a novel method that
avoids measuring the prediction accuracy of similar candidates in Euclidean
distance space, through an online clustering pruning technique. In addition,
our algorithm incorporates a supervised shapelet selection that filters out
only those candidates that improve classification accuracy. Empirical evidence
on 45 datasets from the UCR collection demonstrates that our method is 3-4
orders of magnitude faster than the fastest existing shapelet-discovery
method, while providing better prediction accuracy.
| Josif Grabocka, Martin Wistuba, Lars Schmidt-Thieme | null | 1503.03238 | null | null |
Convolutional Neural Network Architectures for Matching Natural Language
Sentences | cs.CL cs.LG cs.NE | Semantic matching is of central importance to many natural language tasks
\cite{bordes2014semantic,RetrievalQA}. A successful matching algorithm needs to
adequately model the internal structures of language objects and the
interaction between them. As a step toward this goal, we propose convolutional
neural network models for matching two sentences, by adapting the convolutional
strategy in vision and speech. The proposed models not only nicely represent
the hierarchical structures of sentences with their layer-by-layer composition
and pooling, but also capture the rich matching patterns at different levels.
Our models are rather generic, requiring no prior knowledge on language, and
can hence be applied to matching tasks of different nature and in different
languages. The empirical study on a variety of matching tasks demonstrates the
efficacy of the proposed model and its
superiority to competitor models.
| Baotian Hu, Zhengdong Lu, Hang Li, Qingcai Chen | null | 1503.03244 | null | null |
Automatic Unsupervised Tensor Mining with Quality Assessment | stat.ML cs.LG cs.NA stat.AP | A popular tool for unsupervised modelling and mining multi-aspect data is
tensor decomposition. In an exploratory setting, where no labels or ground
truth are available, how can we automatically decide how many components to
extract? How can we assess the quality of our results, so that a domain expert
can factor this quality measure in the interpretation of our results? In this
paper, we introduce AutoTen, a novel automatic unsupervised tensor mining
algorithm with minimal user intervention, which leverages and improves upon
heuristics that assess the result quality. We extensively evaluate AutoTen's
performance on synthetic data, outperforming existing baselines on this very
hard problem. Finally, we apply AutoTen on a variety of real datasets,
providing insights and discoveries. We view this work as a step towards a fully
automated, unsupervised tensor mining tool that can be easily adopted by
practitioners in academia and industry.
| Evangelos E. Papalexakis | null | 1503.03355 | null | null |
A mathematical motivation for complex-valued convolutional networks | cs.LG cs.NE stat.ML | A complex-valued convolutional network (convnet) implements the repeated
application of the following composition of three operations, recursively
applying the composition to an input vector of nonnegative real numbers: (1)
convolution with complex-valued vectors followed by (2) taking the absolute
value of every entry of the resulting vectors followed by (3) local averaging.
For processing real-valued random vectors, complex-valued convnets can be
viewed as "data-driven multiscale windowed power spectra," "data-driven
multiscale windowed absolute spectra," "data-driven multiwavelet absolute
values," or (in their most general configuration) "data-driven nonlinear
multiwavelet packets." Indeed, complex-valued convnets can calculate multiscale
windowed spectra when the convnet filters are windowed complex-valued
exponentials. Standard real-valued convnets, using rectified linear units
(ReLUs), sigmoidal (for example, logistic or tanh) nonlinearities, max.
pooling, etc., do not obviously exhibit the same exact correspondence with
data-driven wavelets (whereas for complex-valued convnets, the correspondence
is much more than just a vague analogy). Courtesy of the exact correspondence,
the remarkably rich and rigorous body of mathematical analysis for wavelets
applies directly to (complex-valued) convnets.
| Joan Bruna, Soumith Chintala, Yann LeCun, Serkan Piantino, Arthur
Szlam, and Mark Tygert | null | 1503.03438 | null | null |
Estimating the Mean Number of K-Means Clusters to Form | cs.LG | Utilizing the sample size of a dataset, the random cluster model is employed
in order to derive an estimate of the mean number of K-Means clusters to form
during classification of a dataset.
| Robert A. Murphy | null | 1503.03488 | null | null |
Diverse Landmark Sampling from Determinantal Point Processes for
Scalable Manifold Learning | cs.LG cs.AI cs.CV | High computational costs of manifold learning prohibit its application for
large point sets. A common strategy to overcome this problem is to perform
dimensionality reduction on selected landmarks and to successively embed the
entire dataset with the Nystr\"om method. The two main challenges that arise
are: (i) the landmarks selected in non-Euclidean geometries must result in a
low reconstruction error, (ii) the graph constructed from sparsely sampled
landmarks must approximate the manifold well. We propose the sampling of
landmarks from determinantal distributions on non-Euclidean spaces. Since
current determinantal sampling algorithms have the same complexity as those for
manifold learning, we present an efficient approximation running in linear
time. Further, we recover the local geometry after the sparsification by
assigning each landmark a local covariance matrix, estimated from the original
point set. The resulting neighborhood selection based on the Bhattacharyya
distance improves the embedding of sparsely sampled manifolds. Our experiments
show a significant performance improvement compared to state-of-the-art
landmark selection techniques.
| Christian Wachinger and Polina Golland | null | 1503.03506 | null | null |
Switching to Learn | cs.LG math.OC stat.ML | A network of agents attempt to learn some unknown state of the world drawn by
nature from a finite set. Agents observe private signals conditioned on the
true state, and form beliefs about the unknown state accordingly. Each agent
may face an identification problem in the sense that she cannot distinguish the
truth in isolation. However, by communicating with each other, agents are able
to benefit from side observations to learn the truth collectively. Unlike many
distributed algorithms which rely on all-time communication protocols, we
propose an efficient method by switching between Bayesian and non-Bayesian
regimes. In this model, agents exchange information only when their private
signals are not informative enough; thence, by switching between the two
regimes, agents efficiently learn the truth using only a few rounds of
communications. The proposed algorithm preserves learnability while incurring a
lower communication cost. We also verify our theoretical findings by simulation
examples.
| Shahin Shahrampour, Mohammad Amin Rahimian, Ali Jadbabaie | null | 1503.03517 | null | null |
Training Binary Multilayer Neural Networks for Image Classification
using Expectation Backpropagation | cs.NE cs.CV cs.LG | Compared to Multilayer Neural Networks with real weights, Binary Multilayer
Neural Networks (BMNNs) can be implemented more efficiently on dedicated
hardware. BMNNs have been demonstrated to be effective on binary classification
tasks with the Expectation BackPropagation (EBP) algorithm on high dimensional text
datasets. In this paper, we investigate the capability of BMNNs using the EBP
algorithm on multiclass image classification tasks. The performances of binary
neural networks with multiple hidden layers and different numbers of hidden
units are examined on MNIST. We also explore the effectiveness of image spatial
filters and the dropout technique in BMNNs. Experimental results on MNIST
dataset show that EBP can obtain 2.12% test error with binary weights and 1.66%
test error with real weights, which is comparable to the results of standard
BackPropagation algorithm on fully connected MNNs.
| Zhiyong Cheng, Daniel Soudry, Zexi Mao, Zhenzhong Lan | null | 1503.03562 | null | null |
LINE: Large-scale Information Network Embedding | cs.LG | This paper studies the problem of embedding very large information networks
into low-dimensional vector spaces, which is useful in many tasks such as
visualization, node classification, and link prediction. Most existing graph
embedding methods do not scale for real world information networks which
usually contain millions of nodes. In this paper, we propose a novel network
embedding method called the "LINE," which is suitable for arbitrary types of
information networks: undirected, directed, and/or weighted. The method
optimizes a carefully designed objective function that preserves both the local
and global network structures. An edge-sampling algorithm is proposed that
addresses the limitation of the classical stochastic gradient descent and
improves both the effectiveness and the efficiency of the inference. Empirical
experiments prove the effectiveness of the LINE on a variety of real-world
information networks, including language networks, social networks, and
citation networks. The algorithm is very efficient, which is able to learn the
embedding of a network with millions of vertices and billions of edges in a few
hours on a typical single machine. The source code of the LINE is available
online.
| Jian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, Qiaozhu Mei | 10.1145/2736277.2741093 | 1503.03578 | null | null |
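The edge-sampling idea can be illustrated with a toy first-order-proximity trainer: sample an edge, treat its endpoint as a positive target, draw a few random negative nodes, and take a stochastic gradient step on the embeddings. This is a simplified sketch; edge weights, the alias sampling table, and the second-order objective of the full method are omitted.

```python
import numpy as np

def line_first_order(edges, num_nodes, dim=8, epochs=5, neg=5, lr=0.025, seed=0):
    """Toy edge-sampling SGD for first-order proximity with negative sampling.
    `edges` is a list of (i, j) node pairs."""
    rng = np.random.default_rng(seed)
    emb = rng.normal(scale=0.1, size=(num_nodes, dim))
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    for _ in range(epochs):
        for _ in range(len(edges)):
            i, j = edges[rng.integers(len(edges))]            # sample an edge
            targets = [(j, 1.0)] + [(int(rng.integers(num_nodes)), 0.0)
                                    for _ in range(neg)]       # negatives
            grad_i = np.zeros(dim)
            for t, label in targets:
                score = sigmoid(emb[i] @ emb[t])
                g = label - score                              # log-likelihood gradient
                grad_i += g * emb[t]
                emb[t] += lr * g * emb[i]
            emb[i] += lr * grad_i
    return emb

# Tiny toy graph with four nodes.
emb = line_first_order([(0, 1), (1, 2), (2, 0), (2, 3)], num_nodes=4)
```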
Deep Unsupervised Learning using Nonequilibrium Thermodynamics | cs.LG cond-mat.dis-nn q-bio.NC stat.ML | A central problem in machine learning involves modeling complex data-sets
using highly flexible families of probability distributions in which learning,
sampling, inference, and evaluation are still analytically or computationally
tractable. Here, we develop an approach that simultaneously achieves both
flexibility and tractability. The essential idea, inspired by non-equilibrium
statistical physics, is to systematically and slowly destroy structure in a
data distribution through an iterative forward diffusion process. We then learn
a reverse diffusion process that restores structure in data, yielding a highly
flexible and tractable generative model of the data. This approach allows us to
rapidly learn, sample from, and evaluate probabilities in deep generative
models with thousands of layers or time steps, as well as to compute
conditional and posterior probabilities under the learned model. We
additionally release an open source reference implementation of the algorithm.
| Jascha Sohl-Dickstein, Eric A. Weiss, Niru Maheswaranathan, Surya
Ganguli | null | 1503.03585 | null | null |
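The forward, structure-destroying half of the approach is easy to sketch: repeatedly blend the data with small amounts of Gaussian noise until it approaches an isotropic Gaussian. The learned reverse process, which is the substance of the paper, is not shown, and the variance schedule below is an illustrative assumption.

```python
import numpy as np

def forward_diffusion(x0, betas, rng):
    """Iteratively add Gaussian noise; after enough steps the data distribution
    is destroyed and approaches an isotropic Gaussian."""
    xs = [x0]
    x = x0
    for beta in betas:
        x = np.sqrt(1.0 - beta) * x + np.sqrt(beta) * rng.normal(size=x.shape)
        xs.append(x)
    return xs

rng = np.random.default_rng(0)
x0 = rng.normal(loc=3.0, scale=0.5, size=(1000, 2))       # toy 2-D "data"
trajectory = forward_diffusion(x0, betas=np.linspace(1e-4, 0.05, 1000), rng=rng)
```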
Efficient Learning of Linear Separators under Bounded Noise | cs.LG cs.CC | We study the learnability of linear separators in $\Re^d$ in the presence of
bounded (a.k.a Massart) noise. This is a realistic generalization of the random
classification noise model, where the adversary can flip each example $x$ with
probability $\eta(x) \leq \eta$. We provide the first polynomial time algorithm
that can learn linear separators to arbitrarily small excess error in this
noise model under the uniform distribution over the unit ball in $\Re^d$, for
some constant value of $\eta$. While widely studied in the statistical learning
theory community in the context of getting faster convergence rates,
computationally efficient algorithms in this model had remained elusive. Our
work provides the first evidence that one can indeed design algorithms
achieving arbitrarily small excess error in polynomial time under this
realistic noise model and thus opens up a new and exciting line of research.
We additionally provide lower bounds showing that popular algorithms such as
hinge loss minimization and averaging cannot lead to arbitrarily small excess
error under Massart noise, even under the uniform distribution. Our work
instead, makes use of a margin based technique developed in the context of
active learning. As a result, our algorithm is also an active learning
algorithm with label complexity that is only logarithmic in the desired excess
error $\epsilon$.
| Pranjal Awasthi, Maria-Florina Balcan, Nika Haghtalab, Ruth Urner | null | 1503.03594 | null | null |
On the Impossibility of Learning the Missing Mass | stat.ML cs.IT cs.LG math.IT math.PR math.ST stat.TH | This paper shows that one cannot learn the probability of rare events without
imposing further structural assumptions. The event of interest is that of
obtaining an outcome outside the coverage of an i.i.d. sample from a discrete
distribution. The probability of this event is referred to as the "missing
mass". The impossibility result can then be stated as: the missing mass is not
distribution-free PAC-learnable in relative error. The proof is
semi-constructive and relies on a coupling argument using a dithered geometric
distribution. This result formalizes the folklore that in order to predict rare
events, one necessarily needs distributions with "heavy tails".
| Elchanan Mossel and Mesrob I. Ohannessian | null | 1503.03613 | null | null |
Hierarchical learning of grids of microtopics | stat.ML cs.IR cs.LG | The counting grid is a grid of microtopics, sparse word/feature
distributions. The generative model associated with the grid does not use these
microtopics individually. Rather, it groups them in overlapping rectangular
windows and uses these grouped microtopics as either mixture or admixture
components. This paper builds upon the basic counting grid model and it shows
that hierarchical reasoning helps avoid bad local minima, produces better
classification accuracy and, most interestingly, allows for extraction of large
numbers of coherent microtopics even from small datasets. We evaluate this in
terms of consistency, diversity and clarity of the indexed content, as well as
in a user study on word intrusion tasks. We demonstrate that these models work
well as a technique for embedding raw images and discuss interesting parallels
between hierarchical CG models and other deep architectures.
| Nebojsa Jojic and Alessandro Perina and Dongwoo Kim | null | 1503.03701 | null | null |
On Graduated Optimization for Stochastic Non-Convex Problems | cs.LG math.OC | The graduated optimization approach, also known as the continuation method,
is a popular heuristic to solving non-convex problems that has received renewed
interest over the last decade. Despite its popularity, very little is known in
terms of theoretical convergence analysis. In this paper we describe a new
first-order algorithm based on graduated optimization and analyze its
performance. We characterize a parameterized family of non-convex functions
for which this algorithm provably converges to a global optimum. In particular,
we prove that the algorithm converges to an $\epsilon$-approximate solution
within $O(1/\epsilon^2)$ gradient-based steps. We extend our algorithm and
analysis to the setting of stochastic non-convex optimization with noisy
gradient feedback, attaining the same convergence rate. Additionally, we
discuss the setting of zero-order optimization, and devise a variant of our
algorithm which converges at a rate of $O(d^2/\epsilon^4)$.
| Elad Hazan, Kfir Y. Levy, Shai Shalev-Shwartz | null | 1503.03712 | null | null |
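A generic coarse-to-fine sketch of the continuation idea: optimize Gaussian-smoothed versions of the objective with a decreasing smoothing level, estimating the smoothed gradient by sampling. The schedule, estimator, and toy objective are illustrative and not the algorithm analyzed in the paper.

```python
import numpy as np

def smoothed_grad(f, x, sigma, rng, k=64):
    """Monte-Carlo estimate of the gradient of the Gaussian-smoothed objective
    f_sigma(x) = E_u[f(x + sigma*u)], via the antithetic identity
    grad f_sigma(x) = E_u[(f(x + sigma*u) - f(x - sigma*u)) * u / (2*sigma)]."""
    g = np.zeros_like(x)
    for _ in range(k):
        u = rng.normal(size=x.shape)
        g += (f(x + sigma * u) - f(x - sigma * u)) * u / (2.0 * sigma)
    return g / k

def graduated_descent(f, x0, sigmas=(2.0, 1.0, 0.5, 0.1), steps=200, lr=0.05, seed=0):
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for sigma in sigmas:                      # coarse-to-fine smoothing schedule
        for _ in range(steps):
            x -= lr * smoothed_grad(f, x, sigma, rng)
    return x

# Non-convex toy objective with many local minima around a global one near 0.
f = lambda x: float(np.sum(x ** 2) + 2.0 * np.sin(5 * x).sum())
x_star = graduated_descent(f, x0=[3.0, -2.5])
```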
Compact Nonlinear Maps and Circulant Extensions | stat.ML cs.LG | Kernel approximation via nonlinear random feature maps is widely used in
speeding up kernel machines. There are two main challenges for the conventional
kernel approximation methods. First, before performing kernel approximation, a
good kernel has to be chosen. Picking a good kernel is a very challenging
problem in itself. Second, high-dimensional maps are often required in order to
achieve good performance. This leads to high computational cost in both
generating the nonlinear maps, and in the subsequent learning and prediction
process. In this work, we propose to optimize the nonlinear maps directly with
respect to the classification objective in a data-dependent fashion. The
proposed approach achieves kernel approximation and kernel learning in a joint
framework. This leads to much more compact maps without hurting the
performance. As a by-product, the same framework can also be used to achieve
more compact kernel maps to approximate a known kernel. We also introduce
Circulant Nonlinear Maps, which uses a circulant-structured projection matrix
to speed up the nonlinear maps for high-dimensional data.
| Felix X. Yu, Sanjiv Kumar, Henry Rowley, Shih-Fu Chang | null | 1503.03893 | null | null |
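For context, the conventional data-independent map the paper improves upon, random Fourier features for the RBF kernel, is sketched below; the paper's contributions, optimizing the map for the classification objective and imposing circulant structure, are not shown here.

```python
import numpy as np

def rff_map(X, D=256, gamma=1.0, seed=0):
    """Random Fourier features approximating the RBF kernel
    k(x, y) = exp(-gamma * ||x - y||^2), so that z(x) @ z(y) ~= k(x, y)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, D))
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

X = np.random.default_rng(1).normal(size=(5, 3))
Z = rff_map(X)
approx_kernel = Z @ Z.T   # compare against exp(-gamma * squared distances)
```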
Approximating Sparse PCA from Incomplete Data | cs.LG cs.IT cs.NA math.IT stat.ML | We study how well one can recover sparse principal components of a data
matrix using a sketch formed from a few of its elements. We show that for a
wide class of optimization problems, if the sketch is close (in the spectral
norm) to the original data matrix, then one can recover a near optimal solution
to the optimization problem by using the sketch. In particular, we use this
approach to obtain sparse principal components and show that for $m$ data
points in $n$ dimensions, $O(\epsilon^{-2}\tilde k\max\{m,n\})$
elements gives an $\epsilon$-additive approximation to the sparse PCA
problem ($\tilde k$ is the stable rank of the data matrix). We demonstrate
our algorithms extensively on image, text, biological and financial data. The
results show that not only are we able to recover the sparse PCAs from the
incomplete data, but by using our sparse sketch, the running time drops by a
factor of five or more.
| Abhisek Kundu, Petros Drineas, Malik Magdon-Ismail | null | 1503.03903 | null | null |
Interactive Restless Multi-armed Bandit Game and Swarm Intelligence
Effect | cs.AI cs.LG physics.data-an stat.ML | We obtain the conditions for the emergence of the swarm intelligence effect
in an interactive game of restless multi-armed bandit (rMAB). A player competes
with multiple agents. Each bandit has a payoff that changes with a probability
$p_{c}$ per round. The agents and player choose one of three options: (1)
Exploit (a good bandit), (2) Innovate (asocial learning for a good bandit among
$n_{I}$ randomly chosen bandits), and (3) Observe (social learning for a good
bandit). Each agent has two parameters $(c,p_{obs})$ to specify the decision:
(i) $c$, the threshold value for Exploit, and (ii) $p_{obs}$, the probability
for Observe in learning. The parameters $(c,p_{obs})$ are uniformly
distributed. We determine the optimal strategies for the player using complete
knowledge about the rMAB. We show whether or not social or asocial learning is
more optimal in the $(p_{c},n_{I})$ space and define the swarm intelligence
effect. We conduct a laboratory experiment (67 subjects) and observe the swarm
intelligence effect only if $(p_{c},n_{I})$ are chosen so that social learning
is far more optimal than asocial learning.
| Shunsuke Yoshida, Masato Hisakado and Shintaro Mori | 10.1007/s00354-016-0306-y | 1503.03964 | null | null |
LSTM: A Search Space Odyssey | cs.NE cs.LG | Several variants of the Long Short-Term Memory (LSTM) architecture for
recurrent neural networks have been proposed since its inception in 1995. In
recent years, these networks have become the state-of-the-art models for a
variety of machine learning problems. This has led to a renewed interest in
understanding the role and utility of various computational components of
typical LSTM variants. In this paper, we present the first large-scale analysis
of eight LSTM variants on three representative tasks: speech recognition,
handwriting recognition, and polyphonic music modeling. The hyperparameters of
all LSTM variants for each task were optimized separately using random search,
and their importance was assessed using the powerful fANOVA framework. In
total, we summarize the results of 5400 experimental runs ($\approx 15$ years
of CPU time), which makes our study the largest of its kind on LSTM networks.
Our results show that none of the variants can improve upon the standard LSTM
architecture significantly, and demonstrate the forget gate and the output
activation function to be its most critical components. We further observe that
the studied hyperparameters are virtually independent and derive guidelines for
their efficient adjustment.
| Klaus Greff, Rupesh Kumar Srivastava, Jan Koutn\'ik, Bas R.
Steunebrink, J\"urgen Schmidhuber | 10.1109/TNNLS.2016.2582924 | 1503.04069 | null | null |
An Emphatic Approach to the Problem of Off-policy Temporal-Difference
Learning | cs.LG | In this paper we introduce the idea of improving the performance of
parametric temporal-difference (TD) learning algorithms by selectively
emphasizing or de-emphasizing their updates on different time steps. In
particular, we show that varying the emphasis of linear TD($\lambda$)'s updates
in a particular way causes its expected update to become stable under
off-policy training. The only prior model-free TD methods to achieve this with
per-step computation linear in the number of function approximation parameters
are the gradient-TD family of methods including TDC, GTD($\lambda$), and
GQ($\lambda$). Compared to these methods, our _emphatic TD($\lambda$)_ is
simpler and easier to use; it has only one learned parameter vector and one
step-size parameter. Our treatment includes general state-dependent discounting
and bootstrapping functions, and a way of specifying varying degrees of
interest in accurately valuing different states.
| Richard S. Sutton, A. Rupam Mahmood, Martha White | null | 1503.04269 | null | null |
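A sketch of one linear emphatic TD(lambda) update, written with constant discounting, bootstrapping, and interest for readability (the paper allows all of these to be state-dependent functions); rho denotes the importance-sampling ratio of the behaviour policy, and this is an illustrative rendering rather than a verbatim transcription of the paper's algorithm.

```python
import numpy as np

def emphatic_td_step(theta, e, F, phi, phi_next, reward, rho, rho_prev,
                     alpha=0.01, gamma=0.99, lam=0.9, interest=1.0):
    """One update with a linear value function v(s) = theta @ phi(s).
    rho is the importance-sampling ratio at the current step, rho_prev the
    ratio from the previous step."""
    delta = reward + gamma * (theta @ phi_next) - theta @ phi   # TD error
    F = rho_prev * gamma * F + interest                         # followon trace
    M = lam * interest + (1.0 - lam) * F                        # emphasis
    e = rho * (gamma * lam * e + M * phi)                       # emphatic eligibility trace
    theta = theta + alpha * delta * e
    return theta, e, F

# Toy usage with random features and on-policy data (rho = 1 everywhere).
rng = np.random.default_rng(0)
d = 4
theta, e, F = np.zeros(d), np.zeros(d), 0.0
phi = rng.normal(size=d)
for _ in range(100):
    phi_next = rng.normal(size=d)
    theta, e, F = emphatic_td_step(theta, e, F, phi, phi_next,
                                   reward=rng.normal(), rho=1.0, rho_prev=1.0)
    phi = phi_next
```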
Communication-efficient sparse regression: a one-shot approach | stat.ML cs.LG | We devise a one-shot approach to distributed sparse regression in the
high-dimensional setting. The key idea is to average "debiased" or
"desparsified" lasso estimators. We show the approach converges at the same
rate as the lasso as long as the dataset is not split across too many machines.
We also extend the approach to generalized linear models.
| Jason D. Lee, Yuekai Sun, Qiang Liu, Jonathan E. Taylor | null | 1503.04337 | null | null |
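A highly simplified scikit-learn sketch of the one-shot idea: each machine fits a lasso on its local shard and the coefficient vectors are averaged. The debiasing (desparsification) step that the paper's theory relies on is deliberately omitted here.

```python
import numpy as np
from sklearn.linear_model import Lasso

def one_shot_average_lasso(X_shards, y_shards, alpha=0.1):
    """Fit a lasso independently on each machine's shard and average the
    resulting coefficient vectors (debiasing omitted)."""
    coefs = []
    for X, y in zip(X_shards, y_shards):
        model = Lasso(alpha=alpha).fit(X, y)
        coefs.append(model.coef_)
    return np.mean(coefs, axis=0)

# Synthetic sparse regression split across 4 "machines".
rng = np.random.default_rng(0)
beta = np.zeros(50)
beta[:5] = 2.0
X = rng.normal(size=(400, 50))
y = X @ beta + 0.1 * rng.normal(size=400)
shards = np.array_split(np.arange(400), 4)
beta_hat = one_shot_average_lasso([X[s] for s in shards], [y[s] for s in shards])
```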
Separable and non-separable data representation for pattern
discrimination | quant-ph cs.CV cs.LG | We provide a complete work-flow, based on the language of quantum information
theory, suitable for processing data for the purpose of pattern recognition.
The main advantage of the introduced scheme is that it can be easily
implemented and applied to process real-world data using modest computation
resources. At the same time it can be used to investigate the difference in the
pattern recognition resulting from the utilization of the tensor product
structure of the space of quantum states. We illustrate this difference by
providing a simple example based on the classification of 2D data.
| Jaros{\l}aw Adam Miszczak | null | 1503.04400 | null | null |
Learning Mixed Membership Community Models in Social Tagging Networks
through Tensor Methods | cs.LG cs.SI stat.ML | Community detection in graphs has been extensively studied both in theory and
in applications. However, detecting communities in hypergraphs is more
challenging. In this paper, we propose a tensor decomposition approach for
guaranteed learning of communities in a special class of hypergraphs modeling
social tagging systems or folksonomies. A folksonomy is a tripartite 3-uniform
hypergraph consisting of (user, tag, resource) hyperedges. We posit a
probabilistic mixed membership community model, and prove that the tensor
method consistently learns the communities under efficient sample complexity
and separation requirements.
| Anima Anandkumar and Hanie Sedghi | null | 1503.04567 | null | null |
Enhanced Image Classification With a Fast-Learning Shallow Convolutional
Neural Network | cs.NE cs.CV cs.LG | We present a neural network architecture and training method designed to
enable very rapid training and low implementation complexity. Due to its
training speed and very few tunable parameters, the method has strong potential
for applications requiring frequent retraining or online training. The approach
is characterized by (a) convolutional filters based on biologically inspired
visual processing filters, (b) randomly-valued classifier-stage input weights,
(c) use of least squares regression to train the classifier output weights in a
single batch, and (d) linear classifier-stage output units. We demonstrate the
efficacy of the method by applying it to image classification. Our results
match existing state-of-the-art results on the MNIST (0.37% error) and
NORB-small (2.2% error) image classification databases, but with very fast
training times compared to standard deep network approaches. The network's
performance on the Google Street View House Number (SVHN) (4% error) database
is also competitive with state-of-the art methods.
| Mark D. McDonnell and Tony Vladusich | null | 1503.04596 | null | null |
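Steps (b)-(d) of the classifier stage, random input weights followed by a single batch least-squares solve for linear output units, can be sketched as follows; the convolutional front end of step (a) is omitted, and the ReLU nonlinearity and ridge parameter are assumptions of this sketch.

```python
import numpy as np

def fit_random_readout(X, y, num_classes, hidden=512, reg=1e-2, seed=0):
    """Random hidden projection + regularized least squares for the output weights."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], hidden))          # random, never trained
    H = np.maximum(X @ W, 0.0)                         # hidden activations
    Y = np.eye(num_classes)[y]                         # one-hot targets
    # Single-batch ridge regression solve for the linear readout.
    A = H.T @ H + reg * np.eye(hidden)
    B = np.linalg.solve(A, H.T @ Y)
    return W, B

def predict(X, W, B):
    return np.argmax(np.maximum(X @ W, 0.0) @ B, axis=1)

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 30))
y = rng.integers(0, 3, size=200)
W, B = fit_random_readout(X, y, num_classes=3)
pred = predict(X, W, B)
```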
More General Queries and Less Generalization Error in Adaptive Data
Analysis | cs.LG cs.DS | Adaptivity is an important feature of data analysis---typically the choice of
questions asked about a dataset depends on previous interactions with the same
dataset. However, generalization error is typically bounded in a non-adaptive
model, where all questions are specified before the dataset is drawn. Recent
work by Dwork et al. (STOC '15) and Hardt and Ullman (FOCS '14) initiated the
formal study of this problem, and gave the first upper and lower bounds on the
achievable generalization error for adaptive data analysis.
Specifically, suppose there is an unknown distribution $\mathcal{P}$ and a
set of $n$ independent samples $x$ is drawn from $\mathcal{P}$. We seek an
algorithm that, given $x$ as input, "accurately" answers a sequence of
adaptively chosen "queries" about the unknown distribution $\mathcal{P}$. How
many samples $n$ must we draw from the distribution, as a function of the type
of queries, the number of queries, and the desired level of accuracy?
In this work we make two new contributions towards resolving this question:
*We give upper bounds on the number of samples $n$ that are needed to answer
statistical queries that improve over the bounds of Dwork et al.
*We prove the first upper bounds on the number of samples required to answer
more general families of queries. These include arbitrary low-sensitivity
queries and the important class of convex risk minimization queries.
As in Dwork et al., our algorithms are based on a connection between
differential privacy and generalization error, but we feel that our analysis is
simpler and more modular, which may be useful for studying these questions in
the future.
| Raef Bassily and Adam Smith and Thomas Steinke and Jonathan Ullman | null | 1503.04843 | null | null |
Long Short-Term Memory Over Tree Structures | cs.CL cs.LG cs.NE | The chain-structured long short-term memory (LSTM) has been shown to be effective
in a wide range of problems such as speech recognition and machine translation.
In this paper, we propose to extend it to tree structures, in which a memory
cell can reflect the history memories of multiple child cells or multiple
descendant cells in a recursive process. We call the model S-LSTM, which
provides a principled way of considering long-distance interaction over
hierarchies, e.g., language or image parse structures. We leverage the models
for semantic composition to understand the meaning of text, a fundamental
problem in natural language understanding, and show that it outperforms a
state-of-the-art recursive model by replacing its composition layers with the
S-LSTM memory blocks. We also show that utilizing the given structures is
helpful in achieving a performance better than that without considering the
structures.
| Xiaodan Zhu, Parinaz Sobhani, Hongyu Guo | null | 1503.04881 | null | null |
Energy Sharing for Multiple Sensor Nodes with Finite Buffers | cs.NI cs.LG | We consider the problem of finding optimal energy sharing policies that
maximize the network performance of a system comprising of multiple sensor
nodes and a single energy harvesting (EH) source. Sensor nodes periodically
sense the random field and generate data, which is stored in the corresponding
data queues. The EH source harnesses energy from ambient energy sources and the
generated energy is stored in an energy buffer. Sensor nodes receive energy for
data transmission from the EH source. The EH source has to efficiently share
the stored energy among the nodes in order to minimize the long-run average
delay in data transmission. We formulate the problem of energy sharing between
the nodes in the framework of average cost infinite-horizon Markov decision
processes (MDPs). We develop efficient energy sharing algorithms, namely
Q-learning algorithm with exploration mechanisms based on the $\epsilon$-greedy
method as well as upper confidence bound (UCB). We extend these algorithms by
incorporating state and action space aggregation to tackle state-action space
explosion in the MDP. We also develop a cross entropy based method that
incorporates policy parameterization in order to find near optimal energy
sharing policies. Through simulations, we show that our algorithms yield energy
sharing policies that outperform the heuristic greedy method.
| Sindhu Padakandla, Prabuchandran K.J and Shalabh Bhatnagar | 10.1109/TCOMM.2015.2415777 | 1503.04964 | null | null |
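The Q-learning component with epsilon-greedy exploration reduces to the generic tabular update below; the paper's actual state and action spaces (queue and energy-buffer levels), the UCB exploration variant, state aggregation, and the cross-entropy method are not shown, and a discounted cost is used here in place of the long-run average cost.

```python
import numpy as np

def q_learning(env_step, num_states, num_actions, episodes=500, steps=100,
               alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    """Generic tabular Q-learning with epsilon-greedy exploration.
    `env_step(s, a)` must return (next_state, cost); costs are minimized."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((num_states, num_actions))
    for _ in range(episodes):
        s = int(rng.integers(num_states))
        for _ in range(steps):
            a = int(rng.integers(num_actions)) if rng.random() < eps else int(np.argmin(Q[s]))
            s_next, cost = env_step(s, a)
            # Minimizing cost, so bootstrap with the minimum over next actions.
            Q[s, a] += alpha * (cost + gamma * Q[s_next].min() - Q[s, a])
            s = s_next
    return Q

# Toy environment: 5 buffer levels, 2 actions, random transitions and costs.
rng = np.random.default_rng(1)
toy_step = lambda s, a: (int(rng.integers(5)), float(abs(s - 2) + a))
Q = q_learning(toy_step, num_states=5, num_actions=2)
```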
On Extreme Pruning of Random Forest Ensembles for Real-time Predictive
Applications | cs.LG | Random Forest (RF) is an ensemble supervised machine learning technique that
was developed by Breiman over a decade ago. Compared with other ensemble
techniques, it has proved its accuracy and superiority. Many researchers,
however, believe that there is still room for enhancing and improving its
performance accuracy. This explains why, over the past decade, there have been
many extensions of RF where each extension employed a variety of techniques and
strategies to improve certain aspect(s) of RF. Since it has been proven
empirically that ensembles tend to yield better results when there is a
significant diversity among the constituent models, the objective of this paper
is twofold. First, it investigates how data clustering (a well known diversity
technique) can be applied to identify groups of similar decision trees in an RF
in order to eliminate redundant trees by selecting a representative from each
group (cluster). Second, these likely diverse representatives are then used to
produce an extension of RF termed CLUB-DRF that is much smaller in size than
RF, and yet performs at least as good as RF, and mostly exhibits higher
performance in terms of accuracy. The latter refers to a known technique called
ensemble pruning. Experimental results on 15 real datasets from the UCI
repository prove the superiority of our proposed extension over the traditional
RF. Most of our experiments achieved a pruning level of at least 95% while
retaining or outperforming the RF accuracy.
| Khaled Fawagreh, Mohamad Medhat Gaber, Eyad Elyan | null | 1503.04996 | null | null |
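A rough scikit-learn sketch of the cluster-then-select idea: represent each tree by its prediction vector on a held-out set, cluster those vectors, and keep one representative tree per cluster. The distance measure, cluster count, and representative rule are assumptions of this sketch rather than the exact CLUB-DRF procedure.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import KMeans

def club_style_pruning(X_train, y_train, X_val, n_trees=100, n_keep=10, seed=0):
    """Train an RF, cluster its trees by validation predictions, and keep the
    tree closest to each cluster centre as the pruned ensemble."""
    rf = RandomForestClassifier(n_estimators=n_trees, random_state=seed).fit(X_train, y_train)
    preds = np.array([t.predict(X_val) for t in rf.estimators_])   # (trees, samples)
    km = KMeans(n_clusters=n_keep, n_init=10, random_state=seed).fit(preds)
    kept = []
    for c in range(n_keep):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(preds[members] - km.cluster_centers_[c], axis=1)
        kept.append(rf.estimators_[members[np.argmin(dists)]])
    return kept   # majority-vote over `kept` gives the pruned ensemble's prediction

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
pruned_trees = club_style_pruning(X[:200], y[:200], X[200:])
```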
Ultra-Fast Shapelets for Time Series Classification | cs.LG | Time series shapelets are discriminative subsequences and their similarity to
a time series can be used for time series classification. Since the discovery
of time series shapelets is costly in terms of time, the applicability on long
or multivariate time series is difficult. In this work we propose Ultra-Fast
Shapelets that uses a number of random shapelets. It is shown that Ultra-Fast
Shapelets yield the same prediction quality as current state-of-the-art
shapelet-based time series classifiers that carefully select the shapelets,
while being up to three orders of magnitude faster. Since this method allows an
ultra-fast shapelet discovery, using shapelets for long multivariate time
series classification becomes feasible.
A method for using shapelets for multivariate time series is proposed and
Ultra-Fast Shapelets is proven to be successful in comparison to
state-of-the-art multivariate time series classifiers on 15 multivariate time
series datasets from various domains. Finally, time series derivatives that
have proven to be useful for other time series classifiers are investigated for
the shapelet-based classifiers. It is shown that they have a positive impact
and that they are easy to integrate with a simple preprocessing step, without
the need of adapting the shapelet discovery algorithm.
| Martin Wistuba, Josif Grabocka, Lars Schmidt-Thieme | null | 1503.05018 | null | null |
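A minimal sketch of the random-shapelet idea: sample random subsequences as shapelets, use the minimum Euclidean distance from each series to each shapelet as a feature, and train any standard classifier on the resulting matrix. Shapelet count, length, and the classifier are placeholders, and the multivariate extension is not shown.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def random_shapelets(series, n_shapelets=50, length=10, seed=0):
    """Sample random subsequences of fixed length from a (n_series, T) array."""
    rng = np.random.default_rng(seed)
    idx = rng.integers(len(series), size=n_shapelets)
    starts = rng.integers(series.shape[1] - length + 1, size=n_shapelets)
    return np.array([series[i, s:s + length] for i, s in zip(idx, starts)])

def shapelet_features(series, shapelets):
    """Feature (i, j) = minimum distance of series i to shapelet j over all offsets."""
    n = len(series)
    k, L = shapelets.shape
    feats = np.empty((n, k))
    for i in range(n):
        windows = np.lib.stride_tricks.sliding_window_view(series[i], L)  # (T-L+1, L)
        d = np.linalg.norm(windows[None, :, :] - shapelets[:, None, :], axis=2)
        feats[i] = d.min(axis=1)
    return feats

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 60))              # toy time series
y = rng.integers(0, 2, size=100)            # toy labels
S = random_shapelets(X)
clf = LogisticRegression(max_iter=1000).fit(shapelet_features(X, S), y)
```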
Importance weighting without importance weights: An efficient algorithm
for combinatorial semi-bandits | cs.LG stat.ML | We propose a sample-efficient alternative for importance weighting for
situations where one only has sample access to the probability distribution
that generates the observations. Our new method, called Geometric Resampling
(GR), is described and analyzed in the context of online combinatorial
optimization under semi-bandit feedback, where a learner sequentially selects
its actions from a combinatorial decision set so as to minimize its cumulative
loss. In particular, we show that the well-known Follow-the-Perturbed-Leader
(FPL) prediction method coupled with Geometric Resampling yields the first
computationally efficient reduction from offline to online optimization in this
setting. We provide a thorough theoretical analysis for the resulting
algorithm, showing that its performance is on par with previous, inefficient
solutions. Our main contribution is showing that, despite the relatively large
variance induced by the GR procedure, our performance guarantees hold with high
probability rather than only in expectation. As a side result, we also improve
the best known regret bounds for FPL in online combinatorial optimization with
full feedback, closing the perceived performance gap between FPL and
exponential weights in this setting.
| Gergely Neu and G\'abor Bart\'ok | null | 1503.05087 | null | null |
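The core estimator is simple to state: rather than computing the probability p of the chosen action, draw fresh samples from the same (possibly implicit) distribution until the chosen action reappears; the number of draws is geometric with mean 1/p and serves as the importance weight. The cap M in the sketch below is an assumption that trades a small bias for bounded computation.

```python
import numpy as np

def geometric_resampling_weight(sample_action, chosen, M=1000):
    """Estimate 1/p(chosen) by counting how many i.i.d. re-draws from the same
    distribution are needed to hit `chosen` again, capped at M draws."""
    for k in range(1, M + 1):
        if sample_action() == chosen:
            return k
    return M

# Toy check: the average weight approaches 1/p for a known distribution.
rng = np.random.default_rng(0)
p = np.array([0.6, 0.3, 0.1])
draw = lambda: rng.choice(3, p=p)
weights = [geometric_resampling_weight(draw, chosen=2) for _ in range(2000)]
print(np.mean(weights), 1 / p[2])   # roughly comparable (small bias from the cap M)
```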
ProtVec: A Continuous Distributed Representation of Biological Sequences | q-bio.QM cs.AI cs.LG q-bio.GN | We introduce a new representation and feature extraction method for
biological sequences. Named bio-vectors (BioVec) to refer to biological
sequences in general with protein-vectors (ProtVec) for proteins (amino-acid
sequences) and gene-vectors (GeneVec) for gene sequences, this representation
can be widely used in applications of deep learning in proteomics and genomics.
In the present paper, we focus on protein-vectors that can be utilized in a
wide array of bioinformatics investigations such as family classification,
protein visualization, structure prediction, disordered protein identification,
and protein-protein interaction prediction. In this method, we adopt artificial
neural network approaches and represent a protein sequence with a single dense
n-dimensional vector. To evaluate this method, we apply it in classification of
324,018 protein sequences obtained from Swiss-Prot belonging to 7,027 protein
families, where an average family classification accuracy of 93%+-0.06% is
obtained, outperforming existing family classification methods. In addition, we
use ProtVec representation to predict disordered proteins from structured
proteins. Two databases of disordered sequences are used: the DisProt database
as well as a database featuring the disordered regions of nucleoporins rich
with phenylalanine-glycine repeats (FG-Nups). Using support vector machine
classifiers, FG-Nup sequences are distinguished from structured protein
sequences found in Protein Data Bank (PDB) with a 99.8% accuracy, and
unstructured DisProt sequences are differentiated from structured DisProt
sequences with 100.0% accuracy. These results indicate that by only providing
sequence data for various proteins into this model, accurate information about
protein structure can be determined.
| Ehsaneddin Asgari and Mohammad R.K. Mofrad | 10.1371/journal.pone.0141287 | 1503.05140 | null | null |
An Outlier Detection-based Tree Selection Approach to Extreme Pruning of
Random Forests | cs.LG | Random Forest (RF) is an ensemble classification technique that was developed
by Breiman over a decade ago. Compared with other ensemble techniques, it has
proved its accuracy and superiority. Many researchers, however, believe that
there is still room for enhancing and improving its performance in terms of
predictive accuracy. This explains why, over the past decade, there have been
many extensions of RF where each extension employed a variety of techniques and
strategies to improve certain aspect(s) of RF. Since it has been proven
empirically that ensembles tend to yield better results when there is a
significant diversity among the constituent models, the objective of this paper
is twofold. First, it investigates how an unsupervised learning technique,
namely, Local Outlier Factor (LOF) can be used to identify diverse trees in the
RF. Second, trees with the highest LOF scores are then used to produce an
extension of RF termed LOFB-DRF that is much smaller in size than RF, and yet
performs at least as well as RF and, in most cases, achieves higher accuracy.
This size reduction is an instance of a known technique called ensemble
pruning. Experimental results on 10 real datasets demonstrate the superiority of
our proposed extension over the traditional RF. Unprecedented pruning levels of
up to 99% have been achieved while simultaneously boosting the predictive
accuracy of the ensemble. The notably high pruning level makes the technique a good
candidate for real-time applications.
| Khaled Fawagreh, Mohamad Medhat Gaber, Eyad Elyan | null | 1503.05187 | null | null |
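
A hedged sketch of the tree-selection idea behind this extension: every tree in a trained random forest is described by its prediction vector on a held-out split, the trees are scored with scikit-learn's LocalOutlierFactor, and only the highest-LOF (most diverse) trees are kept and aggregated by majority vote. The synthetic dataset, the number of retained trees, and the reuse of the selection split for evaluation are simplifications, not the authors' exact LOFB-DRF procedure.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import LocalOutlierFactor
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Characterise every tree by its class predictions on the validation split.
tree_preds = np.array([t.predict(X_val) for t in rf.estimators_])

# sklearn exposes negative_outlier_factor_, where smaller values mean "more
# outlying"; diverse trees = highest LOF = smallest negative_outlier_factor_.
lof = LocalOutlierFactor(n_neighbors=10).fit(tree_preds)
keep = np.argsort(lof.negative_outlier_factor_)[:20]   # keep the 20 most diverse trees

pruned_vote = (tree_preds[keep].mean(axis=0) > 0.5).astype(int)  # majority vote
print("full RF :", accuracy_score(y_val, rf.predict(X_val)))
print("pruned  :", accuracy_score(y_val, pruned_vote))
```
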
Analysis of PCA Algorithms in Distributed Environments | cs.DC cs.LG cs.NA | Classical machine learning algorithms often face scalability bottlenecks when
they are applied to large-scale data. Such algorithms were designed to work
with small data that is assumed to fit in the memory of one machine. In this
report, we analyze different methods for computing an important machine learning
algorithm, namely Principal Component Analysis (PCA), and we comment on its
limitations in supporting large datasets. The methods are analyzed and compared
across two important metrics: time complexity and communication complexity. We
consider the worst-case scenarios for both metrics, and we identify the
software libraries that implement each method. The analysis in this report
helps researchers and engineers in (i) understanding the main bottlenecks for
scalability in different PCA algorithms, (ii) choosing the most appropriate
method and software library for a given application and data set
characteristics, and (iii) designing new scalable PCA algorithms.
| Tarek Elgamal, Mohamed Hefeeda | null | 1503.05214 | null | null |
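
A small numpy sketch of the two textbook single-machine routes to PCA whose time and memory profiles such an analysis compares: an eigendecomposition of the d x d covariance matrix versus a thin SVD of the centred data matrix. The complexity comments are the usual worst-case counts for n >> d; all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 5000, 50, 5           # samples, features, principal components
X = rng.normal(size=(n, d))
Xc = X - X.mean(axis=0)         # centre the data

# Route 1: eigendecomposition of the covariance, roughly O(n d^2 + d^3) time.
cov = (Xc.T @ Xc) / (n - 1)
eigvals, eigvecs = np.linalg.eigh(cov)          # ascending eigenvalues
components_eig = eigvecs[:, ::-1][:, :k]        # top-k eigenvectors

# Route 2: thin SVD of the centred data, roughly O(n d^2) for n >> d,
# and numerically preferable since the covariance is never formed explicitly.
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
components_svd = Vt[:k].T

# Both routes span the same principal subspace (columns may differ in sign).
print(np.allclose(np.abs(components_eig.T @ components_svd),
                  np.eye(k), atol=1e-6))
```
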
Efficient Machine Learning for Big Data: A Review | cs.LG cs.AI | With the emerging technologies and all associated devices, it is predicted
that a massive amount of data will be created in the next few years; in fact, as
much as 90% of current data were created in the last couple of years, a trend
that will continue for the foreseeable future. Sustainable computing studies
the process by which computer engineer/scientist designs computers and
associated subsystems efficiently and effectively with minimal impact on the
environment. However, current intelligent machine-learning systems are
performance driven; the focus is on the predictive/classification accuracy,
based on known properties learned from the training samples. For instance, most
machine-learning-based nonparametric models are known to require high
computational cost in order to find the global optima. With the learning task
in a large dataset, the number of hidden nodes within the network will
therefore increase significantly, which eventually leads to an exponential rise
in computational complexity. This paper thus reviews the theoretical and
experimental data-modeling literature, in large-scale data-intensive fields,
relating to: (1) model efficiency, including computational requirements in
learning, and data-intensive areas structure and design, and introduces (2) new
algorithmic approaches with the least memory requirements and processing to
minimize computational cost, while maintaining/improving its
predictive/classification accuracy and stability.
| O. Y. Al-Jarrah, P. D. Yoo, S. Muhaidat, G. K. Karagiannidis, and K.
Taha | null | 1503.05296 | null | null |
Shared latent subspace modelling within Gaussian-Binary Restricted
Boltzmann Machines for NIST i-Vector Challenge 2014 | cs.LG cs.NE cs.SD stat.ML | This paper presents a novel approach to speaker subspace modelling based on
Gaussian-Binary Restricted Boltzmann Machines (GRBM). The proposed model is
based on the idea of shared factors as in the Probabilistic Linear Discriminant
Analysis (PLDA). The GRBM hidden layer is divided into speaker and channel factors,
wherein the speaker factor is shared over all vectors of the speaker. Then
Maximum Likelihood Parameter Estimation (MLE) for the proposed model is introduced.
Various new scoring techniques for speaker verification using GRBM are
proposed. The results for NIST i-vector Challenge 2014 dataset are presented.
| Danila Doroshin, Alexander Yamshinin, Nikolay Lubimov, Marina
Nastasenko, Mikhail Kotov, Maxim Tkachenko | null | 1503.05471 | null | null |
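
A minimal numpy sketch, under simplifying assumptions (unit-variance Gaussian visible units, a plain CD-1 update instead of the paper's MLE procedure, and random vectors standing in for real i-vectors), of a Gaussian-Binary RBM whose hidden layer is split into a speaker block and a channel block.

```python
import numpy as np

rng = np.random.default_rng(0)
D, Hs, Hc = 100, 40, 24                  # i-vector dim, speaker units, channel units
H = Hs + Hc
W = 0.01 * rng.normal(size=(H, D))       # hidden-to-visible weights
b, c = np.zeros(D), np.zeros(H)          # visible / hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def hidden_given_visible(v):
    """p(h = 1 | v) for a unit-variance Gaussian visible layer."""
    return sigmoid(c + v @ W.T)

def visible_given_hidden(h):
    """Mean of the Gaussian p(v | h)."""
    return b + h @ W

# One CD-1 step on a mini-batch (random data stands in for real i-vectors).
v0 = rng.normal(size=(8, D))
p_h0 = hidden_given_visible(v0)
h0 = (rng.uniform(size=p_h0.shape) < p_h0).astype(float)
v1 = visible_given_hidden(h0)            # mean-field reconstruction
p_h1 = hidden_given_visible(v1)

lr = 1e-3
W += lr * (p_h0.T @ v0 - p_h1.T @ v1) / len(v0)

# The first Hs hidden units form the speaker factor; tying (sharing) this factor
# across all i-vectors of one speaker is the modelling step specific to the paper.
speaker_factor = p_h0[:, :Hs].mean(axis=0)
print(speaker_factor.shape)              # (Hs,)
```
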
Interpretable Aircraft Engine Diagnostic via Expert Indicator
Aggregation | stat.ML cs.LG math.ST stat.AP stat.TH | Detecting early signs of failures (anomalies) in complex systems is one of
the main goals of preventive maintenance. In particular, it makes it possible to
avoid actual failures by (re)scheduling maintenance operations in a way that
optimizes maintenance costs. Aircraft engine health monitoring is one
representative example of a field in which anomaly detection is crucial.
Manufacturers collect large amounts of engine-related data during flights, which
are used, among other applications, to detect anomalies. This article
introduces and studies a generic methodology for building automatic detection of
early signs of anomalies in a way that builds upon human expertise and
remains understandable by the human operators who make the final maintenance
decision. The main idea of the method is to generate a very large number of
binary indicators based on parametric anomaly scores designed by experts,
complemented by simple aggregations of those scores. A feature selection method
is used to keep only the most discriminant indicators which are used as inputs
of a Naive Bayes classifier. This gives an interpretable classifier based on
interpretable anomaly detectors whose parameters have been optimized indirectly
by the selection process. The proposed methodology is evaluated on simulated
data designed to reproduce some of the anomaly types observed in real world
engines.
| Tsirizo Rabenoro (SAMM), J\'er\^ome Lacaille, Marie Cottrell (SAMM),
Fabrice Rossi (SAMM) | null | 1503.05526 | null | null |
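
A hedged sketch of the pipeline described above: anomaly scores are expanded into many binary indicators by thresholding, a feature-selection step keeps the most discriminant indicators, and a Naive Bayes classifier is trained on them. The synthetic scores, the threshold grid, and the mutual-information selector are stand-ins for the expert-designed parametric scores and the selection method actually used in the paper.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.naive_bayes import BernoulliNB
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n, n_scores = 2000, 10
scores = rng.normal(size=(n, n_scores))              # stand-in anomaly scores
y = (scores[:, :3].mean(axis=1) > 0.8).astype(int)   # synthetic "anomaly" label

# Expand each score into several binary indicators with a grid of thresholds.
thresholds = np.quantile(scores, [0.7, 0.8, 0.9, 0.95], axis=0)   # (4, n_scores)
indicators = np.hstack([(scores > thresholds[t]).astype(int)
                        for t in range(thresholds.shape[0])])     # (n, 4*n_scores)

X_tr, X_te, y_tr, y_te = train_test_split(indicators, y, test_size=0.3,
                                          random_state=0, stratify=y)

# Keep only the most discriminant indicators, then fit a Naive Bayes classifier.
clf = make_pipeline(SelectKBest(mutual_info_classif, k=10), BernoulliNB())
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```
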
GSNs : Generative Stochastic Networks | cs.LG | We introduce a novel training principle for probabilistic models that is an
alternative to maximum likelihood. The proposed Generative Stochastic Networks
(GSN) framework is based on learning the transition operator of a Markov chain
whose stationary distribution estimates the data distribution. Because the
transition distribution is a conditional distribution generally involving a
small move, it has fewer dominant modes, being unimodal in the limit of small
moves. Thus, it is easier to learn, more like learning to perform supervised
function approximation, with gradients that can be obtained by
back-propagation. The theorems provided here generalize recent work on the
probabilistic interpretation of denoising auto-encoders and provide an
interesting justification for dependency networks and generalized
pseudolikelihood (along with defining an appropriate joint distribution and
sampling mechanism, even when the conditionals are not consistent). We study
how GSNs can be used with missing inputs and can be used to sample subsets of
variables given the rest. Successful experiments are conducted, validating
these theoretical results, on two image datasets and with a particular
architecture that mimics the Deep Boltzmann Machine Gibbs sampler but allows
training to proceed with backprop, without the need for layerwise pretraining.
| Guillaume Alain, Yoshua Bengio, Li Yao, Jason Yosinski, Eric
Thibodeau-Laufer, Saizheng Zhang, Pascal Vincent | null | 1503.05571 | null | null |
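
A toy sketch of the GSN principle using the simplest possible instance: a small denoising autoencoder trained by backprop, whose corrupt-then-denoise step is then iterated as the transition operator of a Markov chain whose samples should resemble the data. Data, architecture, and training details are illustrative, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a mixture of two Gaussian clusters in 2-D.
X = np.vstack([rng.normal(loc=(-2, 0), scale=0.3, size=(500, 2)),
               rng.normal(loc=(+2, 0), scale=0.3, size=(500, 2))])

nh, sigma, lr = 32, 0.5, 0.05
W1 = 0.1 * rng.normal(size=(2, nh)); b1 = np.zeros(nh)
W2 = 0.1 * rng.normal(size=(nh, 2)); b2 = np.zeros(2)

def denoise(Xt):
    """Reconstruction (estimated mean of p(x | corrupted x)) by the autoencoder."""
    H = np.tanh(Xt @ W1 + b1)
    return H @ W2 + b2, H

# Train by back-propagating the reconstruction error of corrupted inputs.
for step in range(3000):
    Xt = X + sigma * rng.normal(size=X.shape)          # corruption C(x~ | x)
    Xhat, H = denoise(Xt)
    dXhat = 2.0 * (Xhat - X) / len(X)                  # d loss / d reconstruction
    dW2, db2 = H.T @ dXhat, dXhat.sum(axis=0)
    dZ = (dXhat @ W2.T) * (1.0 - H ** 2)               # tanh derivative
    dW1, db1 = Xt.T @ dZ, dZ.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

# Sampling: iterate the learned transition operator (corrupt, then denoise).
x = rng.normal(size=(1, 2))
samples = []
for _ in range(2000):
    x_tilde = x + sigma * rng.normal(size=x.shape)
    x, _ = denoise(x_tilde)
    samples.append(x.copy())
samples = np.vstack(samples)

# Fraction of chain samples near each cluster (mixing between well-separated
# modes can be slow for such a simple deterministic reconstruction).
print(np.mean(samples[:, 0] < 0), np.mean(samples[:, 0] > 0))
```
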
Learning to Search for Dependencies | cs.CL cs.LG | We demonstrate that a dependency parser can be built using a credit
assignment compiler which removes the burden of worrying about low-level
machine learning details from the parser implementation. The result is a simple
parser which robustly applies to many languages that provides similar
statistical and computational performance with best-to-date transition-based
parsing approaches, while avoiding various downsides including randomization,
extra feature requirements, and custom learning algorithms.
| Kai-Wei Chang, He He, Hal Daum\'e III, John Langford | null | 1503.05615 | null | null |
Optimizing Neural Networks with Kronecker-factored Approximate Curvature | cs.LG cs.NE stat.ML | We propose an efficient method for approximating natural gradient descent in
neural networks which we call Kronecker-Factored Approximate Curvature (K-FAC).
K-FAC is based on an efficiently invertible approximation of a neural network's
Fisher information matrix which is neither diagonal nor low-rank, and in some
cases is completely non-sparse. It is derived by approximating various large
blocks of the Fisher (corresponding to entire layers) as being the Kronecker
product of two much smaller matrices. While only several times more expensive
to compute than the plain stochastic gradient, the updates produced by K-FAC
make much more progress optimizing the objective, which results in an algorithm
that can be much faster than stochastic gradient descent with momentum in
practice. And unlike some previously proposed approximate
natural-gradient/Newton methods which use high-quality non-diagonal curvature
matrices (such as Hessian-free optimization), K-FAC works very well in highly
stochastic optimization regimes. This is because the cost of storing and
inverting K-FAC's approximation to the curvature matrix does not depend on the
amount of data used to estimate it, which is a feature typically associated
only with diagonal or low-rank approximations to the curvature matrix.
| James Martens, Roger Grosse | null | 1503.05671 | null | null |
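
A short numpy sketch of the Kronecker-factored approximation at the heart of K-FAC: a layer's Fisher block is approximated by the Kronecker product of the input second-moment matrix and the output-gradient second-moment matrix, so the natural-gradient solve reduces to two small solves instead of one huge one. The random stand-in statistics, the damping value, and the column-stacking vectorisation convention are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, batch = 50, 30, 256

acts = rng.normal(size=(batch, n_in))      # layer inputs a
grads = rng.normal(size=(batch, n_out))    # back-propagated output gradients g
V = rng.normal(size=(n_out, n_in))         # gradient of the loss w.r.t. the weights

eps = 1e-3                                          # damping, as used in practical K-FAC
A = acts.T @ acts / batch + eps * np.eye(n_in)      # E[a a^T]
G = grads.T @ grads / batch + eps * np.eye(n_out)   # E[g g^T]

vec = lambda M: M.reshape(-1, order="F")   # column-stacking vectorisation

# Naive route: build the (n_in*n_out) x (n_in*n_out) block and solve it directly.
naive = np.linalg.solve(np.kron(A, G), vec(V))

# K-FAC route: (A kron G)^{-1} vec(V) = vec(G^{-1} V A^{-1}) -- only small solves.
kfac = vec(np.linalg.solve(G, V) @ np.linalg.inv(A))

print(np.allclose(naive, kfac, atol=1e-8))  # the two updates coincide
```

Because only the small factors A and G are estimated, stored, and inverted, the cost of the update does not grow with the amount of data used to estimate them, which is the property highlighted in the abstract.
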
A Neural Transfer Function for a Smooth and Differentiable Transition
Between Additive and Multiplicative Interactions | stat.ML cs.LG cs.NE | Existing approaches to combine both additive and multiplicative neural units
either use a fixed assignment of operations or require discrete optimization to
determine what function a neuron should perform. This leads either to an
inefficient distribution of computational resources or an extensive increase in
the computational complexity of the training procedure.
We present a novel, parameterizable transfer function based on the
mathematical concept of non-integer functional iteration that allows the
operation each neuron performs to be smoothly and, most importantly,
differentiably adjusted between addition and multiplication. This allows the
decision between addition and multiplication to be integrated into the standard
backpropagation training procedure.
| Sebastian Urban, Patrick van der Smagt | null | 1503.05724 | null | null |
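
A small illustration of the exp/log conjugation behind such a transfer function: for integer iteration orders, applying n-fold log to the inputs, adding, and applying n-fold exp yields addition at n = 0 and multiplication at n = 1. The paper's actual contribution, smooth non-integer orders obtained via fractional functional iteration, requires an Abel-function construction that is not reproduced in this sketch.

```python
import math

def iterate(f, n):
    """Return the n-fold composition of f (integer n >= 0)."""
    def g(x):
        for _ in range(n):
            x = f(x)
        return x
    return g

def interaction(x, y, n):
    """f_n(x, y) = exp^(n)( log^(n)(x) + log^(n)(y) ) for integer n."""
    log_n, exp_n = iterate(math.log, n), iterate(math.exp, n)
    return exp_n(log_n(x) + log_n(y))

x, y = 3.0, 4.0
print(interaction(x, y, 0))   # 7.0   -> pure addition
print(interaction(x, y, 1))   # ~12.0 -> pure multiplication (requires x, y > 0)
```
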
Implementation of a Practical Distributed Calculation System with
Browsers and JavaScript, and Application to Distributed Deep Learning | cs.DC cs.LG cs.MS cs.NE stat.ML | Deep learning can achieve outstanding results in various fields. However, it
demands so much computational power that graphics processing units
(GPUs) and/or numerous computers are often required for practical
applications. We have developed a new distributed calculation framework called
"Sashimi" that allows any computer to be used as a distribution node only by
accessing a website. We have also developed a new JavaScript neural network
framework called "Sukiyaki" that uses general purpose GPUs with web browsers.
Sukiyaki performs 30 times faster than a conventional JavaScript library for
deep convolutional neural networks (deep CNNs) learning. The combination of
Sashimi and Sukiyaki, as well as new distribution algorithms, demonstrates the
distributed deep learning of deep CNNs only with web browsers on various
devices. The libraries that comprise the proposed methods are available under
MIT license at http://mil-tokyo.github.io/.
| Ken Miura and Tatsuya Harada | null | 1503.05743 | null | null |
Learning Hypergraph-regularized Attribute Predictors | cs.CV cs.LG | We present a novel attribute learning framework named Hypergraph-based
Attribute Predictor (HAP). In HAP, a hypergraph is leveraged to depict the
attribute relations in the data. Then the attribute prediction problem is
cast as a regularized hypergraph cut problem in which HAP jointly learns a
collection of attribute projections from the feature space to a hypergraph
embedding space aligned with the attribute space. The learned projections
directly act as attribute classifiers (linear and kernelized). This formulation
leads to a very efficient approach. By considering our model as a multi-graph
cut task, our framework can flexibly incorporate other available information,
in particular class labels. We apply our approach to attribute prediction,
Zero-shot and $N$-shot learning tasks. The results on AWA, USAA and CUB
databases demonstrate the value of our methods in comparison with the
state-of-the-art approaches.
| Sheng Huang and Mohamed Elhoseiny and Ahmed Elgammal and Dan Yang | null | 1503.05782 | null | null |
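
A hedged numpy sketch in the spirit of HAP, assuming the standard normalized hypergraph Laplacian (Zhou et al.'s construction) is used to regularize a least-squares projection from features to attribute scores. The actual HAP objective is a regularized hypergraph cut with a learned embedding space, which this simplified closed-form version does not reproduce; all sizes, hyperedges, and weights are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, a, e = 200, 30, 5, 12          # samples, features, attributes, hyperedges

X = rng.normal(size=(n, d))
Y = (X[:, :a] + 0.1 * rng.normal(size=(n, a)) > 0).astype(float)  # binary attributes

# Random incidence matrix: H[i, j] = 1 if sample i belongs to hyperedge j.
H = (rng.uniform(size=(n, e)) < 0.15).astype(float)
w = np.ones(e)                                        # hyperedge weights

Dv = np.maximum(H @ w, 1e-12)                         # vertex degrees (floored)
De = np.maximum(H.sum(axis=0), 1e-12)                 # hyperedge degrees (floored)
Dv_isqrt = np.diag(1.0 / np.sqrt(Dv))
L = np.eye(n) - Dv_isqrt @ H @ np.diag(w / De) @ H.T @ Dv_isqrt  # normalized Laplacian

# Closed-form hypergraph-regularized ridge regression for the projections P.
lam, mu = 1.0, 1e-2
P = np.linalg.solve(X.T @ X + lam * X.T @ L @ X + mu * np.eye(d), X.T @ Y)

pred = (X @ P > 0.5).astype(float)
print("training attribute accuracy:", (pred == Y).mean())
```
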