title | categories | abstract | authors | doi | id | year | venue |
---|---|---|---|---|---|---|---|
A Survey: Time Travel in Deep Learning Space: An Introduction to Deep
Learning Models and How Deep Learning Models Evolved from the Initial Ideas | cs.LG cs.NE | This report will show how deep learning has evolved. It will trace
back as far as the initial belief in connectionist modelling of the brain, and come
back to look at its early-stage realization: neural networks. With this
neural-network background, we will gradually introduce how the convolutional
neural network, as a representative of deep discriminative models, developed
from neural networks, together with many practical techniques that can help in
the optimization of neural networks. On the other hand, we will also trace the
evolution of deep generative models, to see how researchers balanced
representation power against computational complexity to reach the Restricted
Boltzmann Machine and eventually Deep Belief Nets. Further, we will also
look into the history of modelling time-series data with neural
networks. We start with Time Delay Neural Networks and move on to the
currently famous Recurrent Neural Network and its extension, Long
Short-Term Memory. We will also briefly look into how to construct deep
recurrent neural networks. Finally, we will conclude this report with some
interesting open-ended questions about deep neural networks.
| Haohan Wang and Bhiksha Raj | null | 1510.04781 | null | null |
Quantification in-the-wild: data-sets and baselines | cs.LG | Quantification is the task of estimating the class-distribution of a
data-set. While typically considered as a parameter estimation problem with
strict assumptions on the data-set shift, we consider quantification
in-the-wild, on two large scale data-sets from marine ecology: a survey of
Caribbean coral reefs, and a plankton time series from Martha's Vineyard
Coastal Observatory. We investigate several quantification methods from the
literature and indicate opportunities for future work. In particular, we show
that a deep neural network can be fine-tuned on a very limited amount of data
(25 - 100 samples) to outperform alternative methods.
| Oscar Beijbom and Judy Hoffman and Evan Yao and Trevor Darrell and
Alberto Rodriguez-Ramirez and Manuel Gonzalez-Rivero and Ove Hoegh-Guldberg | null | 1510.04811 | null | null |
Scalable MCMC for Mixed Membership Stochastic Blockmodels | cs.LG stat.ML | We propose a stochastic gradient Markov chain Monte Carlo (SG-MCMC) algorithm
for scalable inference in mixed-membership stochastic blockmodels (MMSB). Our
algorithm is based on the stochastic gradient Riemannian Langevin sampler and
achieves both faster speed and higher accuracy at every iteration than the
current state-of-the-art algorithm based on stochastic variational inference.
In addition we develop an approximation that can handle models that entertain a
very large number of communities. The experimental results show that SG-MCMC
strictly dominates competing algorithms in all cases.
| Wenzhe Li, Sungjin Ahn, Max Welling | null | 1510.04815 | null | null |
SGD with Variance Reduction beyond Empirical Risk Minimization | stat.ML cs.LG | We introduce a doubly stochastic proximal gradient algorithm for optimizing a
finite average of smooth convex functions, whose gradients depend on
numerically expensive expectations. Our main motivation is the acceleration of
the optimization of the regularized Cox partial-likelihood (the core model used
in survival analysis), but our algorithm can be used in different settings as
well. The proposed algorithm is doubly stochastic in the sense that gradient
steps are done using stochastic gradient descent (SGD) with variance reduction,
where the inner expectations are approximated by a Monte-Carlo Markov-Chain
(MCMC) algorithm. We derive conditions on the MCMC number of iterations
guaranteeing convergence, and obtain a linear rate of convergence under strong
convexity and a sublinear rate without this assumption. We illustrate the fact
that our algorithm improves the state-of-the-art solver for regularized Cox
partial-likelihood on several datasets from survival analysis.
| Massil Achab (CMAP), Agathe Guilloux (LSTA), St\'ephane Ga\"iffas
(CMAP) and Emmanuel Bacry (CMAP) | null | 1510.04822 | null | null |
Robust Partially-Compressed Least-Squares | stat.ML cs.LG | Randomized matrix compression techniques, such as the Johnson-Lindenstrauss
transform, have emerged as an effective and practical way for solving
large-scale problems efficiently. With a focus on computational efficiency,
however, solution quality and accuracy are often forsaken in the trade-off. In
this paper, we investigate compressed least-squares problems and propose new
models and algorithms that address the issue of error and noise introduced by
compression. While maintaining computational efficiency, our models provide
robust solutions that are more accurate--relative to solutions of uncompressed
least-squares--than those of classical compressed variants. We introduce tools
from robust optimization together with a form of partial compression to improve
the error-time trade-offs of compressed least-squares solvers. We develop an
efficient solution algorithm for our Robust Partially-Compressed (RPC) model
based on a reduction to a one-dimensional search. We also derive the first
approximation error bounds for Partially-Compressed least-squares solutions.
Empirical results comparing numerous alternatives suggest that robust and
partially compressed solutions are effectively insulated against aggressive
randomized transforms.
| Stephen Becker, Ban Kawas, Marek Petrik, Karthikeyan N. Ramamurthy | null | 1510.04905 | null | null |
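As background for the classical fully-compressed baseline that the record above improves on, here is a minimal numerical sketch: sketch both sides of an overdetermined least-squares system with a Gaussian Johnson-Lindenstrauss matrix and solve the smaller problem. The problem sizes and sketching dimension are illustrative choices, not taken from the paper.

```python
import numpy as np

def compressed_least_squares(A, b, m, seed=0):
    """Fully-compressed least squares: sketch A and b with a Gaussian
    Johnson-Lindenstrauss matrix S and solve the smaller m-row problem."""
    rng = np.random.default_rng(seed)
    S = rng.normal(size=(m, A.shape[0])) / np.sqrt(m)
    x, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
    return x

# Toy comparison against the uncompressed solution (sizes are illustrative).
rng = np.random.default_rng(1)
A = rng.normal(size=(10000, 50))
b = A @ rng.normal(size=50) + 0.01 * rng.normal(size=10000)
x_full, *_ = np.linalg.lstsq(A, b, rcond=None)
x_comp = compressed_least_squares(A, b, m=500)
print(np.linalg.norm(x_comp - x_full))
```

The robust partially-compressed model of the record above compresses only part of the objective and adds a robustness term; the sketch shows only the classical variant it is compared against.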
Bad Universal Priors and Notions of Optimality | cs.AI cs.LG | A big open question of algorithmic information theory is the choice of the
universal Turing machine (UTM). For Kolmogorov complexity and Solomonoff
induction we have invariance theorems: the choice of the UTM changes bounds
only by a constant. For the universally intelligent agent AIXI (Hutter, 2005)
no invariance theorem is known. Our results are entirely negative: we discuss
cases in which unlucky or adversarial choices of the UTM cause AIXI to
misbehave drastically. We show that Legg-Hutter intelligence and thus balanced
Pareto optimality is entirely subjective, and that every policy is Pareto
optimal in the class of all computable environments. This undermines all
existing optimality properties for AIXI. While it may still serve as a gold
standard for AI, our results imply that AIXI is a relative theory, dependent on
the choice of the UTM.
| Jan Leike and Marcus Hutter | null | 1510.04931 | null | null |
Holographic Embeddings of Knowledge Graphs | cs.AI cs.LG stat.ML | Learning embeddings of entities and relations is an efficient and versatile
method to perform machine learning on relational data such as knowledge graphs.
In this work, we propose holographic embeddings (HolE) to learn compositional
vector space representations of entire knowledge graphs. The proposed method is
related to holographic models of associative memory in that it employs circular
correlation to create compositional representations. By using correlation as
the compositional operator HolE can capture rich interactions but
simultaneously remains efficient to compute, easy to train, and scalable to
very large datasets. In extensive experiments we show that holographic
embeddings are able to outperform state-of-the-art methods for link prediction
in knowledge graphs and relational learning benchmark datasets.
| Maximilian Nickel, Lorenzo Rosasco, Tomaso Poggio | null | 1510.04935 | null | null |
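The circular-correlation composition named in the HolE record above has a simple concrete form; the sketch below computes it with the FFT and scores a triple with a plain sigmoid link. The 8-dimensional random embeddings are placeholders, not learned parameters.

```python
import numpy as np

def circular_correlation(a, b):
    """Circular correlation computed via the FFT: out[k] = sum_i a[i] * b[(i + k) % d]."""
    return np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)).real

def hole_score(e_s, e_o, r):
    """HolE-style plausibility of a triple (s, r, o): sigmoid of the inner product
    between the relation embedding and the circular correlation of the entities."""
    return 1.0 / (1.0 + np.exp(-r @ circular_correlation(e_s, e_o)))

# Toy usage with random embeddings; in the actual model these are learned from a knowledge graph.
rng = np.random.default_rng(0)
e_s, e_o, r = rng.normal(size=(3, 8))
print(hole_score(e_s, e_o, r))
```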
Optimizing and Contrasting Recurrent Neural Network Architectures | stat.ML cs.LG cs.NE | Recurrent Neural Networks (RNNs) have long been recognized for their
potential to model complex time series. However, it remains to be determined
what optimization techniques and recurrent architectures can be used to best
realize this potential. The experiments presented take a deep look into Hessian
free optimization, a powerful second order optimization method that has shown
promising results, but still does not enjoy widespread use. This algorithm was
used to train a number of RNN architectures including standard RNNs, long
short-term memory, multiplicative RNNs, and stacked RNNs on the task of
character prediction. The insights from these experiments led to the creation
of a new multiplicative LSTM hybrid architecture that outperformed both LSTM
and multiplicative RNNs. When tested on a larger scale, multiplicative LSTM
achieved character level modelling results competitive with the state of the
art for RNNs using very different methodology.
| Ben Krause | null | 1510.04953 | null | null |
Improving the Speed of Response of Learning Algorithms Using Multiple
Models | cs.LG | This is the first of a series of papers that the authors propose to write on
the subject of improving the speed of response of learning systems using
multiple models. During the past two decades, the first author has worked on
numerous methods for improving the stability, robustness, and performance of
adaptive systems using multiple models and the other authors have collaborated
with him on some of them. Independently, they have also worked on several
learning methods, and have considerable experience with their advantages and
limitations. In particular, they are well aware that it is common knowledge
that machine learning is in general very slow. Numerous attempts have been made
by researchers to improve the speed of convergence of algorithms in different
contexts. In view of the success of multiple model based methods in improving
the speed of convergence in adaptive systems, the authors believe that the same
approach will also prove fruitful in the domain of learning. In this paper, a
first attempt is made to use multiple models for improving the speed of
response of the simplest learning schemes that have been studied, i.e., Learning
Automata.
| Kumpati S. Narendra, Snehasis Mukhopadyhay, and Yu Wang | null | 1510.05034 | null | null |
A cost function for similarity-based hierarchical clustering | cs.DS cs.LG stat.ML | The development of algorithms for hierarchical clustering has been hampered
by a shortage of precise objective functions. To help address this situation,
we introduce a simple cost function on hierarchies over a set of points, given
pairwise similarities between those points. We show that this criterion behaves
sensibly in canonical instances and that it admits a top-down construction
procedure with a provably good approximation ratio.
| Sanjoy Dasgupta | null | 1510.05043 | null | null |
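The abstract above does not spell the cost function out; the form usually associated with this line of work charges every pair of points by its similarity times the size of the smallest cluster in the hierarchy containing both. A toy sketch under that assumption (the nested-tuple tree encoding is purely illustrative):

```python
import numpy as np
from itertools import combinations

def hierarchy_cost(tree, sim):
    """Cost of a hierarchy over points 0..n-1: each pair (i, j) contributes
    sim[i, j] times the number of leaves of the smallest cluster (lowest common
    ancestor) containing both. `tree` is a nested tuple, e.g. ((0, 1), (2, 3))."""
    def leaves(t):
        return [t] if isinstance(t, int) else [x for c in t for x in leaves(c)]
    if isinstance(tree, int):
        return 0.0
    child_leaves = [leaves(c) for c in tree]
    n = sum(len(c) for c in child_leaves)
    cost = 0.0
    # Pairs separated at this split have the current cluster as their smallest common cluster.
    for a, b in combinations(range(len(tree)), 2):
        for i in child_leaves[a]:
            for j in child_leaves[b]:
                cost += sim[i, j] * n
    return cost + sum(hierarchy_cost(c, sim) for c in tree)

sim = np.array([[0.0, 0.9, 0.1, 0.1],
                [0.9, 0.0, 0.1, 0.1],
                [0.1, 0.1, 0.0, 0.8],
                [0.1, 0.1, 0.8, 0.0]])
print(hierarchy_cost(((0, 1), (2, 3)), sim))  # similar pairs merged early -> lower cost
print(hierarchy_cost(((0, 2), (1, 3)), sim))  # similar pairs split at the root -> higher cost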
How Important is Weight Symmetry in Backpropagation? | cs.LG | Gradient backpropagation (BP) requires symmetric feedforward and feedback
connections -- the same weights must be used for forward and backward passes.
This "weight transport problem" (Grossberg 1987) is thought to be one of the
main reasons to doubt BP's biological plausibility. Using 15 different
classification datasets, we systematically investigate to what extent BP really
depends on weight symmetry. In a study that turned out to be surprisingly
similar in spirit to Lillicrap et al.'s demonstration (Lillicrap et al. 2014)
but orthogonal in its results, our experiments indicate that: (1) the
magnitudes of feedback weights do not matter to performance; (2) the signs of
feedback weights do matter -- the more concordant the signs between feedforward
connections and their corresponding feedback connections, the better; (3) with
feedback weights having random magnitudes and 100% concordant signs, we were
able to achieve the same or even better performance than SGD; and (4) some
normalizations/stabilizations are indispensable for such asymmetric BP to work,
namely Batch Normalization (BN) (Ioffe and Szegedy 2015) and/or a "Batch
Manhattan" (BM) update rule.
| Qianli Liao, Joel Z. Leibo, Tomaso Poggio | null | 1510.05067 | null | null |
Clustering Noisy Signals with Structured Sparsity Using Time-Frequency
Representation | cs.LG stat.ML | We propose a simple and efficient time-series clustering framework
particularly suited for low Signal-to-Noise Ratio (SNR), by simultaneous
smoothing and dimensionality reduction aimed at preserving clustering
information. We extend the sparse K-means algorithm by incorporating structured
sparsity, and use it to exploit the multi-scale property of wavelets and group
structure in multivariate signals. Finally, we extract features invariant to
translation and scaling with the scattering transform, which corresponds to a
convolutional network with filters given by a wavelet operator, and use the
network's structure in sparse clustering. By promoting sparsity, this transform
can yield a low-dimensional representation of signals that gives improved
clustering results on several real datasets.
| Tom Hope, Avishai Wagner and Or Zuk | null | 1510.05214 | null | null |
Large Enforced Sparse Non-Negative Matrix Factorization | cs.LG cs.NA cs.SI | Non-negative matrix factorization (NMF) is a common method for generating
topic models from text data. NMF is widely accepted for producing good results
despite its relative simplicity of implementation and ease of computation. One
challenge with applying NMF to large datasets is that intermediate matrix
products often become dense, stressing the memory and compute elements of a
system. In this article, we investigate a simple but powerful modification of a
common NMF algorithm that enforces the generation of sparse intermediate and
output matrices. This method enables the application of NMF to large datasets
through improved memory and compute performance. Further, we demonstrate
empirically that this method of enforcing sparsity in the NMF either preserves
or improves both the accuracy of the resulting topic model and the convergence
rate of the underlying algorithm.
| Brendan Gavin and Vijay Gadepally and Jeremy Kepner | 10.1109/IPDPSW.2016.58 | 1510.05237 | null | null |
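The abstract above does not describe the exact modification, so the sketch below is only a generic point of reference: standard multiplicative-update NMF with a hypothetical hard-threshold step that zeroes tiny factor entries to keep them sparse. The thresholding rule is an assumption, not the paper's method.

```python
import numpy as np

def sparse_nmf(V, k, iters=200, thresh=1e-3, seed=0):
    """Multiplicative-update NMF, V ~= W @ H with W, H >= 0, plus a naive
    hard-threshold step that zeroes tiny factor entries to keep them sparse.
    Illustrative baseline only; the paper's specific sparsification may differ."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W, H = rng.random((n, k)), rng.random((k, m))
    eps = 1e-12
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # standard Lee-Seung update for H
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # standard Lee-Seung update for W
        H[H < thresh] = 0.0
        W[W < thresh] = 0.0
    return W, H

V = np.abs(np.random.default_rng(1).normal(size=(100, 80)))
W, H = sparse_nmf(V, k=5)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V), float((H == 0).mean()))
```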
Latent Space Model for Multi-Modal Social Data | cs.SI cs.LG physics.data-an physics.soc-ph | With the emergence of social networking services, researchers enjoy the
increasing availability of large-scale heterogeneous datasets capturing online
user interactions and behaviors. Traditional analysis of techno-social systems
data has focused mainly on describing either the dynamics of social
interactions, or the attributes and behaviors of the users. However,
overwhelming empirical evidence suggests that the two dimensions affect one
another, and therefore they should be jointly modeled and analyzed in a
multi-modal framework. The benefits of such an approach include the ability to
build better predictive models, leveraging social network information as well
as user behavioral signals. To this purpose, here we propose the Constrained
Latent Space Model (CLSM), a generalized framework that combines Mixed
Membership Stochastic Blockmodels (MMSB) and Latent Dirichlet Allocation (LDA)
incorporating a constraint that forces the latent space to concurrently
describe the multiple data modalities. We derive an efficient inference
algorithm based on Variational Expectation Maximization that has a
computational cost linear in the size of the network, thus making it feasible
to analyze massive social datasets. We validate the proposed framework on two
problems: prediction of social interactions from user attributes and behaviors,
and behavior prediction exploiting network information. We perform experiments
with a variety of multi-modal social systems, spanning location-based social
networks (Gowalla), social media services (Instagram, Orkut), e-commerce and
review sites (Amazon, Ciao), and finally citation networks (Cora). The results
indicate significant improvement in prediction accuracy over state of the art
methods, and demonstrate the flexibility of the proposed approach for
addressing a variety of different learning problems commonly occurring with
multi-modal social data.
| Yoon-Sik Cho, Greg Ver Steeg, Emilio Ferrara, Aram Galstyan | 10.1145/2872427.2883031 | 1510.05318 | null | null |
Clustering is Easy When ....What? | stat.ML cs.LG | It is well known that most of the common clustering objectives are NP-hard to
optimize. In practice, however, clustering is being routinely carried out. One
approach for providing theoretical understanding of this seeming discrepancy is
to come up with notions of clusterability that distinguish realistically
interesting input data from worst-case data sets. The hope is that there will
be clustering algorithms that are provably efficient on such "clusterable"
instances. This paper addresses the thesis that the computational hardness of
clustering tasks goes away for inputs that one really cares about. In other
words, that "Clustering is difficult only when it does not matter" (the
\emph{CDNM thesis} for short).
I wish to present a critical bird's-eye overview of the results published
on this issue so far and to call attention to the gap between available and
desirable results on this issue. A longer, more detailed version of this note
is available as arXiv:1507.05307.
I discuss which requirements should be met in order to provide formal support
to the CDNM thesis and then examine existing results in view of these
requirements and list some significant unsolved research challenges in that
direction.
| Shai Ben-David | null | 1510.05336 | null | null |
Piecewise-Linear Approximation for Feature Subset Selection in a
Sequential Logit Model | stat.ME cs.LG math.OC stat.ML | This paper concerns a method of selecting a subset of features for a
sequential logit model. Tanaka and Nakagawa (2014) proposed a mixed integer
quadratic optimization formulation for solving the problem based on a quadratic
approximation of the logistic loss function. However, since there is a
significant gap between the logistic loss function and its quadratic
approximation, their formulation may fail to find a good subset of features. To
overcome this drawback, we apply a piecewise-linear approximation to the
logistic loss function. Accordingly, we frame the feature subset selection
problem of minimizing an information criterion as a mixed integer linear
optimization problem. The computational results demonstrate that our
piecewise-linear approximation approach found a better subset of features than
the quadratic approximation approach.
| Toshiki Sato, Yuichi Takano, Ryuhei Miyashiro | null | 1510.05417 | null | null |
Accelerometer based Activity Classification with Variational Inference
on Sticky HDP-SLDS | cs.LG stat.ML | As part of daily monitoring of human activities, wearable sensors and devices
are becoming increasingly popular sources of data. With the advent of
smartphones equipped with an accelerometer, gyroscope and camera, it is now
possible to develop activity classification platforms everyone can use
conveniently. In this paper, we propose a fast inference method for an
unsupervised non-parametric time series model, namely variational inference for
the sticky HDP-SLDS (Hierarchical Dirichlet Process Switching Linear Dynamical
System). We show that the proposed algorithm can differentiate various indoor
activities such as sitting, walking, turning, going up/down the stairs and
taking the elevator using only the accelerometer of an Android smartphone
Samsung Galaxy S4. We used the front camera of the smartphone to annotate
activity types precisely. We compared the proposed method with Hidden Markov
Models with Gaussian emission probabilities on a dataset of 10 subjects,
demonstrating the efficacy of the stickiness property. We further compared
variational inference to the Gibbs sampler on the same model and show that
variational inference is faster by an order of magnitude.
| Mehmet Emin Basbug, Koray Ozcan and Senem Velipasalar | null | 1510.05477 | null | null |
AdaCluster : Adaptive Clustering for Heterogeneous Data | cs.LG | Clustering algorithms start with a fixed divergence, which captures the
possibly asymmetric distance between a sample and a centroid. In the mixture
model setting, the sample distribution plays the same role. When all attributes
have the same topology and dispersion, the data are said to be homogeneous. If
the prior knowledge of the distribution is inaccurate or the set of plausible
distributions is large, an adaptive approach is essential. The motivation is
more compelling for heterogeneous data, where the dispersion or the topology
differs among attributes. We propose an adaptive approach to clustering using
classes of parametrized Bregman divergences. We first show that the density of
a steep exponential dispersion model (EDM) can be represented with a Bregman
divergence. We then propose AdaCluster, an expectation-maximization (EM)
algorithm to cluster heterogeneous data using classes of steep EDMs. We compare
AdaCluster with EM for a Gaussian mixture model on synthetic data and nine UCI
data sets. We also propose an adaptive hard clustering algorithm based on
Generalized Method of Moments. We compare the hard clustering algorithm with
k-means on the UCI data sets. We empirically verified that adaptively learning
the underlying topology yields better clustering of heterogeneous data.
| Mehmet Emin Basbug and Barbara Engelhardt | null | 1510.05491 | null | null |
Application of Machine Learning Techniques in Human Activity Recognition | cs.LG | Human activity detection has seen a tremendous growth in the last decade
playing a major role in the field of pervasive computing. This emerging
popularity can be attributed to its myriad of real-life applications primarily
dealing with human-centric problems like healthcare and elder care. Many
research attempts with data mining and machine learning techniques have been
undertaken to accurately detect human activities for e-health systems. This
paper reviews some of the predictive data mining algorithms and compares the
accuracy and performances of these models. A discussion on the future research
directions is subsequently offered.
| Jitenkumar Babubhai Rana, Rashmi Shetty, Tanya Jha | null | 1510.05577 | null | null |
Stochastically Transitive Models for Pairwise Comparisons: Statistical
and Computational Issues | stat.ML cs.IT cs.LG math.IT | There are various parametric models for analyzing pairwise comparison data,
including the Bradley-Terry-Luce (BTL) and Thurstone models, but their reliance
on strong parametric assumptions is limiting. In this work, we study a flexible
model for pairwise comparisons, under which the probabilities of outcomes are
required only to satisfy a natural form of stochastic transitivity. This class
includes parametric models such as the BTL and Thurstone models as special
cases, but is considerably more general. We provide various examples of models
in this broader stochastically transitive class for which classical parametric
models provide poor fits. Despite this greater flexibility, we show that the
matrix of probabilities can be estimated at the same rate as in standard
parametric models. On the other hand, unlike in the BTL and Thurstone models,
computing the minimax-optimal estimator in the stochastically transitive model
is non-trivial, and we explore various computationally tractable alternatives.
We show that a simple singular value thresholding algorithm is statistically
consistent but does not achieve the minimax rate. We then propose and study
algorithms that achieve the minimax rate over interesting sub-classes of the
full stochastically transitive class. We complement our theoretical results
with thorough numerical simulations.
| Nihar B. Shah, Sivaraman Balakrishnan, Adityanand Guntuboyina and
Martin J. Wainwright | null | 1510.05610 | null | null |
Protein Structure Prediction by Protein Alignments | cs.CE cs.LG q-bio.BM | Proteins are the basic building blocks of life. They usually perform
functions by folding to a particular structure. Understanding the folding
process could help the researchers to understand the functions of proteins and
could also help to develop supplemental proteins for people with deficiencies
and gain more insight into diseases associated with troublesome folding
proteins. Experimental methods are both expensive and time consuming. In this
thesis I introduce a new machine learning based method to predict the protein
structure. The new method improves the performance from two directions:
creating accurate protein alignments and predicting accurate protein contacts.
First, I present an alignment framework MRFalign which goes beyond
state-of-the-art methods and uses Markov Random Fields to model a protein
family and align two proteins by aligning two MRFs together. Compared to other
methods that can only model local-range residue correlations, MRFs can model
long-range residue interactions and thus, encodes global information in a
protein. Secondly, I present a Group Graphical Lasso method for contact
prediction that integrates joint multi-family Evolutionary Coupling analysis
and supervised learning to improve accuracy on proteins without many sequence
homologs. Different from single-family EC analysis that uses residue
co-evolution information in only the target protein family, our joint EC
analysis uses residue co-evolution in both the target family and its related
families, which may have divergent sequences but similar folds. Our method can
also integrate supervised learning methods to further improve accuracy. We
evaluate the performance of both methods including each of its components on
large public benchmarks. Experiments show that our methods can achieve better
accuracy than existing state-of-the-art methods under all the measurements on
most of the protein classes.
| Jianzhu Ma | null | 1510.05682 | null | null |
Qualitative Projection Using Deep Neural Networks | cs.NE cs.LG | Deep neural networks (DNN) abstract by demodulating the output of linear
filters. In this article, we refine this definition of abstraction to show that
the inputs of a DNN are abstracted with respect to the filters. Or, to restate,
the abstraction is qualified by the filters. This leads us to introduce the
notion of qualitative projection. We use qualitative projection to abstract
MNIST hand-written digits with respect to the various dogs, horses, planes and
cars of the CIFAR dataset. We then classify the MNIST digits according to the
magnitude of their dogness, horseness, planeness and carness qualities,
illustrating the generality of qualitative projection.
| Andrew J.R. Simpson | null | 1510.05711 | null | null |
Unsupervised Ensemble Learning with Dependent Classifiers | cs.LG stat.ML | In unsupervised ensemble learning, one obtains predictions from multiple
sources or classifiers, yet without knowing the reliability and expertise of
each source, and with no labeled data to assess it. The task is to combine
these possibly conflicting predictions into an accurate meta-learner. Most
works to date assumed perfect diversity between the different sources, a
property known as conditional independence. In realistic scenarios, however,
this assumption is often violated, and ensemble learners based on it can be
severely sub-optimal. The key challenges we address in this paper are: (i) how
to detect, in an unsupervised manner, strong violations of conditional
independence; and (ii) construct a suitable meta-learner. To this end we
introduce a statistical model that allows for dependencies between classifiers.
Our main contributions are the development of novel unsupervised methods to
detect strongly dependent classifiers, better estimate their accuracies, and
construct an improved meta-learner. Using both artificial and real datasets, we
showcase the importance of taking classifier dependencies into account and the
competitive performance of our approach.
| Ariel Jaffe, Ethan Fetaya, Boaz Nadler, Tingting Jiang, Yuval Kluger | null | 1510.05830 | null | null |
Binary Speaker Embedding | cs.SD cs.LG | The popular i-vector model represents speakers as low-dimensional continuous
vectors (i-vectors), and hence it is a way of continuous speaker embedding. In
this paper, we investigate binary speaker embedding, which transforms i-vectors
to binary vectors (codes) by a hash function. We start from locality sensitive
hashing (LSH), a simple binarization approach where binary codes are derived
from a set of random hash functions. A potential problem of LSH is that the
randomly sampled hash functions might be suboptimal. We therefore propose an
improved Hamming distance learning approach, where the hash function is learned
by a variable-sized block training that projects each dimension of the original
i-vectors to variable-sized binary codes independently. Our experiments show
that binary speaker embedding can deliver competitive or even better results on
both speaker verification and identification tasks, while the memory usage and
the computation cost are significantly reduced.
| Lantian Li and Dong Wang and Chao Xing and Kaimin Yu and Thomas Fang
Zheng | null | 1510.05937 | null | null |
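The LSH starting point described in the record above is easy to sketch: binarize each i-vector by the signs of random projections and compare codes by Hamming distance. The vector dimension and code length below are placeholders, and this covers only the LSH baseline, not the proposed Hamming-distance learning.

```python
import numpy as np

def lsh_binarize(vectors, n_bits, seed=0):
    """Sign-of-random-projection (LSH) binarization of continuous vectors."""
    rng = np.random.default_rng(seed)
    hyperplanes = rng.normal(size=(vectors.shape[1], n_bits))
    return (vectors @ hyperplanes > 0).astype(np.uint8)

def hamming_distance(a, b):
    return int(np.sum(a != b))

# Toy usage: two 400-dimensional "i-vectors" compressed to 64-bit codes.
rng = np.random.default_rng(1)
ivectors = rng.normal(size=(2, 400))
codes = lsh_binarize(ivectors, n_bits=64)
print(hamming_distance(codes[0], codes[1]))
```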
Max-margin Metric Learning for Speaker Recognition | cs.SD cs.LG | Probabilistic linear discriminant analysis (PLDA) is a popular normalization
approach for the i-vector model, and has delivered state-of-the-art performance
in speaker recognition. A potential problem of the PLDA model, however, is that
it essentially assumes Gaussian distributions over speaker vectors, which is
not always true in practice. Additionally, the objective function is not
directly related to the goal of the task, e.g., discriminating true speakers
and imposters. In this paper, we propose a max-margin metric learning approach
to solve the problems. It learns a linear transform with a criterion that the
margin between target and imposter trials is maximized. Experiments conducted
on the SRE08 core test show that compared to PLDA, the new approach can obtain
comparable or even better performance, though the scoring is simply a cosine
computation.
| Lantian Li and Dong Wang and Chao Xing and Thomas Fang Zheng | null | 1510.05940 | null | null |
Optimal Cluster Recovery in the Labeled Stochastic Block Model | math.PR cs.LG cs.SI stat.ML | We consider the problem of community detection or clustering in the labeled
Stochastic Block Model (LSBM) with a finite number $K$ of clusters of sizes
linearly growing with the global population of items $n$. Every pair of items
is labeled independently at random, and label $\ell$ appears with probability
$p(i,j,\ell)$ between two items in clusters indexed by $i$ and $j$,
respectively. The objective is to reconstruct the clusters from the observation
of these random labels.
Clustering under the SBM and its extensions has attracted much attention
recently. Most existing work aimed at characterizing the set of parameters such
that it is possible to infer clusters either positively correlated with the
true clusters, or with a vanishing proportion of misclassified items, or
exactly matching the true clusters. We find the set of parameters such that
there exists a clustering algorithm with at most $s$ misclassified items in
average under the general LSBM and for any $s=o(n)$, which solves one open
problem raised in \cite{abbe2015community}. We further develop an algorithm,
based on simple spectral methods, that achieves this fundamental performance
limit within $O(n \mbox{polylog}(n))$ computations and without the a-priori
knowledge of the model parameters.
| Se-Young Yun and Alexandre Proutiere | null | 1510.05956 | null | null |
Stereo Matching by Training a Convolutional Neural Network to Compare
Image Patches | cs.CV cs.LG cs.NE | We present a method for extracting depth information from a rectified image
pair. Our approach focuses on the first stage of many stereo algorithms: the
matching cost computation. We approach the problem by learning a similarity
measure on small image patches using a convolutional neural network. Training
is carried out in a supervised manner by constructing a binary classification
data set with examples of similar and dissimilar pairs of patches. We examine
two network architectures for this task: one tuned for speed, the other for
accuracy. The output of the convolutional neural network is used to initialize
the stereo matching cost. A series of post-processing steps follow: cross-based
cost aggregation, semiglobal matching, a left-right consistency check, subpixel
enhancement, a median filter, and a bilateral filter. We evaluate our method on
the KITTI 2012, KITTI 2015, and Middlebury stereo data sets and show that it
outperforms other approaches on all three data sets.
| Jure \v{Z}bontar and Yann LeCun | null | 1510.05970 | null | null |
Transductive Optimization of Top k Precision | cs.LG | Consider a binary classification problem in which the learner is given a
labeled training set, an unlabeled test set, and is restricted to choosing
exactly $k$ test points to output as positive predictions. Problems of this
kind---{\it transductive precision@$k$}---arise in information retrieval,
digital advertising, and reserve design for endangered species. Previous
methods separate the training of the model from its use in scoring the test
points. This paper introduces a new approach, Transductive Top K (TTK), that
seeks to minimize the hinge loss over all training instances under the
constraint that exactly $k$ test instances are predicted as positive. The paper
presents two optimization methods for this challenging problem. Experiments and
analysis confirm the importance of incorporating the knowledge of $k$ into the
learning process. Experimental evaluations of the TTK approach show that the
performance of TTK matches or exceeds existing state-of-the-art methods on 7
UCI datasets and 3 reserve design problem instances.
| Li-Ping Liu and Thomas G. Dietterich and Nan Li and Zhi-Hua Zhou | null | 1510.05976 | null | null |
Fast and Scalable Structural SVM with Slack Rescaling | cs.LG | We present an efficient method for training slack-rescaled structural SVM.
Although finding the most violating label in a margin-rescaled formulation is
often easy since the target function decomposes with respect to the structure,
this is not the case for a slack-rescaled formulation, and finding the most
violated label might be very difficult. Our core contribution is an efficient
method for finding the most-violating-label in a slack-rescaled formulation,
given an oracle that returns the most-violating-label in a (slightly modified)
margin-rescaled formulation. We show that our method enables accurate and
scalable training for slack-rescaled SVMs, reducing runtime by an order of
magnitude compared to previous approaches to slack-rescaled SVMs.
| Heejin Choi, Ofer Meshi, Nathan Srebro | null | 1510.06002 | null | null |
Robust Semi-Supervised Classification for Multi-Relational Graphs | cs.LG | Graph-regularized semi-supervised learning has been used effectively for
classification when (i) instances are connected through a graph, and (ii)
labeled data is scarce. If available, using multiple relations (or graphs)
between the instances can improve the prediction performance. On the other
hand, when these relations have varying levels of veracity and exhibit varying
relevance for the task, very noisy and/or irrelevant relations may deteriorate
the performance. As a result, an effective weighting scheme needs to be put in
place. In this work, we propose a robust and scalable approach for
multi-relational graph-regularized semi-supervised classification. Under a
convex optimization scheme, we simultaneously infer weights for the multiple
graphs as well as a solution. We provide a careful analysis of the inferred
weights, based on which we devise an algorithm that filters out irrelevant and
noisy graphs and produces weights proportional to the informativeness of the
remaining graphs. Moreover, the proposed method is linearly scalable w.r.t. the
number of edges in the union of the multiple graphs. Through extensive
experiments we show that our method yields superior results under different
noise models, and under increasing number of noisy graphs and intensity of
noise, as compared to a list of baselines and state-of-the-art approaches.
| Junting Ye, Leman Akoglu | null | 1510.06024 | null | null |
Regularization vs. Relaxation: A conic optimization perspective of
statistical variable selection | cs.LG math.NA math.OC stat.ML | Variable selection is a fundamental task in statistical data analysis.
Sparsity-inducing regularization methods are a popular class of methods that
simultaneously perform variable selection and model estimation. The central
problem is a quadratic optimization problem with an l0-norm penalty. Exactly
enforcing the l0-norm penalty is computationally intractable for larger scale
problems, so different sparsity-inducing penalty functions that approximate
the l0-norm have been introduced. In this paper, we show that viewing the
problem from a convex relaxation perspective offers new insights. In
particular, we show that a popular sparsity-inducing concave penalty function
known as the Minimax Concave Penalty (MCP), and the reverse Huber penalty
derived in a recent work by Pilanci, Wainwright and Ghaoui, can both be derived
as special cases of a lifted convex relaxation called the perspective
relaxation. The optimal perspective relaxation is a related minimax problem
that balances the overall convexity and tightness of approximation to the l0
norm. We show it can be solved by a semidefinite relaxation. Moreover, a
probabilistic interpretation of the semidefinite relaxation reveals connections
with the boolean quadric polytope in combinatorial optimization. Finally by
reformulating the l0-norm penalized problem as a two-level problem, with the
inner level being a Max-Cut problem, our proposed semidefinite relaxation can
be realized by replacing the inner level problem with its semidefinite
relaxation studied by Goemans and Williamson. This interpretation suggests
using the Goemans-Williamson rounding procedure to find approximate solutions
to the l0-norm penalized problem. Numerical experiments demonstrate the
tightness of our proposed semidefinite relaxation, and the effectiveness of
finding approximate solutions by Goemans-Williamson rounding.
| Hongbo Dong and Kun Chen and Jeff Linderoth | null | 1510.06083 | null | null |
High Performance Latent Variable Models | cs.LG cs.AI | Latent variable models have accumulated a considerable amount of interest
from the industry and academia for their versatility in a wide range of
applications. A large amount of effort has been made to develop systems that
are able to operate at large scale, in the hope of making use of them on
industry-scale data. In this paper, we describe a system that operates at a
scale orders of magnitude higher than previous works, and an order of magnitude
faster than the state-of-the-art system at the same scale, while at the same
time showing more robustness and more accurate results.
Our system uses a number of advances in distributed inference: high
performance in synchronization of sufficient statistics with relaxed
consistency model; fast sampling, using the Metropolis-Hastings-Walker method
to overcome dense generative models; statistical modeling, moving beyond Latent
Dirichlet Allocation (LDA) to Pitman-Yor distributions (PDP) and Hierarchical
Dirichlet Process (HDP) models; sophisticated parameter projection schemes, to
resolve the conflicts within the constraint between parameters arising from the
relaxed consistency model.
This work significantly extends the domain of applicability of what is
commonly known as the Parameter Server. We obtain results with up to hundreds
billion oftokens, thousands of topics, and a vocabulary of a few million
token-types, using up to 60,000 processor cores operating on a production
cluster of a large Internet company. This demonstrates the feasibility to scale
to problems orders of magnitude larger than any previously published work.
| Aaron Q. Li, Amr Ahmed, Mu Li, Vanja Josifovski | null | 1510.06143 | null | null |
Learning-based Compressive Subsampling | cs.IT cs.LG math.IT stat.ML | The problem of recovering a structured signal $\mathbf{x} \in \mathbb{C}^p$
from a set of dimensionality-reduced linear measurements $\mathbf{b} = \mathbf
{A}\mathbf {x}$ arises in a variety of applications, such as medical imaging,
spectroscopy, Fourier optics, and computerized tomography. Due to computational
and storage complexity or physical constraints imposed by the problem, the
measurement matrix $\mathbf{A} \in \mathbb{C}^{n \times p}$ is often of the
form $\mathbf{A} = \mathbf{P}_{\Omega}\boldsymbol{\Psi}$ for some orthonormal
basis matrix $\boldsymbol{\Psi}\in \mathbb{C}^{p \times p}$ and subsampling
operator $\mathbf{P}_{\Omega}: \mathbb{C}^{p} \rightarrow \mathbb{C}^{n}$ that
selects the rows indexed by $\Omega$. This raises the fundamental question of
how best to choose the index set $\Omega$ in order to optimize the recovery
performance. Previous approaches to addressing this question rely on
non-uniform \emph{random} subsampling using application-specific knowledge of
the structure of $\mathbf{x}$. In this paper, we instead take a principled
learning-based approach in which a \emph{fixed} index set is chosen based on a
set of training signals $\mathbf{x}_1,\dotsc,\mathbf{x}_m$. We formulate
combinatorial optimization problems seeking to maximize the energy captured in
these signals in an average-case or worst-case sense, and we show that these
can be efficiently solved either exactly or approximately via the
identification of modularity and submodularity structures. We provide both
deterministic and statistical theoretical guarantees showing how the resulting
measurement matrices perform on signals differing from the training signals,
and we provide numerical examples showing our approach to be effective on a
variety of data sets.
| Luca Baldassarre and Yen-Huan Li and Jonathan Scarlett and Baran
G\"ozc\"u and Ilija Bogunovic and Volkan Cevher | 10.1109/JSTSP.2016.2548442 | 1510.06188 | null | null |
Time-Sensitive Bayesian Information Aggregation for Crowdsourcing
Systems | cs.AI cs.LG | Crowdsourcing systems commonly face the problem of aggregating multiple
judgments provided by potentially unreliable workers. In addition, several
aspects of the design of efficient crowdsourcing processes, such as defining
worker's bonuses, fair prices and time limits of the tasks, involve knowledge
of the likely duration of the task at hand. Bringing this together, in this
work we introduce a new time-sensitive Bayesian aggregation method that
simultaneously estimates a task's duration and obtains reliable aggregations of
crowdsourced judgments. Our method, called BCCTime, builds on the key insight
that the time taken by a worker to perform a task is an important indicator of
the likely quality of the produced judgment. To capture this, BCCTime uses
latent variables to represent the uncertainty about the workers' completion
time, the tasks' duration and the workers' accuracy. To relate the quality of a
judgment to the time a worker spends on a task, our model assumes that each
task is completed within a latent time window within which all workers with a
propensity to genuinely attempt the labelling task (i.e., no spammers) are
expected to submit their judgments. In contrast, workers with a lower
propensity to valid labeling, such as spammers, bots or lazy labelers, are
assumed to perform tasks considerably faster or slower than the time required
by normal workers. Specifically, we use efficient message-passing Bayesian
inference to learn approximate posterior probabilities of (i) the confusion
matrix of each worker, (ii) the propensity to valid labeling of each worker,
(iii) the unbiased duration of each task and (iv) the true label of each task.
Using two real-world public datasets for entity linking tasks, we show that
BCCTime produces up to 11% more accurate classifications and up to 100% more
informative estimates of a task's duration compared to state-of-the-art
methods.
| Matteo Venanzi, John Guiver, Pushmeet Kohli, Nick Jennings | null | 1510.06335 | null | null |
Application of Quantum Annealing to Training of Deep Neural Networks | quant-ph cs.LG stat.ML | In Deep Learning, a well-known approach for training a Deep Neural Network
starts by training a generative Deep Belief Network model, typically using
Contrastive Divergence (CD), then fine-tuning the weights using backpropagation
or other discriminative techniques. However, the generative training can be
time-consuming due to the slow mixing of Gibbs sampling. We investigated an
alternative approach that estimates model expectations of Restricted Boltzmann
Machines using samples from a D-Wave quantum annealing machine. We tested this
method on a coarse-grained version of the MNIST data set. In our tests we found
that the quantum sampling-based training approach achieves comparable or better
accuracy with significantly fewer iterations of generative training than
conventional CD-based training. Further investigation is needed to determine
whether similar improvements can be achieved for other data sets, and to what
extent these improvements can be attributed to quantum effects.
| Steven H. Adachi and Maxwell P. Henderson | null | 1510.06356 | null | null |
Optimization as Estimation with Gaussian Processes in Bandit Settings | stat.ML cs.LG | Recently, there has been rising interest in Bayesian optimization -- the
optimization of an unknown function with assumptions usually expressed by a
Gaussian Process (GP) prior. We study an optimization strategy that directly
uses an estimate of the argmax of the function. This strategy offers both
practical and theoretical advantages: no tradeoff parameter needs to be
selected, and, moreover, we establish close connections to the popular GP-UCB
and GP-PI strategies. Our approach can be understood as automatically and
adaptively trading off exploration and exploitation in GP-UCB and GP-PI. We
illustrate the effects of this adaptive tuning via bounds on the regret as well
as an extensive empirical evaluation on robotics and vision tasks,
demonstrating the robustness of this strategy for a range of performance
criteria.
| Zi Wang, Bolei Zhou, Stefanie Jegelka | null | 1510.06423 | null | null |
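For readers who want the GP-UCB baseline that the record above connects to, here is a minimal one-dimensional sketch: maintain a GP posterior with an RBF kernel and repeatedly query the grid point with the highest mean-plus-uncertainty score. The lengthscale, exploration weight and test function are illustrative choices, and this is not the paper's estimation-based strategy.

```python
import numpy as np

def gp_posterior(X, y, Xs, lengthscale=0.2, noise=1e-4):
    """Posterior mean and standard deviation of a GP with an RBF kernel."""
    def k(A, B):
        d = A[:, None, :] - B[None, :, :]
        return np.exp(-0.5 * np.sum(d ** 2, axis=-1) / lengthscale ** 2)
    K = k(X, X) + noise * np.eye(len(X))
    Ks = k(X, Xs)
    sol = np.linalg.solve(K, Ks)
    mu = sol.T @ y
    var = np.clip(1.0 - np.sum(Ks * sol, axis=0), 0.0, None)  # k(x, x) = 1 for this RBF
    return mu, np.sqrt(var)

# GP-UCB loop: query the point maximizing mean + beta * std on a grid.
f = lambda x: np.sin(3.0 * x) + 0.5 * x
grid = np.linspace(0.0, 2.0, 200)[:, None]
X = np.array([[0.1], [1.9]])
y = np.array([f(0.1), f(1.9)])
for _ in range(10):
    mu, sd = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(mu + 2.0 * sd)]
    X = np.vstack([X, x_next[None, :]])
    y = np.append(y, f(x_next[0]))
print(float(grid[np.argmax(gp_posterior(X, y, grid)[0])][0]))  # estimated argmax location
```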
Generalized Shortest Path Kernel on Graphs | cs.DS cs.LG | We consider the problem of classifying graphs using graph kernels. We define
a new graph kernel, called the generalized shortest path kernel, based on the
number and length of shortest paths between nodes. For our example
classification problem, we consider the task of classifying random graphs from
two well-known families, by the number of clusters they contain. We verify
empirically that the generalized shortest path kernel outperforms the original
shortest path kernel on a number of datasets. We give a theoretical analysis
for explaining our experimental results. In particular, we estimate
distributions of the expected feature vectors for the shortest path kernel and
the generalized shortest path kernel, and we show some evidence explaining why
our graph kernel outperforms the shortest path kernel for our graph
classification problem.
| Linus Hermansson, Fredrik D. Johansson, and Osamu Watanabe | null | 1510.06492 | null | null |
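As a reference point for the baseline that the record above generalizes, the original shortest-path kernel can be computed as an inner product of shortest-path-length histograms; a minimal sketch follows (the random test graphs and the length cap are arbitrary, and the generalized kernel would additionally count the number of shortest paths between each pair).

```python
import numpy as np
import networkx as nx

def sp_features(G, max_len=10):
    """Histogram of shortest-path lengths over all ordered node pairs of G."""
    hist = np.zeros(max_len + 1)
    for _, lengths in nx.all_pairs_shortest_path_length(G):
        for d in lengths.values():
            if 0 < d <= max_len:
                hist[d] += 1
    return hist

def sp_kernel(G1, G2, max_len=10):
    """Original shortest-path kernel as an inner product of length histograms."""
    return float(sp_features(G1, max_len) @ sp_features(G2, max_len))

G1 = nx.gnp_random_graph(20, 0.2, seed=0)
G2 = nx.gnp_random_graph(20, 0.3, seed=1)
print(sp_kernel(G1, G2))
```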
Multi-GPU Distributed Parallel Bayesian Differential Topic Modelling | cs.CL cs.DC cs.LG | There is an explosion of data, documents, and other content, and people
require tools to analyze and interpret these, tools to turn the content into
information and knowledge. Topic models have been developed to solve these
problems. Models such as LDA [Blei et al. 2003] allow salient patterns
in data to be extracted automatically. When analyzing texts, these patterns are
called topics. Among numerous extensions of LDA, few of them can reliably
analyze multiple groups of documents and extract topic similarities. Recently,
the introduction of differential topic modeling (SPDP) [Chen et al. 2012]
performs uniformly better than many topic models in a discriminative setting.
There is also a need to improve the sampling speed for topic models. While
some effort has been made for distributed algorithms, there is no work
currently done using graphics processing units (GPUs). Note that the GPU framework
has already become the most cost-efficient platform for many problems.
In this thesis, I propose and implement a scalable multi-GPU distributed
parallel framework which approximates SPDP. Through experiments, I have shown
my algorithms have a gain in speed of about 50 times while being almost as
accurate, with only one single cheap laptop GPU. Furthermore, I have shown the
speed improvement is sublinearly scalable when multiple GPUs are used, while
fairly maintaining the accuracy. Therefore on a medium-sized GPU cluster, the
speed improvement could potentially reach a factor of a thousand.
Note SPDP is just a representative of other extensions of LDA. Although my
algorithm is implemented to work with SPDP, it is designed to be general
enough to work with other topic models. The speed-up on smaller collections
(i.e., 1000s of documents), means that these more complex LDA extensions could
now be done in real-time, thus opening up a new way of using these LDA models
in industry.
| Aaron Q Li | null | 1510.06549 | null | null |
Generalized conditional gradient: analysis of convergence and
applications | cs.LG math.OC stat.ML | The objective of this technical report is to provide additional results on
the generalized conditional gradient methods introduced by Bredies et al.
[BLM05]. Indeed, when the objective function is smooth, we provide a novel
certificate of optimality and we show that the algorithm has a linear
convergence rate. Applications of this algorithm are also discussed.
| Alain Rakotomamonjy (LITIS), R\'emi Flamary (LAGRANGE, OCA), Nicolas
Courty (OBELIX) | null | 1510.06567 | null | null |
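For readers unfamiliar with the plain conditional gradient (Frank-Wolfe) method that the report above generalizes, here is a minimal sketch over the probability simplex, using the textbook step size and a toy quadratic objective rather than the report's setting.

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, iters=200):
    """Plain conditional gradient (Frank-Wolfe) over the probability simplex:
    the linear minimization oracle returns the vertex (coordinate) with the
    smallest gradient entry, and iterates stay feasible by convex combination."""
    x = x0.copy()
    for t in range(iters):
        g = grad(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0        # simplex vertex minimizing <g, s>
        gamma = 2.0 / (t + 2.0)      # standard open-loop step size
        x = (1.0 - gamma) * x + gamma * s
    return x

# Toy usage: approximately project a point onto the simplex by minimizing ||x - c||^2.
c = np.array([0.7, 0.2, 0.1, -0.3])
x = frank_wolfe_simplex(lambda x: 2.0 * (x - c), np.ones(4) / 4.0)
print(x, x.sum())
```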
Collective Prediction of Individual Mobility Traces with Exponential
Weights | physics.soc-ph cs.CY cs.LG stat.ML | We present and test a sequential learning algorithm for the short-term
prediction of human mobility. This novel approach pairs the Exponential Weights
forecaster with a very large ensemble of experts. The experts are individual
sequence prediction algorithms constructed from the mobility traces of 10
million roaming mobile phone users in a European country. Average prediction
accuracy is significantly higher than that of individual sequence prediction
algorithms, namely constant order Markov models derived from the user's own
data, that have been shown to achieve high accuracy in previous studies of
human mobility prediction. The algorithm uses only time stamped location data,
and accuracy depends on the completeness of the expert ensemble, which should
contain redundant records of typical mobility patterns. The proposed algorithm
is applicable to the prediction of any sufficiently large dataset of sequences.
| Bartosz Hawelka, Izabela Sitko, Pavlos Kazakopoulos and Euro Beinat | null | 1510.06582 | null | null |
A 'Gibbs-Newton' Technique for Enhanced Inference of Multivariate Polya
Parameters and Topic Models | cs.LG cs.CL stat.ML | Hyper-parameters play a major role in the learning and inference process of
latent Dirichlet allocation (LDA). In order to begin the LDA latent variables
learning process, these hyper-parameters values need to be pre-determined. We
propose an extension for LDA that we call 'Latent Dirichlet allocation Gibbs
Newton' (LDA-GN), which places non-informative priors over these
hyper-parameters and uses Gibbs sampling to learn appropriate values for them.
At the heart of LDA-GN is our proposed 'Gibbs-Newton' algorithm, which is a new
technique for learning the parameters of multivariate Polya distributions. We
report Gibbs-Newton performance results compared with two prominent existing
approaches to the latter task: Minka's fixed-point iteration method and the
Moments method. We then evaluate LDA-GN in two ways: (i) by comparing it with
standard LDA in terms of the ability of the resulting topic models to
generalize to unseen documents; (ii) by comparing it with standard LDA in its
performance on a binary classification task.
| Osama Khalifa, David Wolfe Corne, Mike Chantler | null | 1510.06646 | null | null |
Random Projections through multiple optical scattering: Approximating
kernels at the speed of light | cs.ET cs.LG physics.optics | Random projections have proven extremely useful in many signal processing and
machine learning applications. However, they often require either to store a
very large random matrix, or to use a different, structured matrix to reduce
the computational and memory costs. Here, we overcome this difficulty by
proposing an analog, optical device, that performs the random projections
literally at the speed of light without having to store any matrix in memory.
This is achieved using the physical properties of multiple coherent scattering
of coherent light in random media. We use this device on a simple task of
classification with a kernel machine, and we show that, on the MNIST database,
the experimental results closely match the theoretical performance of the
corresponding kernel. This framework can help make kernel methods practical for
applications that have large training sets and/or require real-time prediction.
We discuss possible extensions of the method in terms of a class of kernels,
speed, memory consumption and different problems.
| Alaa Saade, Francesco Caltagirone, Igor Carron, Laurent Daudet,
Ang\'elique Dr\'emeau, Sylvain Gigan and Florent Krzakala | 10.1109/ICASSP.2016.7472872 | 1510.06664 | null | null |
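Numerically, the projections that the optical device in the record above performs amount to multiplying data by a large random matrix; the sketch below checks that a plain Gaussian random projection approximately preserves the linear-kernel Gram matrix. Sizes are illustrative, and this is only the numerical analogue, not a model of the optical hardware.

```python
import numpy as np

def random_projection(X, k, seed=0):
    """Dense Gaussian random projection of the rows of X into k dimensions."""
    rng = np.random.default_rng(seed)
    R = rng.normal(size=(X.shape[1], k)) / np.sqrt(k)
    return X @ R

# Toy check on a few high-dimensional points (dimensions are illustrative).
rng = np.random.default_rng(1)
X = rng.normal(size=(5, 1000))
Z = random_projection(X, k=2000)
err = np.linalg.norm(X @ X.T - Z @ Z.T) / np.linalg.norm(X @ X.T)
print(err)  # typically a few percent for these sizes
```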
Dual Free Adaptive Mini-batch SDCA for Empirical Risk Minimization | math.OC cs.LG | In this paper we develop dual free mini-batch SDCA with adaptive
probabilities for regularized empirical risk minimization. This work is
motivated by the recent work of Shai Shalev-Shwartz on the dual free SDCA
method; however, we allow a non-uniform selection of "dual" coordinates in SDCA.
Moreover, the probabilities can change over time, making the method more
efficient than a fixed uniform or non-uniform selection. We also propose an
efficient procedure to generate a random non-uniform mini-batch through an
iterative process. The work is
concluded with multiple numerical experiments to show the efficiency of
proposed algorithms.
| Xi He and Martin Tak\'a\v{c} | null | 1510.06684 | null | null |
Partitioning Data on Features or Samples in Communication-Efficient
Distributed Optimization? | math.OC cs.LG | In this paper we study the effect of the way that the data is partitioned in
distributed optimization. The original DiSCO algorithm [Communication-Efficient
Distributed Optimization of Self-Concordant Empirical Loss, Yuchen Zhang and
Lin Xiao, 2015] partitions the input data based on samples. We describe how the
original algorithm has to be modified to allow partitioning on features and
show its efficiency both in theory and also in practice.
| Chenxin Ma and Martin Tak\'a\v{c} | null | 1510.06688 | null | null |
ZNN - A Fast and Scalable Algorithm for Training 3D Convolutional
Networks on Multi-Core and Many-Core Shared Memory Machines | cs.NE cs.CV cs.DC cs.LG | Convolutional networks (ConvNets) have become a popular approach to computer
vision. It is important to accelerate ConvNet training, which is
computationally costly. We propose a novel parallel algorithm based on
decomposition into a set of tasks, most of which are convolutions or FFTs.
Applying Brent's theorem to the task dependency graph implies that linear
speedup with the number of processors is attainable within the PRAM model of
parallel computation, for wide network architectures. To attain such
performance on real shared-memory machines, our algorithm computes convolutions
converging on the same node of the network with temporal locality to reduce
cache misses, and sums the convergent convolution outputs via an almost
wait-free concurrent method to reduce time spent in critical sections. We
implement the algorithm with a publicly available software package called ZNN.
Benchmarking with multi-core CPUs shows that ZNN can attain speedup roughly
equal to the number of physical cores. We also show that ZNN can attain over
90x speedup on a many-core CPU (Xeon Phi Knights Corner). These speedups are
achieved for network architectures with widths that are in common use. The task
parallelism of the ZNN algorithm is suited to CPUs, while the SIMD parallelism
of previous algorithms is compatible with GPUs. Through examples, we show that
ZNN can be either faster or slower than certain GPU implementations depending
on specifics of the network architecture, kernel sizes, and density and size of
the output patch. ZNN may be less costly to develop and maintain, due to the
relative ease of general-purpose CPU programming.
| Aleksandar Zlateski, Kisuk Lee and H. Sebastian Seung | 10.1109/IPDPS.2016.119 | 1510.06706 | null | null |
Freshman or Fresher? Quantifying the Geographic Variation of Internet
Language | cs.CL cs.IR cs.LG | We present a new computational technique to detect and analyze statistically
significant geographic variation in language. Our meta-analysis approach
captures statistical properties of word usage across geographical regions and
uses statistical methods to identify significant changes specific to regions.
While previous approaches have primarily focused on lexical variation between
regions, our method identifies words that demonstrate semantic and syntactic
variation as well.
We extend recently developed techniques for neural language models to learn
word representations which capture differing semantics across geographical
regions. In order to quantify this variation and ensure robust detection of
true regional differences, we formulate a null model to determine whether
observed changes are statistically significant. Our method is the first such
approach to explicitly account for random variation due to chance while
detecting regional variation in word meaning.
To validate our model, we study and analyze two different massive online data
sets: millions of tweets from Twitter spanning not only four different
countries but also fifty states, as well as millions of phrases contained in
the Google Book Ngrams. Our analysis reveals interesting facets of language
change at multiple scales of geographic resolution -- from neighboring states
to distant continents.
Finally, using our model, we propose a measure of semantic distance between
languages. Our analysis of British and American English over a period of 100
years reveals that semantic variation between these dialects is shrinking.
| Vivek Kulkarni, Bryan Perozzi, Steven Skiena | null | 1510.06786 | null | null |
Nonconvex Nonsmooth Low-Rank Minimization via Iteratively Reweighted
Nuclear Norm | cs.LG cs.CV cs.NA | The nuclear norm is widely used as a convex surrogate of the rank function in
compressive sensing for low rank matrix recovery with its applications in image
recovery and signal processing. However, solving the nuclear norm based relaxed
convex problem usually leads to a suboptimal solution of the original rank
minimization problem. In this paper, we propose to perform a family of
nonconvex surrogates of $L_0$-norm on the singular values of a matrix to
approximate the rank function. This leads to a nonconvex nonsmooth minimization
problem. Then we propose to solve the problem by Iteratively Reweighted Nuclear
Norm (IRNN) algorithm. IRNN iteratively solves a Weighted Singular Value
Thresholding (WSVT) problem, which has a closed form solution due to the
special properties of the nonconvex surrogate functions. We also extend IRNN to
solve the nonconvex problem with two or more blocks of variables. In theory, we
prove that IRNN decreases the objective function value monotonically, and any
limit point is a stationary point. Extensive experiments on both synthesized
data and real images demonstrate that IRNN enhances the low-rank matrix
recovery compared with state-of-the-art convex algorithms.
| Canyi Lu, Jinhui Tang, Shuicheng Yan, Zhouchen Lin | 10.1109/TIP.2015.2511584 | 1510.06895 | null | null |
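As a rough illustration of the IRNN scheme described above, the sketch below alternates a gradient step on a matrix-completion data-fit term with a weighted singular value thresholding step, using an MCP-style concave surrogate of the rank. The surrogate choice, parameter names, and values are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def weighted_svt(Y, w):
    # Weighted singular value thresholding: argmin_X sum_i w_i * sigma_i(X)
    # + 0.5 * ||X - Y||_F^2.  The closed form below assumes non-decreasing
    # weights, as produced by the gradient of a concave surrogate evaluated
    # at non-increasing singular values.
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U * np.maximum(s - w, 0.0)) @ Vt

def irnn(M, mask, lam=1.0, gamma=10.0, mu=1.1, n_iter=200):
    # Illustrative IRNN loop for matrix completion with an MCP-like surrogate;
    # 'mask' marks observed entries and 'mu' is a step-size parameter.
    X = np.zeros_like(M)
    for _ in range(n_iter):
        # Gradient step on the smooth data-fit term.
        G = X - (mask * (X - M)) / mu
        # Reweight: derivative of the MCP surrogate at current singular values.
        s = np.linalg.svd(X, compute_uv=False)
        w = lam * np.maximum(1.0 - s / (gamma * lam), 0.0) / mu
        X = weighted_svt(G, w)
    return X
```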
On the complexity of switching linear regression | stat.ML cs.CC cs.LG | This technical note extends recent results on the computational complexity of
globally minimizing the error of piecewise-affine models to the related problem
of minimizing the error of switching linear regression models. In particular,
we show that, on the one hand the problem is NP-hard, but on the other hand, it
admits a polynomial-time algorithm with respect to the number of data points
for any fixed data dimension and number of modes.
| Fabien Lauer (ABC) | null | 1510.06920 | null | null |
Modeling User Exposure in Recommendation | stat.ML cs.IR cs.LG | Collaborative filtering analyzes user preferences for items (e.g., books,
movies, restaurants, academic papers) by exploiting the similarity patterns
across users. In implicit feedback settings, all the items, including the ones
that a user did not consume, are taken into consideration. But this assumption
does not accord with the common sense understanding that users have a limited
scope and awareness of items. For example, a user might not have heard of a
certain paper, or might live too far away from a restaurant to experience it.
In the language of causal analysis, the assignment mechanism (i.e., the items
that a user is exposed to) is a latent variable that may change for various
user/item combinations. In this paper, we propose a new probabilistic approach
that directly incorporates user exposure to items into collaborative filtering.
The exposure is modeled as a latent variable and the model infers its value
from data. In doing so, we recover one of the most successful state-of-the-art
approaches as a special case of our model, and provide a plug-in method for
conditioning exposure on various forms of exposure covariates (e.g., topics in
text, venue locations). We show that our scalable inference algorithm
outperforms existing benchmarks in four different domains both with and without
exposure covariates.
| Dawen Liang, Laurent Charlin, James McInerney, David M. Blei | null | 1510.07025 | null | null |
Fast Latent Variable Models for Inference and Visualization on Mobile
Devices | cs.LG cs.CL cs.DC cs.IR | In this project we outline Vedalia, a high performance distributed network
for performing inference on latent variable models in the context of Amazon
review visualization. We introduce a new model, RLDA, which extends Latent
Dirichlet Allocation (LDA) [Blei et al., 2003] for the review space by
incorporating auxiliary data available in online reviews to improve modeling
while simultaneously remaining compatible with pre-existing fast sampling
techniques such as [Yao et al., 2009; Li et al., 2014a] to achieve high
performance. The network is designed such that computation is efficiently
offloaded to the client devices using the Chital system [Robinson & Li, 2015],
improving response times and reducing server costs. The resulting system is
able to rapidly compute a large number of specialized latent variable models
while requiring minimal server resources.
| Joseph W Robinson, Aaron Q Li | null | 1510.07035 | null | null |
Data-driven detrending of nonstationary fractal time series with echo
state networks | physics.data-an cs.LG cs.NE | In this paper, we propose a novel data-driven approach for removing trends
(detrending) from nonstationary, fractal and multifractal time series. We
consider real-valued time series relative to measurements of an underlying
dynamical system that evolves through time. We assume that such a dynamical
process is predictable to a certain degree by means of a class of recurrent
networks called Echo State Networks (ESNs), which are capable of modeling a
generic dynamical process. In order to isolate the superimposed (multi)fractal
component of interest, we define a data-driven filter by leveraging the ESN
prediction capability to identify the trend component of a given input time
series. Specifically, the (estimated) trend is removed from the original time
series and the residual signal is analyzed with the multifractal detrended
fluctuation analysis procedure to verify the correctness of the detrending
procedure. In order to demonstrate the effectiveness of the proposed technique,
we consider several synthetic time series consisting of different types of
trends and fractal noise components with known characteristics. We also process
a real-world dataset, the sunspot time series, which is well-known for its
multifractal features and has recently gained attention in the complex systems
field. Results demonstrate the validity and generality of the proposed
detrending method based on ESNs.
| Enrico Maiorino, Filippo Maria Bianchi, Lorenzo Livi, Antonello Rizzi,
Alireza Sadeghian | 10.1016/j.ins.2016.12.015 | 1510.07146 | null | null |
Fast and Scalable Lasso via Stochastic Frank-Wolfe Methods with a
Convergence Guarantee | stat.ML cs.LG math.OC | Frank-Wolfe (FW) algorithms have been often proposed over the last few years
as efficient solvers for a variety of optimization problems arising in the
field of Machine Learning. The ability to work with cheap projection-free
iterations and the incremental nature of the method make FW a very effective
choice for many large-scale problems where computing a sparse model is
desirable.
In this paper, we present a high-performance implementation of the FW method
tailored to solve large-scale Lasso regression problems, based on a randomized
iteration, and prove that the convergence guarantees of the standard FW method
are preserved in the stochastic setting. We show experimentally that our
algorithm outperforms several existing state of the art methods, including the
Coordinate Descent algorithm by Friedman et al. (one of the fastest known Lasso
solvers), on several benchmark datasets with a very large number of features,
without sacrificing the accuracy of the model. Our results illustrate that the
algorithm is able to generate the complete regularization path on problems of
size up to four million variables in less than one minute.
| Emanuele Frandi, Ricardo Nanculef, Stefano Lodi, Claudio Sartori,
Johan A. K. Suykens | null | 1510.07169 | null | null |
Vehicle Speed Prediction using Deep Learning | cs.LG cs.NE | Global optimization of the energy consumption of dual power source vehicles
such as hybrid electric vehicles, plug-in hybrid electric vehicles, and plug-in
fuel cell electric vehicles requires knowledge of the complete route
characteristics at the beginning of the trip. One of the main characteristics
is the vehicle speed profile across the route. The profile will translate
directly into energy requirements for a given vehicle. However, the vehicle
speed that a given driver chooses will vary from driver to driver and from time
to time, and may be slower, equal to, or faster than the average traffic flow.
If the specific driver speed profile can be predicted, the energy usage can be
optimized across the route chosen. The purpose of this paper is to research the
application of Deep Learning techniques to this problem to identify at the
beginning of a drive cycle the driver-specific vehicle speed profile for an
individual driver's repeated drive cycle, which can be used in an optimization
algorithm to minimize the amount of fossil fuel energy used during the trip.
| Joe Lemieux, Yuan Ma | null | 1510.07208 | null | null |
On End-to-End Program Generation from User Intention by Deep Neural
Networks | cs.SE cs.LG | This paper envisions an end-to-end program generation scenario using
recurrent neural networks (RNNs): Users can express their intention in natural
language; an RNN then automatically generates corresponding code in a
character-by-character fashion. We demonstrate its feasibility through a case
study and empirical analysis. To make such a technique fully useful in practice,
we also point out several cross-disciplinary challenges, including modeling
user intention, providing datasets, improving model architectures, etc.
Although much long-term research remains to be addressed in this new field, we
believe end-to-end program generation will become a reality in future decades,
and we look forward to its practice.
| Lili Mou, Rui Men, Ge Li, Lu Zhang, Zhi Jin | null | 1510.07211 | null | null |
A Framework for Distributed Deep Learning Layer Design in Python | cs.LG | In this paper, a framework for testing Deep Neural Network (DNN) design in
Python is presented. First, big data, machine learning (ML), and Artificial
Neural Networks (ANNs) are discussed to familiarize the reader with the
importance of such a system. Next, the benefits and detriments of implementing
such a system in Python are presented. Lastly, the specifics of the system are
explained, and some experimental results are presented to prove the
effectiveness of the system.
| Clay McLeod | null | 1510.07303 | null | null |
The Human Kernel | cs.LG cs.AI stat.ML | Bayesian nonparametric models, such as Gaussian processes, provide a
compelling framework for automatic statistical modelling: these models have a
high degree of flexibility, and automatically calibrated complexity. However,
automating human expertise remains elusive; for example, Gaussian processes
with standard kernels struggle on function extrapolation problems that are
trivial for human learners. In this paper, we create function extrapolation
problems and acquire human responses, and then design a kernel learning
framework to reverse engineer the inductive biases of human learners across a
set of behavioral experiments. We use the learned kernels to gain psychological
insights and to extrapolate in human-like ways that go beyond traditional
stationary and polynomial kernels. Finally, we investigate Occam's razor in
human and Gaussian process based function learning.
| Andrew Gordon Wilson, Christoph Dann, Christopher G. Lucas, Eric P.
Xing | null | 1510.07389 | null | null |
Empirical Study on Deep Learning Models for Question Answering | cs.CL cs.AI cs.LG | In this paper we explore deep learning models with memory component or
attention mechanism for question answering task. We combine and compare three
models, Neural Machine Translation, Neural Turing Machine, and Memory Networks
for a simulated QA data set. This paper is the first one that uses Neural
Machine Translation and Neural Turing Machines for solving QA tasks. Our
results suggest that the combination of attention and memory have potential to
solve certain QA problem.
| Yang Yu, Wei Zhang, Chung-Wei Hang, Bing Xiang and Bowen Zhou | null | 1510.07526 | null | null |
Using Shortlists to Support Decision Making and Improve Recommender
System Performance | cs.HC cs.IR cs.LG | In this paper, we study shortlists as an interface component for recommender
systems with the dual goal of supporting the user's decision process, as well
as improving implicit feedback elicitation for increased recommendation
quality. A shortlist is a temporary list of candidates that the user is
currently considering, e.g., a list of a few movies the user is currently
considering for viewing. From a cognitive perspective, shortlists serve as
digital short-term memory where users can off-load the items under
consideration -- thereby decreasing their cognitive load. From a machine
learning perspective, adding items to the shortlist generates a new implicit
feedback signal as a by-product of exploration and decision making which can
improve recommendation quality. Shortlisting therefore provides additional data
for training recommendation systems without the increases in cognitive load
that requesting explicit feedback would incur.
We perform a user study with a movie recommendation setup to compare
interfaces that offer shortlist support with those that do not. From the user
studies we conclude: (i) users make better decisions with a shortlist; (ii)
users prefer an interface with shortlist support; and (iii) the additional
implicit feedback from sessions with a shortlist improves the quality of
recommendations by nearly a factor of two.
| Tobias Schnabel, Paul N. Bennett, Susan T. Dumais and Thorsten
Joachims | null | 1510.07545 | null | null |
Efficient Learning by Directed Acyclic Graph For Resource Constrained
Prediction | stat.ML cs.LG | We study the problem of reducing test-time acquisition costs in
classification systems. Our goal is to learn decision rules that adaptively
select sensors for each example as necessary to make a confident prediction. We
model our system as a directed acyclic graph (DAG) where internal nodes
correspond to sensor subsets and decision functions at each node choose whether
to acquire a new sensor or classify using the available measurements. This
problem can be naturally posed as an empirical risk minimization over training
data. Rather than jointly optimizing such a highly coupled and non-convex
problem over all decision nodes, we propose an efficient algorithm motivated by
dynamic programming. We learn node policies in the DAG by reducing the global
objective to a series of cost sensitive learning problems. Our approach is
computationally efficient and has proven guarantees of convergence to the
optimal system for a fixed architecture. In addition, we present an extension
to map other budgeted learning problems with large number of sensors to our DAG
architecture and demonstrate empirical performance exceeding state-of-the-art
algorithms for data composed of both few and many sensors.
| Joseph Wang, Kirill Trapeznikov, Venkatesh Saligrama | null | 1510.07609 | null | null |
Phenotyping of Clinical Time Series with LSTM Recurrent Neural Networks | cs.LG | We present a novel application of LSTM recurrent neural networks to
multilabel classification of diagnoses given variable-length time series of
clinical measurements. Our method outperforms a strong baseline on a variety of
metrics.
| Zachary C. Lipton, David C. Kale, Randall C. Wetzel | null | 1510.07641 | null | null |
Statistically efficient thinning of a Markov chain sampler | stat.CO cs.LG stat.ML | It is common to subsample Markov chain output to reduce the storage burden.
Geyer (1992) shows that discarding $k-1$ out of every $k$ observations will not
improve statistical efficiency, as quantified through variance in a given
computational budget. That observation is often taken to mean that thinning
MCMC output cannot improve statistical efficiency. Here we suppose that it
costs one unit of time to advance a Markov chain and then $\theta>0$ units of
time to compute a sampled quantity of interest. For a thinned process, that
cost $\theta$ is incurred less often, so it can be advanced through more
stages. Here we provide examples to show that thinning will improve statistical
efficiency if $\theta$ is large and the sample autocorrelations decay slowly
enough. If the lag $\ell\ge1$ autocorrelations of a scalar measurement satisfy
$\rho_\ell\ge\rho_{\ell+1}\ge0$, then there is always a $\theta<\infty$ at
which thinning becomes more efficient for averages of that scalar. Many sample
autocorrelation functions resemble first order AR(1) processes with $\rho_\ell
=\rho^{|\ell|}$ for some $-1<\rho<1$. For an AR(1) process it is possible to
compute the most efficient subsampling frequency $k$. The optimal $k$ grows
rapidly as $\rho$ increases towards $1$. The resulting efficiency gain depends
primarily on $\theta$, not $\rho$. Taking $k=1$ (no thinning) is optimal when
$\rho\le0$. For $\rho>0$ it is optimal if and only if $\theta \le
(1-\rho)^2/(2\rho)$. This efficiency gain never exceeds $1+\theta$. This paper
also gives efficiency bounds for autocorrelations bounded between those of two
AR(1) processes.
| Art B. Owen | null | 1510.07727 | null | null |
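The closed-form conditions quoted above can be checked numerically. The sketch below uses the standard large-sample variance-inflation factor (1 + rho^k)/(1 - rho^k) for an AR(1) autocorrelation together with the abstract's cost model (one time unit per chain advance, theta per evaluated draw) to pick the most efficient thinning factor k; the variance approximation and the example values are assumptions of the sketch.

```python
import numpy as np

def asymptotic_variance_factor(rho, k):
    # Large-n variance inflation of a mean estimated from every k-th draw of an
    # AR(1) chain with lag-1 autocorrelation rho: (1 + rho^k) / (1 - rho^k).
    r = rho ** k
    return (1.0 + r) / (1.0 - r)

def best_thinning(rho, theta, k_max=10_000):
    # Cost model from the abstract: 1 time unit to advance the chain,
    # theta units to evaluate the quantity of interest on a kept draw.
    # Efficiency of thinning factor k is 1 / [(k + theta) * inflation(k)].
    ks = np.arange(1, k_max + 1)
    cost = ks + theta
    var = np.array([asymptotic_variance_factor(rho, k) for k in ks])
    return int(ks[np.argmin(cost * var)])

if __name__ == "__main__":
    rho, theta = 0.99, 10.0
    print(f"rho={rho}, theta={theta}: keep every {best_thinning(rho, theta)}-th draw")
    # Sanity check against the abstract: k = 1 (no thinning) is optimal
    # if and only if theta <= (1 - rho)^2 / (2 * rho) for rho > 0.
    print("no-thinning threshold on theta:", (1 - rho) ** 2 / (2 * rho))
```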
The Wilson Machine for Image Modeling | stat.ML cond-mat.stat-mech cs.CV cs.LG | Learning the distribution of natural images is one of the hardest and most
important problems in machine learning. The problem remains open, because the
enormous complexity of the structures in natural images spans all length
scales. We break down the complexity of the problem and show that the hierarchy
of structures in natural images fuels a new class of learning algorithms based
on the theory of critical phenomena and stochastic processes. We approach this
problem from the perspective of the theory of critical phenomena, which was
developed in condensed matter physics to address problems with infinite
length-scale fluctuations, and build a framework to integrate the criticality
of natural images into a learning algorithm. The problem is broken down by
mapping images into a hierarchy of binary images, called bitplanes. In this
representation, the top bitplane is critical, having fluctuations in structures
over a vast range of scales. The bitplanes below go through a gradual
stochastic heating process to disorder. We turn this representation into a
directed probabilistic graphical model, transforming the learning problem into
the unsupervised learning of the distribution of the critical bitplane and the
supervised learning of the conditional distributions for the remaining
bitplanes. We learnt the conditional distributions by logistic regression in a
convolutional architecture. Conditioned on the critical binary image, this
simple architecture can generate large, natural-looking images, with many
shades of gray, without the use of hidden units, unprecedented in the studies
of natural images. The framework presented here is a major step in bringing
criticality and stochastic processes to machine learning and in studying
natural image statistics.
| Saeed Saremi, Terrence J. Sejnowski | null | 1510.07740 | null | null |
Exclusive Sparsity Norm Minimization with Random Groups via Cone
Projection | stat.ML cs.LG | Many practical applications such as gene expression analysis, multi-task
learning, image recognition, signal processing, and medical data analysis
pursue a sparse solution for the feature selection purpose and particularly
favor the nonzeros \emph{evenly} distributed in different groups. The exclusive
sparsity norm has been widely used to serve this purpose. However, it still
lacks systematic studies for exclusive sparsity norm optimization. This paper
offers two main contributions from the optimization perspective: 1) We provide
several efficient algorithms to solve exclusive sparsity norm minimization with
either smooth loss or hinge loss (non-smooth loss). All algorithms achieve the
optimal convergence rate $O(1/k^2)$ ($k$ is the iteration number). To the best
of our knowledge, this is the first time such a convergence rate is guaranteed for
the general exclusive sparsity norm minimization; 2) When the group information
is unavailable to define the exclusive sparsity norm, we propose to use the
random grouping scheme to construct groups and prove that if the number of
groups is appropriately chosen, the nonzeros (true features) would be grouped
in the ideal way with high probability. Empirical studies validate the
efficiency of proposed algorithms, and the effectiveness of random grouping
scheme on the proposed exclusive SVM formulation.
| Yijun Huang and Ji Liu | null | 1510.07925 | null | null |
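To make the random grouping scheme above concrete, the short sketch below assigns features to groups uniformly at random and evaluates the exclusive sparsity regularizer, taken here in its common sum-of-squared-group-l1-norms form; that form and the toy values are assumptions, not necessarily the paper's exact formulation.

```python
import numpy as np

def random_groups(n_features, n_groups, seed=0):
    # Random grouping scheme: each feature index is assigned to one of
    # n_groups groups uniformly at random.
    rng = np.random.default_rng(seed)
    return rng.integers(0, n_groups, size=n_features)

def exclusive_sparsity_norm(w, groups):
    # Exclusive sparsity regularizer: sum over groups of the squared
    # l1 norm of the within-group coefficients.
    return sum(np.abs(w[groups == g]).sum() ** 2 for g in np.unique(groups))

w = np.array([1.0, -2.0, 0.0, 0.5, 3.0, 0.0])
groups = random_groups(len(w), n_groups=3)
print(groups, exclusive_sparsity_norm(w, groups))
```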
Online Learning with Gaussian Payoffs and Side Observations | stat.ML cs.LG | We consider a sequential learning problem with Gaussian payoffs and side
information: after selecting an action $i$, the learner receives information
about the payoff of every action $j$ in the form of Gaussian observations whose
mean is the same as the mean payoff, but the variance depends on the pair
$(i,j)$ (and may be infinite). The setup allows a more refined information
transfer from one action to another than previous partial monitoring setups,
including the recently introduced graph-structured feedback case. For the first
time in the literature, we provide non-asymptotic problem-dependent lower
bounds on the regret of any algorithm, which recover existing asymptotic
problem-dependent lower bounds and finite-time minimax lower bounds available
in the literature. We also provide algorithms that achieve the
problem-dependent lower bound (up to some universal constant factor) or the
minimax lower bounds (up to logarithmic factors).
| Yifan Wu, Andr\'as Gy\"orgy, Csaba Szepesv\'ari | null | 1510.08108 | null | null |
Operator-valued Kernels for Learning from Functional Response Data | cs.LG stat.ML | In this paper we consider the problems of supervised classification and
regression in the case where attributes and labels are functions: a data is
represented by a set of functions, and the label is also a function. We focus
on the use of reproducing kernel Hilbert space theory to learn from such
functional data. Basic concepts and properties of kernel-based learning are
extended to include the estimation of function-valued functions. In this
setting, the representer theorem is restated, a set of rigorously defined
infinite-dimensional operator-valued kernels that can be valuably applied when
the data are functions is described, and a learning algorithm for nonlinear
functional data analysis is introduced. The methodology is illustrated through
speech and audio signal processing experiments.
| Hachem Kadri (LIF), Emmanuel Duflos (CRIStAL), Philippe Preux
(CRIStAL, SEQUEL), St\'ephane Canu (LITIS), Alain Rakotomamonjy (LITIS),
Julien Audiffren (CMLA) | null | 1510.08231 | null | null |
Canonical Divergence Analysis | stat.ML cs.LG | We aim to analyze the relation between two random vectors that may
potentially have both different number of attributes as well as realizations,
and which may even not have a joint distribution. This problem arises in many
practical domains, including biology and architecture. Existing techniques
assume the vectors to have the same domain or to be jointly distributed, and
hence are not applicable. To address this, we propose Canonical Divergence
Analysis (CDA). We introduce three instantiations, each of which permits
practical implementation. Extensive empirical evaluation shows the potential of
our method.
| Hoang-Vu Nguyen and Jilles Vreeken | null | 1510.08370 | null | null |
Flexibly Mining Better Subgroups | stat.ML cs.LG | In subgroup discovery, also known as supervised pattern mining, discovering
high quality one-dimensional subgroups and refinements of these is a crucial
task. For nominal attributes, this is relatively straightforward, as we can
consider individual attribute values as binary features. For numerical
attributes, the task is more challenging as individual numeric values are not
reliable statistics. Instead, we can consider combinations of adjacent values,
i.e. bins. Existing binning strategies, however, are not tailored for subgroup
discovery. That is, they do not directly optimize for the quality of subgroups,
therewith potentially degrading the mining result.
To address this issue, we propose FLEXI. In short, with FLEXI we propose to
use optimal binning to find high quality binary features for both numeric and
ordinal attributes. We instantiate FLEXI with various quality measures and show
how to achieve efficiency accordingly. Experiments on both synthetic and
real-world data sets show that FLEXI outperforms state of the art with up to 25
times improvement in subgroup quality.
| Hoang-Vu Nguyen and Jilles Vreeken | null | 1510.08382 | null | null |
Linear-time Detection of Non-linear Changes in Massively High
Dimensional Time Series | stat.ML cs.LG | Change detection in multivariate time series has applications in many
domains, including health care and network monitoring. A common approach to
detect changes is to compare the divergence between the distributions of a
reference window and a test window. When the number of dimensions is very
large, however, the naive approach has both quality and efficiency issues: to
ensure robustness the window size needs to be large, which not only leads to
missed alarms but also increases runtime.
To this end, we propose LIGHT, a linear-time algorithm for robustly detecting
non-linear changes in massively high dimensional time series. Importantly,
LIGHT provides high flexibility in choosing the window size, allowing the
domain expert to fit the level of detail required. To do so, we 1) perform
scalable PCA to reduce dimensionality, 2) perform scalable factorization of the
joint distribution, and 3) scalably compute divergences between these lower
dimensional distributions. Extensive empirical evaluation on both synthetic and
real-world data show that LIGHT outperforms state of the art with up to 100%
improvement in both quality and efficiency.
| Hoang-Vu Nguyen and Jilles Vreeken | null | 1510.08385 | null | null |
Universal Dependency Analysis | stat.ML cs.LG | Most data is multi-dimensional. Discovering whether any subset of dimensions,
or subspaces, of such data is significantly correlated is a core task in data
mining. To do so, we require a measure that quantifies how correlated a
subspace is. For practical use, such a measure should be universal in the sense
that it captures correlation in subspaces of any dimensionality and allows to
meaningfully compare correlation scores across different subspaces, regardless
how many dimensions they have and what specific statistical properties their
dimensions possess. Further, it would be nice if the measure can
non-parametrically and efficiently capture both linear and non-linear
correlations.
In this paper, we propose UDS, a multivariate correlation measure that
fulfills all of these desiderata. In short, we define UDS based on cumulative
entropy and propose a principled normalization scheme to bring its scores
across different subspaces to the same domain, enabling universal correlation
assessment. UDS is purely non-parametric as we make no assumption on data
distributions nor types of correlation. To compute it on empirical data, we
introduce an efficient and non-parametric method. Extensive experiments show
that UDS outperforms state of the art.
| Hoang-Vu Nguyen and Jilles Vreeken | null | 1510.08389 | null | null |
Learning with $\ell^{0}$-Graph: $\ell^{0}$-Induced Sparse Subspace
Clustering | cs.LG cs.CV | Sparse subspace clustering methods, such as Sparse Subspace Clustering (SSC)
\cite{ElhamifarV13} and $\ell^{1}$-graph \cite{YanW09,ChengYYFH10}, are
effective in partitioning the data that lie in a union of subspaces. Most of
those methods use $\ell^{1}$-norm or $\ell^{2}$-norm with thresholding to
impose the sparsity of the constructed sparse similarity graph, and certain
assumptions, e.g. independence or disjointness, on the subspaces are required
to obtain the subspace-sparse representation, which is the key to their
success. Such assumptions are not guaranteed to hold in practice and they limit
the application of sparse subspace clustering on subspaces with general
location. In this paper, we propose a new sparse subspace clustering method
named $\ell^{0}$-graph. In contrast to the required assumptions on subspaces
for most existing sparse subspace clustering methods, it is proved that
subspace-sparse representation can be obtained by $\ell^{0}$-graph for
arbitrary distinct underlying subspaces almost surely under the mild i.i.d.
assumption on the data generation. We develop a proximal method to obtain the
sub-optimal solution to the optimization problem of $\ell^{0}$-graph with
proved guarantee of convergence. Moreover, we propose a regularized
$\ell^{0}$-graph that encourages nearby data to have similar neighbors so that
the similarity graph is more aligned within each cluster and the graph
connectivity issue is alleviated. Extensive experimental results on various
data sets demonstrate the superiority of $\ell^{0}$-graph compared to other
competing clustering methods, as well as the effectiveness of regularized
$\ell^{0}$-graph.
| Yingzhen Yang, Jiashi Feng, Jianchao Yang, Thomas S. Huang | null | 1510.08520 | null | null |
The Singular Value Decomposition, Applications and Beyond | cs.LG | The singular value decomposition (SVD) is not only a classical theory in
matrix computation and analysis, but also is a powerful tool in machine
learning and modern data analysis. In this tutorial we first study the basic
notion of SVD and then show the central role of SVD in matrices. Using
majorization theory, we consider variational principles of singular values and
eigenvalues. Built on SVD and a theory of symmetric gauge functions, we discuss
unitarily invariant norms, which are then used to formulate general results for
matrix low rank approximation. We study the subdifferentials of unitarily
invariant norms. These results would be potentially useful in many machine
learning problems such as matrix completion and matrix data classification.
Finally, we discuss matrix low rank approximation and its recent developments
such as randomized SVD, approximate matrix multiplication, CUR decomposition,
and Nystrom approximation. Randomized algorithms are important approaches to
large scale SVD as well as fast matrix computations.
| Zhihua Zhang | null | 1510.08532 | null | null |
Attention with Intention for a Neural Network Conversation Model | cs.NE cs.AI cs.HC cs.LG | In a conversation or a dialogue process, attention and intention play
intrinsic roles. This paper proposes a neural network based approach that
models the attention and intention processes. It essentially consists of three
recurrent networks. The encoder network is a word-level model representing
source side sentences. The intention network is a recurrent network that models
the dynamics of the intention process. The decoder network is a recurrent
network that produces responses to the input from the source side. It is a language
model that is dependent on the intention and has an attention mechanism to
attend to particular source side words, when predicting a symbol in the
response. The model is trained end-to-end without labeling data. Experiments
show that this model generates natural responses to user inputs.
| Kaisheng Yao and Geoffrey Zweig and Baolin Peng | null | 1510.08565 | null | null |
WarpLDA: a Cache Efficient O(1) Algorithm for Latent Dirichlet
Allocation | stat.ML cs.DC cs.IR cs.LG | Developing efficient and scalable algorithms for Latent Dirichlet Allocation
(LDA) is of wide interest for many applications. Previous work has developed an
O(1) Metropolis-Hastings sampling method for each token. However, the
performance is far from being optimal due to random accesses to the parameter
matrices and frequent cache misses.
In this paper, we first carefully analyze the memory access efficiency of
existing algorithms for LDA by the scope of random access, which is the size of
the memory region in which random accesses fall, within a short period of time.
We then develop WarpLDA, an LDA sampler which achieves both the best O(1) time
complexity per token and the best O(K) scope of random access. Our empirical
results in a wide range of testing conditions demonstrate that WarpLDA is
consistently 5-15x faster than the state-of-the-art Metropolis-Hastings based
LightLDA, and is comparable or faster than the sparsity aware F+LDA. With
WarpLDA, users can learn up to one million topics from hundreds of millions of
documents in a few hours, at an unprecedented throughput of 11G tokens per
second.
| Jianfei Chen, Kaiwei Li, Jun Zhu, Wenguang Chen | null | 1510.08628 | null | null |
RATM: Recurrent Attentive Tracking Model | cs.LG | We present an attention-based modular neural framework for computer vision.
The framework uses a soft attention mechanism allowing models to be trained
with gradient descent. It consists of three modules: a recurrent attention
module controlling where to look in an image or video frame, a
feature-extraction module providing a representation of what is seen, and an
objective module formalizing why the model learns its attentive behavior. The
attention module allows the model to focus computation on task-related
information in the input. We apply the framework to several object tracking
tasks and explore various design choices. We experiment with three data sets,
bouncing ball, moving digits and the real-world KTH data set. The proposed
Recurrent Attentive Tracking Model performs well on all three tasks and can
generalize to related but previously unseen sequences from a challenging
tracking data set.
| Samira Ebrahimi Kahou, Vincent Michalski, Roland Memisevic | null | 1510.08660 | null | null |
Covariance-Controlled Adaptive Langevin Thermostat for Large-Scale
Bayesian Sampling | stat.ML cs.LG | Monte Carlo sampling for Bayesian posterior inference is a common approach
used in machine learning. The Markov Chain Monte Carlo procedures that are used
are often discrete-time analogues of associated stochastic differential
equations (SDEs). These SDEs are guaranteed to leave invariant the required
posterior distribution. An area of current research addresses the computational
benefits of stochastic gradient methods in this setting. Existing techniques
rely on estimating the variance or covariance of the subsampling error, and
typically assume constant variance. In this article, we propose a
covariance-controlled adaptive Langevin thermostat that can effectively
dissipate parameter-dependent noise while maintaining a desired target
distribution. The proposed method achieves a substantial speedup over popular
alternative schemes for large-scale machine learning applications.
| Xiaocheng Shang, Zhanxing Zhu, Benedict Leimkuhler, Amos J. Storkey | null | 1510.08692 | null | null |
How good is good enough? Re-evaluating the bar for energy disaggregation | cs.LG | Since the early 1980s, the research community has developed ever more
sophisticated algorithms for the problem of energy disaggregation, but despite
decades of research, there is still a dearth of applications with demonstrated
value. In this work, we explore a question that is highly pertinent to this
research community: how good does energy disaggregation need to be in order to
infer characteristics of a household? We present novel techniques that use
unsupervised energy disaggregation to predict both household occupancy and
static properties of the household such as size of the home and number of
occupants. Results show that basic disaggregation approaches perform up to 30%
better at occupancy estimation than using aggregate power data alone, and are
up to 10% better at estimating static household characteristics. These results
show that even rudimentary energy disaggregation techniques are sufficient for
improved inference of household characteristics. To conclude, we re-evaluate
the bar set by the community for energy disaggregation accuracy and try to
answer the question "how good is good enough?"
| Nipun Batra and Rishi Baijal and Amarjeet Singh and Kamin Whitehouse | null | 1510.08713 | null | null |
Spiking Deep Networks with LIF Neurons | cs.LG cs.NE | We train spiking deep networks using leaky integrate-and-fire (LIF) neurons,
and achieve state-of-the-art results for spiking networks on the CIFAR-10 and
MNIST datasets. This demonstrates that biologically-plausible spiking LIF
neurons can be integrated into deep networks and can perform as well as other
spiking models (e.g. integrate-and-fire). We achieved this result by softening
the LIF response function, such that its derivative remains bounded, and by
training the network with noise to provide robustness against the variability
introduced by spikes. Our method is general and could be applied to other
neuron types, including those used on modern neuromorphic hardware. Our work
brings more biological realism into modern image classification models, with
the hope that these models can inform how the brain performs this difficult
task. It also provides new methods for training deep networks to run on
neuromorphic hardware, with the aim of fast, power-efficient image
classification for robotics applications.
| Eric Hunsberger and Chris Eliasmith | null | 1510.08829 | null | null |
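A minimal sketch of the softened LIF response function mentioned above: the hard rectification max(j - threshold, 0) inside the LIF rate curve is replaced by a softplus of scale gamma so that the derivative stays bounded near threshold, which is what makes the unit trainable by backpropagation. The specific smoothing and the parameter values are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def soft_lif_rate(j, tau_rc=0.02, tau_ref=0.002, gamma=0.02, threshold=1.0):
    # Smoothed LIF firing-rate curve: softplus replaces max(j - threshold, 0).
    j = np.asarray(j, dtype=float)
    j_soft = gamma * np.logaddexp(0.0, (j - threshold) / gamma)
    return 1.0 / (tau_ref + tau_rc * np.log1p(1.0 / np.maximum(j_soft, 1e-12)))

# Rates (in Hz, under these illustrative time constants) for a few input currents.
print(soft_lif_rate(np.array([0.5, 1.0, 1.5, 2.0])))
```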
Mixed Robust/Average Submodular Partitioning: Fast Algorithms,
Guarantees, and Applications to Parallel Machine Learning and Multi-Label
Image Segmentation | cs.DS cs.DM cs.LG | We study two mixed robust/average-case submodular partitioning problems that
we collectively call Submodular Partitioning. These problems generalize both
purely robust instances of the problem (namely max-min submodular fair
allocation (SFA) and min-max submodular load balancing (SLB)) and also
generalize average-case instances (that is, the submodular welfare problem (SWP)
and submodular multiway partition (SMP)). While the robust versions have been
studied in the theory community, existing work has focused on tight
approximation guarantees, and the resultant algorithms are not, in general,
scalable to very large real-world applications. This is in contrast to the
average case, where most of the algorithms are scalable. In the present paper,
we bridge this gap, by proposing several new algorithms (including those based
on greedy, majorization-minimization, minorization-maximization, and relaxation
algorithms) that not only scale to large sizes but that also achieve
theoretical approximation guarantees close to the state-of-the-art, and in some
cases achieve new tight bounds. We also provide new scalable algorithms that
apply to additive combinations of the robust and average-case extreme
objectives. We show that these problems have many applications in machine
learning (ML). This includes: 1) data partitioning and load balancing for
distributed machine learning algorithms on parallel machines; 2) data clustering; and 3)
multi-label image segmentation with (only) Boolean submodular functions via
pixel partitioning. We empirically demonstrate the efficacy of our algorithms
on real-world problems involving data partitioning for distributed optimization
of standard machine learning objectives (including both convex and deep neural
network objectives), and also on purely unsupervised (i.e., no supervised or
semi-supervised learning, and no interactive segmentation) image segmentation.
| Kai Wei, Rishabh Iyer, Shengjie Wang, Wenruo Bai, Jeff Bilmes | null | 1510.08865 | null | null |
Robust Shift-and-Invert Preconditioning: Faster and More Sample
Efficient Algorithms for Eigenvector Computation | cs.DS cs.LG math.NA math.OC | We provide faster algorithms and improved sample complexities for
approximating the top eigenvector of a matrix.
Offline Setting: Given an $n \times d$ matrix $A$, we show how to compute an
$\epsilon$ approximate top eigenvector in time $\tilde O ( [nnz(A) + \frac{d
\cdot sr(A)}{gap^2}]\cdot \log 1/\epsilon )$ and $\tilde O([\frac{nnz(A)^{3/4}
(d \cdot sr(A))^{1/4}}{\sqrt{gap}}]\cdot \log1/\epsilon )$. Here $sr(A)$ is the
stable rank and $gap$ is the multiplicative eigenvalue gap. By separating the
$gap$ dependence from $nnz(A)$ we improve on the classic power and Lanczos
methods. We also improve prior work using fast subspace embeddings and
stochastic optimization, giving significantly improved dependencies on $sr(A)$
and $\epsilon$. Our second running time improves this further when $nnz(A) \le
\frac{d\cdot sr(A)}{gap^2}$.
Online Setting: Given a distribution $D$ with covariance matrix $\Sigma$ and
a vector $x_0$ which is an $O(gap)$ approximate top eigenvector for $\Sigma$,
we show how to refine to an $\epsilon$ approximation using $\tilde
O(\frac{v(D)}{gap^2} + \frac{v(D)}{gap \cdot \epsilon})$ samples from $D$. Here
$v(D)$ is a natural variance measure. Combining our algorithm with previous
work to initialize $x_0$, we obtain a number of improved sample complexity and
runtime results. For general distributions, we achieve asymptotically optimal
accuracy as a function of sample size as the number of samples grows large.
Our results center around a robust analysis of the classic method of
shift-and-invert preconditioning to reduce eigenvector computation to
approximately solving a sequence of linear systems. We then apply fast SVRG
based approximate system solvers to achieve our claims. We believe our results
suggest the general effectiveness of shift-and-invert based approaches and
imply that further computational gains may be reaped in practice.
| Chi Jin and Sham M. Kakade and Cameron Musco and Praneeth Netrapalli
and Aaron Sidford | null | 1510.08896 | null | null |
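A minimal sketch of the shift-and-invert idea described above: given a shift slightly above the top eigenvalue of a symmetric matrix A, power iteration on (shift*I - A)^{-1} converges in very few iterations because the shifted inverse has a large eigenvalue gap. Here each linear system is solved exactly; the paper's point is that approximate stochastic solvers such as SVRG suffice, and that substitution is not shown in this sketch.

```python
import numpy as np

def shift_invert_top_eigvec(A, shift, n_iter=20, seed=0):
    # Power iteration on (shift * I - A)^{-1}; assumes A symmetric and
    # shift strictly above its largest eigenvalue so the system is definite.
    B = shift * np.eye(A.shape[0]) - A
    x = np.random.default_rng(seed).standard_normal(A.shape[0])
    for _ in range(n_iter):
        x = np.linalg.solve(B, x)      # exact solve; SVRG would go here
        x /= np.linalg.norm(x)
    return x

# Usage on a random symmetric matrix; in practice the shift would come from a
# crude eigenvalue estimate rather than an exact eigendecomposition.
rng = np.random.default_rng(1)
M = rng.standard_normal((100, 100))
A = (M + M.T) / 2
lam_max = np.linalg.eigvalsh(A)[-1]
v = shift_invert_top_eigvec(A, shift=lam_max + 0.05 * max(1.0, abs(lam_max)))
print("Rayleigh quotient vs top eigenvalue:", v @ A @ v, lam_max)
```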
Sample Complexity of Episodic Fixed-Horizon Reinforcement Learning | stat.ML cs.AI cs.LG | Recently, there has been significant progress in understanding reinforcement
learning in discounted infinite-horizon Markov decision processes (MDPs) by
deriving tight sample complexity bounds. However, in many real-world
applications, an interactive learning agent operates for a fixed or bounded
period of time, for example tutoring students for exams or handling customer
service requests. Such scenarios can often be better treated as episodic
fixed-horizon MDPs, for which only looser bounds on the sample complexity
exist. A natural notion of sample complexity in this setting is the number of
episodes required to guarantee a certain performance with high probability (PAC
guarantee). In this paper, we derive an upper PAC bound $\tilde
O(\frac{|\mathcal S|^2 |\mathcal A| H^2}{\epsilon^2} \ln\frac 1 \delta)$ and a
lower PAC bound $\tilde \Omega(\frac{|\mathcal S| |\mathcal A| H^2}{\epsilon^2}
\ln \frac 1 {\delta + c})$ that match up to log-terms and an additional linear
dependency on the number of states $|\mathcal S|$. The lower bound is the first
of its kind for this setting. Our upper bound leverages Bernstein's inequality
to improve on previous bounds for episodic finite-horizon MDPs which have a
time-horizon dependency of at least $H^3$.
| Christoph Dann, Emma Brunskill | null | 1510.08906 | null | null |
Testing Visual Attention in Dynamic Environments | cs.LG | We investigate attention as the active pursuit of useful information. This
contrasts with attention as a mechanism for the attenuation of irrelevant
information. We also consider the role of short-term memory, whose use is
critical to any model incapable of simultaneously perceiving all information on
which its output depends. We present several simple synthetic tasks, which
become considerably more interesting when we impose strong constraints on how a
model can interact with its input, and on how long it can take to produce its
output. We develop a model with a different structure from those seen in
previous work, and we train it using stochastic variational inference with a
learned proposal distribution.
| Philip Bachman and David Krueger and Doina Precup | null | 1510.08949 | null | null |
Principal Differences Analysis: Interpretable Characterization of
Differences between Distributions | stat.ML cs.LG stat.ME | We introduce principal differences analysis (PDA) for analyzing differences
between high-dimensional distributions. The method operates by finding the
projection that maximizes the Wasserstein divergence between the resulting
univariate populations. Relying on the Cramer-Wold device, it requires no
assumptions about the form of the underlying distributions, nor the nature of
their inter-class differences. A sparse variant of the method is introduced to
identify features responsible for the differences. We provide algorithms for
both the original minimax formulation as well as its semidefinite relaxation.
In addition to deriving some convergence results, we illustrate how the
approach may be applied to identify differences between cell populations in the
somatosensory cortex and hippocampus as manifested by single cell RNA-seq. Our
broader framework extends beyond the specific choice of Wasserstein divergence.
| Jonas Mueller and Tommi Jaakkola | null | 1510.08956 | null | null |
Robust Subspace Clustering via Tighter Rank Approximation | cs.CV cs.AI cs.LG stat.ML | Matrix rank minimization problem is in general NP-hard. The nuclear norm is
used to substitute the rank function in many recent studies. Nevertheless, the
nuclear norm approximation adds all singular values together and the
approximation error may depend heavily on the magnitudes of singular values.
This might restrict its capability in dealing with many practical problems. In
this paper, an arctangent function is used as a tighter approximation to the
rank function. We use it on the challenging subspace clustering problem. For
this nonconvex minimization problem, we develop an effective optimization
procedure based on a type of augmented Lagrange multipliers (ALM) method.
Extensive experiments on face clustering and motion segmentation show that the
proposed method is effective for rank approximation.
| Zhao Kang, Chong Peng, Qiang Cheng | 10.1145/2806416.2806506 | 1510.08971 | null | null |
CONQUER: Confusion Queried Online Bandit Learning | cs.LG stat.ML | We present a new recommendation setting for picking out two items from a
given set to be highlighted to a user, based on contextual input. These two
items are presented to a user who chooses one of them, possibly stochastically,
with a bias that favours the item with the higher value. We propose a
framework of second-order algorithms; some of its members use relative
upper-confidence bounds to trade off exploration and exploitation, while others
explore via sampling. We analyze one algorithm in this framework in an
adversarial setting with only mild assumptions on the data, and prove a regret
bound of $O(Q_T + \sqrt{TQ_T\log T} + \sqrt{T}\log T)$, where $T$ is the number
of rounds and $Q_T$ is the cumulative approximation error of item values using
a linear model. Experiments with product reviews from 33 domains show the
advantage of our methods over algorithms designed for related settings, and
that UCB-based algorithms are inferior to greedy or sampling-based algorithms.
| Daniel Barsky and Koby Crammer | null | 1510.08974 | null | null |
Highway Long Short-Term Memory RNNs for Distant Speech Recognition | cs.NE cs.AI cs.CL cs.LG eess.AS | In this paper, we extend the deep long short-term memory (DLSTM) recurrent
neural networks by introducing gated direct connections between memory cells in
adjacent layers. These direct links, called highway connections, enable
unimpeded information flow across different layers and thus alleviate the
gradient vanishing problem when building deeper LSTMs. We further introduce the
latency-controlled bidirectional LSTMs (BLSTMs) which can exploit the whole
history while keeping the latency under control. Efficient algorithms are
proposed to train these novel networks using both frame and sequence
discriminative criteria. Experiments on the AMI distant speech recognition
(DSR) task indicate that we can train deeper LSTMs and achieve better
improvement from sequence training with highway LSTMs (HLSTMs). Our novel model
obtains $43.9/47.7\%$ WER on AMI (SDM) dev and eval sets, outperforming all
previous works. It beats the strong DNN and DLSTM baselines with $15.7\%$ and
$5.3\%$ relative improvement respectively.
| Yu Zhang and Guoguo Chen and Dong Yu and Kaisheng Yao and Sanjeev
Khudanpur and James Glass | null | 1510.08983 | null | null |
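A rough sketch of the gated direct connections described above: in one common highway-LSTM formulation, the cell of layer l+1 receives an extra carry-gated copy of the cell of layer l at the same time step, c = f*c_prev + i*g + d*c_below. That parameterization, the omission of biases, and the random weights below are illustrative assumptions rather than the paper's exact recipe.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, c_below=None, W_carry=None):
    # One LSTM step.  W maps [x, h_prev] to stacked (i, f, o, g) pre-activations.
    # If c_below (the lower layer's cell at the same time step) is given, a
    # highway carry gate d adds a gated copy of it: c = f*c_prev + i*g + d*c_below.
    n = h_prev.shape[0]
    z = W @ np.concatenate([x, h_prev])
    i, f, o = (sigmoid(z[k * n:(k + 1) * n]) for k in range(3))
    g = np.tanh(z[3 * n:4 * n])
    c = f * c_prev + i * g
    if c_below is not None:
        d = sigmoid(W_carry @ np.concatenate([x, h_prev, c_below]))
        c = c + d * c_below
    return np.tanh(c) * o, c

# Two stacked layers at a single time step (hidden size 8, random weights).
rng = np.random.default_rng(0)
n, dx = 8, 5
W1 = rng.standard_normal((4 * n, dx + n)) * 0.1
W2 = rng.standard_normal((4 * n, n + n)) * 0.1
Wc = rng.standard_normal((n, 3 * n)) * 0.1
x = rng.standard_normal(dx)
h1, c1 = lstm_step(x, np.zeros(n), np.zeros(n), W1)
h2, c2 = lstm_step(h1, np.zeros(n), np.zeros(n), W2, c_below=c1, W_carry=Wc)
print(h2.shape, c2.shape)
```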
Prediction-Adaptation-Correction Recurrent Neural Networks for
Low-Resource Language Speech Recognition | cs.CL cs.LG cs.NE eess.AS | In this paper, we investigate the use of prediction-adaptation-correction
recurrent neural networks (PAC-RNNs) for low-resource speech recognition. A
PAC-RNN is comprised of a pair of neural networks in which a {\it correction}
network uses auxiliary information given by a {\it prediction} network to help
estimate the state probability. The information from the correction network is
also used by the prediction network in a recurrent loop. Our model outperforms
other state-of-the-art neural networks (DNNs, LSTMs) on IARPA-Babel tasks.
Moreover, transfer learning from a language that is similar to the target
language can help improve performance further.
| Yu Zhang, Ekapol Chuangsuwanich, James Glass, Dong Yu | null | 1510.08985 | null | null |
Subsampling in Smoothed Range Spaces | cs.CG cs.LG | We consider smoothed versions of geometric range spaces, so an element of the
ground set (e.g. a point) can be contained in a range with a non-binary value
in $[0,1]$. Similar notions have been considered for kernels; we extend them to
more general types of ranges. We then consider approximations of these range
spaces through $\varepsilon $-nets and $\varepsilon $-samples (aka
$\varepsilon$-approximations). We characterize when size bounds for
$\varepsilon $-samples on kernels can be extended to these more general
smoothed range spaces. We also describe new generalizations for $\varepsilon
$-nets to these range spaces and show when results from binary range spaces can
carry over to these smoothed ones.
| Jeff M. Phillips and Yan Zheng | null | 1510.09123 | null | null |
Learning Continuous Control Policies by Stochastic Value Gradients | cs.LG cs.NE | We present a unified framework for learning continuous control policies using
backpropagation. It supports stochastic control by treating stochasticity in
the Bellman equation as a deterministic function of exogenous noise. The
product is a spectrum of general policy gradient algorithms that range from
model-free methods with value functions to model-based methods without value
functions. We use learned models but only require observations from the
environment instead of observations from model-predicted trajectories,
minimizing the impact of compounded model errors. We apply these algorithms
first to a toy stochastic control problem and then to several physics-based
control problems in simulation. One of these variants, SVG(1), shows the
effectiveness of learning models, value functions, and policies simultaneously
in continuous domains.
| Nicolas Heess, Greg Wayne, David Silver, Timothy Lillicrap, Yuval
Tassa, and Tom Erez | null | 1510.09142 | null | null |
Streaming, Distributed Variational Inference for Bayesian Nonparametrics | cs.LG stat.ML | This paper presents a methodology for creating streaming, distributed
inference algorithms for Bayesian nonparametric (BNP) models. In the proposed
framework, processing nodes receive a sequence of data minibatches, compute a
variational posterior for each, and make asynchronous streaming updates to a
central model. In contrast to previous algorithms, the proposed framework is
truly streaming, distributed, asynchronous, learning-rate-free, and
truncation-free. The key challenge in developing the framework, arising from
the fact that BNP models do not impose an inherent ordering on their
components, is finding the correspondence between minibatch and central BNP
posterior components before performing each update. To address this, the paper
develops a combinatorial optimization problem over component correspondences,
and provides an efficient solution technique. The paper concludes with an
application of the methodology to the DP mixture model, with experimental
results demonstrating its practical scalability and performance.
| Trevor Campbell, Julian Straub, John W. Fisher III, Jonathan P. How | null | 1510.09161 | null | null |
Generating Text with Deep Reinforcement Learning | cs.CL cs.LG cs.NE | We introduce a novel schema for sequence to sequence learning with a Deep
Q-Network (DQN), which decodes the output sequence iteratively. The aim here is
to enable the decoder to first tackle easier portions of the sequences, and
then turn to cope with difficult parts. Specifically, in each iteration, an
encoder-decoder Long Short-Term Memory (LSTM) network is employed to, from the
input sequence, automatically create features to represent the internal states
of and formulate a list of potential actions for the DQN. Take rephrasing a
natural sentence as an example. This list can contain ranked potential words.
Next, the DQN learns to make a decision on which action (e.g., word) will be
selected from the list to modify the current decoded sequence. The newly
modified output sequence is subsequently used as the input to the DQN for the
next decoding iteration. In each iteration, we also bias the reinforcement
learning's attention to explore sequence portions which are previously
difficult to be decoded. For evaluation, the proposed strategy was trained to
decode ten thousand natural sentences. Our experiments indicate that, when
compared to a left-to-right greedy beam search LSTM decoder, the proposed
method performed competitively well when decoding sentences from the training
set, but significantly outperformed the baseline when decoding unseen
sentences, in terms of BLEU score obtained.
| Hongyu Guo | null | 1510.09202 | null | null |
Learning Causal Graphs with Small Interventions | cs.AI cs.IT cs.LG math.IT stat.ML | We consider the problem of learning causal networks with interventions, when
each intervention is limited in size under Pearl's Structural Equation Model
with independent errors (SEM-IE). The objective is to minimize the number of
experiments to discover the causal directions of all the edges in a causal
graph. Previous work has focused on the use of separating systems for complete
graphs for this task. We prove that any deterministic adaptive algorithm needs
to be a separating system in order to learn complete graphs in the worst case.
In addition, we present a novel separating system construction, whose size is
close to optimal and is arguably simpler than previous work in combinatorics.
We also develop a novel information theoretic lower bound on the number of
interventions that applies in full generality, including for randomized
adaptive learning algorithms.
For general chordal graphs, we derive worst case lower bounds on the number
of interventions. Building on observations about induced trees, we give a new
deterministic adaptive algorithm to learn directions on any chordal skeleton
completely. In the worst case, our achievable scheme is an
$\alpha$-approximation algorithm where $\alpha$ is the independence number of
the graph. We also show that there exist graph classes for which the sufficient
number of experiments is close to the lower bound. At the other extreme, there
are graph classes for which the required number of experiments is
multiplicatively $\alpha$ away from our lower bound.
In simulations, our algorithm almost always performs very close to the lower
bound, while the approach based on separating systems for complete graphs is
significantly worse for random chordal graphs.
| Karthikeyan Shanmugam, Murat Kocaoglu, Alexandros G. Dimakis, Sriram
Vishwanath | null | 1511.00041 | null | null |
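For context on the separating systems mentioned here, the snippet below builds the classical label-based construction: each node gets a distinct base-k string, and every (digit position, symbol) pair defines one intervention set, so any two nodes are separated at some digit where their labels differ. This is the textbook construction, not the smaller one the paper proposes, and the set sizes come out near n/k rather than an arbitrary bound.

```python
# Classical separating-system construction via base-k node labels (illustration only).
def separating_system(n, k):
    length = 1
    while k ** length < n:                 # number of digits needed for n distinct labels
        length += 1
    labels = []
    for v in range(n):
        digits, x = [], v
        for _ in range(length):
            digits.append(x % k)
            x //= k
        labels.append(digits)
    sets = []
    for pos in range(length):
        for sym in range(k):
            s = [v for v in range(n) if labels[v][pos] == sym]
            if 0 < len(s) < n:             # drop trivial (empty or all-node) sets
                sets.append(s)
    return sets

print(separating_system(8, 2))             # 6 sets of size 4; every pair is separated
```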
Learning Adversary Behavior in Security Games: A PAC Model Perspective | cs.AI cs.GT cs.LG | Recent applications of Stackelberg Security Games (SSG), from wildlife crime
to urban crime, have employed machine learning tools to learn and predict
adversary behavior using available data about defender-adversary interactions.
Given these recent developments, this paper commits to an approach of directly
learning the response function of the adversary. Using the PAC model, this
paper lays a firm theoretical foundation for learning in SSGs (e.g.,
theoretically answering questions about the number of samples required to learn
adversary behavior) and provides utility guarantees when the learned adversary
model is used to plan the defender's strategy. The paper also aims to answer
practical questions such as how much more data is needed to improve an
adversary model's accuracy. Additionally, we explain a recently observed
phenomenon that the prediction accuracy of a learned adversary model is not, by itself, enough
to discover the utility-maximizing defender strategy. We provide four main
contributions: (1) a PAC model of learning adversary response functions in
SSGs; (2) PAC-model analysis of the learning of key, existing bounded
rationality models in SSGs; (3) an entirely new approach to adversary modeling
based on a non-parametric class of response functions with PAC-model analysis
and (4) identification of conditions under which computing the best defender
strategy against the learned adversary behavior is indeed the optimal strategy.
Finally, we conduct experiments with real-world data from a national park in
Uganda, showing the benefit of our new adversary modeling approach and
verification of our PAC model predictions.
| Arunesh Sinha, Debarun Kar, Milind Tambe | null | 1511.00043 | null | null |
The Pareto Regret Frontier for Bandits | cs.LG | Given a multi-armed bandit problem it may be desirable to achieve a
smaller-than-usual worst-case regret for some special actions. I show that the
price for such unbalanced worst-case regret guarantees is rather high.
Specifically, if an algorithm enjoys a worst-case regret of B with respect to
some action, then there must exist another action for which the worst-case
regret is at least $\Omega(nK/B)$, where n is the horizon and K the number of
actions. I also give upper bounds in both the stochastic and adversarial
settings showing that this result cannot be improved. For the stochastic case
the Pareto regret frontier is characterised exactly up to constant factors.
| Tor Lattimore | null | 1511.00048 | null | null |
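A quick numeric reading of the stated trade-off, with an illustrative horizon and arm count: pushing the protected action's worst-case regret B down forces a regret of order nK/B onto some other action.

```python
# Toy illustration of the B versus n*K/B trade-off described above (values are arbitrary).
n, K = 10_000, 10                                   # illustrative horizon and arm count
for B in (10, 100, 1_000):
    print(f"B = {B:>5}  ->  some other arm's worst-case regret is Omega(~{n * K / B:,.0f})")
```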
Gaussian Process Random Fields | cs.LG stat.ML | Gaussian processes have been successful in both supervised and unsupervised
machine learning tasks, but their computational complexity has constrained
practical applications. We introduce a new approximation for large-scale
Gaussian processes, the Gaussian Process Random Field (GPRF), in which local
GPs are coupled via pairwise potentials. The GPRF likelihood is a simple,
tractable, and parallelizable approximation to the full GP marginal
likelihood, enabling latent variable modeling and hyperparameter selection on
large datasets. We demonstrate its effectiveness on synthetic spatial data as
well as a real-world application to seismic event location.
| David A. Moore and Stuart J. Russell | null | 1511.00054 | null | null |
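The sketch below is one way to read "local GPs coupled via pairwise potentials": sum local GP log-marginal likelihoods over blocks of data and add a correction term for each neighbouring pair of blocks. The RBF kernel, fixed hyperparameters, and this exact combination of local and pairwise terms are assumptions made for illustration rather than the paper's precise objective.

```python
# Minimal sketch of a GPRF-style objective: local GP log-likelihoods plus pairwise terms.
import numpy as np

def gp_loglik(X, y, lengthscale=1.0, noise=0.1):
    # Log-marginal likelihood of a GP with an RBF kernel and Gaussian observation noise.
    K = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1) / lengthscale**2)
    K += noise**2 * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return -0.5 * y @ alpha - np.log(np.diag(L)).sum() - 0.5 * len(y) * np.log(2 * np.pi)

def gprf_loglik(blocks, edges):
    """blocks: list of (X_i, y_i) local data sets; edges: index pairs of neighbouring blocks."""
    total = sum(gp_loglik(X, y) for X, y in blocks)
    for i, j in edges:
        Xij = np.vstack([blocks[i][0], blocks[j][0]])
        yij = np.concatenate([blocks[i][1], blocks[j][1]])
        total += gp_loglik(Xij, yij) - gp_loglik(*blocks[i]) - gp_loglik(*blocks[j])
    return total
```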
Top-down Tree Long Short-Term Memory Networks | cs.CL cs.LG | Long Short-Term Memory (LSTM) networks, a type of recurrent neural network
with a more complex computational unit, have been successfully applied to a
variety of sequence modeling tasks. In this paper we develop Tree Long
Short-Term Memory (TreeLSTM), a neural network model based on LSTM, which is
designed to predict a tree rather than a linear sequence. TreeLSTM defines the
probability of a sentence by estimating the generation probability of its
dependency tree. At each time step, a node is generated based on the
representation of the generated sub-tree. We further enhance the modeling power
of TreeLSTM by explicitly representing the correlations between left and right
dependents. Application of our model to the MSR sentence completion challenge
achieves results beyond the current state of the art. We also report results on
dependency parsing reranking, achieving competitive performance.
| Xingxing Zhang, Liang Lu, Mirella Lapata | null | 1511.00060 | null | null |
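Purely as a schematic of the factorisation described in this abstract, the snippet below accumulates the log-probability of a dependency tree as a sum, over generated nodes, of the probability of each node's word given a representation of the subtree generated so far. `subtree_repr` and `word_distribution` are hypothetical stand-ins for the TreeLSTM components, and the traversal order is simplified.

```python
# Schematic tree log-probability: sum over nodes of log p(word | generated subtree so far).
import math

def tree_log_prob(root, subtree_repr, word_distribution):
    logp, stack = 0.0, [root]
    while stack:
        node = stack.pop()
        for child in node["children"]:
            h = subtree_repr(node)                        # encode the subtree built so far
            logp += math.log(word_distribution(h)[child["word"]])
            stack.append(child)
    return logp
```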
Faster Stochastic Variational Inference using Proximal-Gradient Methods
with General Divergence Functions | stat.ML cs.LG stat.CO | Several recent works have explored stochastic gradient methods for
variational inference that exploit the geometry of the variational-parameter
space. However, the theoretical properties of these methods are not
well-understood and these methods typically only apply to
conditionally-conjugate models. We present a new stochastic method for
variational inference which exploits the geometry of the variational-parameter
space and also yields simple closed-form updates even for non-conjugate models.
We also give a convergence-rate analysis of our method and many other previous
methods which exploit the geometry of the space. Our analysis generalizes
existing convergence results for stochastic mirror-descent on non-convex
objectives by using a more general class of divergence functions. Beyond giving
a theoretical justification for a variety of recent methods, our experiments
show that new algorithms derived in this framework lead to state-of-the-art
results on a variety of problems. Further, due to its generality, we expect
that our theoretical analysis could also apply to other applications.
| Mohammad Emtiyaz Khan, Reza Babanezhad, Wu Lin, Mark Schmidt, Masashi
Sugiyama | null | 1511.00146 | null | null |
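To make the "divergence-based update" idea concrete, here is a generic stochastic mirror-descent step with a KL divergence on the probability simplex (exponentiated gradient). It only illustrates the kind of geometry-aware step the analysis above covers; it is not the paper's variational-inference update, and the step size is arbitrary.

```python
# Generic KL-proximal (mirror-descent) step on the simplex, i.e. exponentiated gradient.
import numpy as np

def kl_prox_step(lmbda, grad, step):
    # Solves  argmin_x  <grad, x> + (1/step) * KL(x || lmbda)  over the probability simplex.
    x = lmbda * np.exp(-step * grad)
    return x / x.sum()

print(kl_prox_step(np.array([0.25, 0.25, 0.5]), np.array([1.0, 0.0, -1.0]), 0.5))
```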
Preconditioned Data Sparsification for Big Data with Applications to PCA
and K-means | stat.ML cs.LG | We analyze a compression scheme for large data sets that randomly keeps a
small percentage of the components of each data sample. The benefit is that the
output is a sparse matrix and therefore subsequent processing, such as PCA or
K-means, is significantly faster, especially in a distributed-data setting.
Furthermore, the sampling is single-pass and applicable to streaming data. The
sampling mechanism is a variant of previous methods proposed in the literature
combined with a randomized preconditioning to smooth the data. We provide
guarantees for PCA in terms of the covariance matrix, and guarantees for
K-means in terms of the error in the center estimators at a given step. We
present numerical evidence to show both that our bounds are nearly tight and
that our algorithms provide a real benefit when applied to standard test data
sets, as well as providing certain benefits over related sampling approaches.
| Farhad Pourkamali-Anaraki and Stephen Becker | 10.1109/TIT.2017.2672725 | 1511.00152 | null | null |
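A hedged sketch of the sampling scheme described above: apply a cheap random preconditioner to spread energy across coordinates (random sign flips here, standing in for the paper's transform), then keep a small random fraction of each sample's entries with inverse-probability rescaling so the sparse output stays unbiased in expectation. The function name, the 5% keep rate, and the choice of preconditioner are illustrative assumptions.

```python
# Sketch: random sign preconditioning followed by per-sample coordinate subsampling.
import numpy as np

rng = np.random.default_rng(0)

def sparsify(X, keep_frac=0.05):
    n, d = X.shape
    signs = rng.choice([-1.0, 1.0], size=d)       # shared random-sign preconditioner
    Xp = X * signs
    m = max(1, int(keep_frac * d))
    out = np.zeros_like(Xp)
    for i in range(n):
        idx = rng.choice(d, size=m, replace=False)
        out[i, idx] = Xp[i, idx] * (d / m)        # inverse-probability rescaling -> unbiased
    return out, signs
```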
Prediction of Dynamical time Series Using Kernel Based Regression and
Smooth Splines | stat.ML cs.LG | Prediction of dynamical time series with additive noise using support vector
machines or kernel based regression has been proved to be consistent for
certain classes of discrete dynamical systems. Consistency implies that these
methods are effective at computing the expected value of a point at a future
time given the present coordinates. However, the present coordinates themselves
are noisy, and therefore, these methods are not necessarily effective at
removing noise. In this article, we consider denoising and prediction as
separate problems for flows, as opposed to discrete time dynamical systems, and
show that the use of smooth splines is more effective at removing noise.
Combination of smooth splines and kernel based regression yields predictors
that are more accurate on benchmarks typically by a factor of 2 or more. We
prove that kernel based regression in combination with smooth splines converges
to the exact predictor for time series extracted from any compact invariant set
of any sufficiently smooth flow. As a consequence of convergence, one can find
examples where the combination of kernel based regression with smooth splines
is superior by even a factor of $100$. The predictors that we compute operate
on delay coordinate data and not the full state vector, which is typically not
observable.
| Raymundo Navarrete and Divakar Viswanath | null | 1511.00158 | null | null |
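The two-stage recipe described here can be sketched as: (1) smooth the observed series with a spline, (2) form delay coordinates from the smoothed values, (3) fit a kernel regressor mapping each delay vector to the next value. The synthetic sine data, smoothing factor, delay dimension, and kernel hyperparameters below are arbitrary illustrations, not the paper's settings.

```python
# Sketch: spline denoising, delay-coordinate embedding, then kernel ridge regression.
import numpy as np
from scipy.interpolate import UnivariateSpline
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
t = np.linspace(0, 60, 3000)
y = np.sin(t) + 0.1 * rng.standard_normal(len(t))  # noisy stand-in for a measured flow

y_s = UnivariateSpline(t, y, s=len(t) * 0.01)(t)   # step 1: spline denoising

d = 3                                              # step 2: delay coordinates of dimension d
X = np.column_stack([y_s[i:len(y_s) - d + i] for i in range(d)])
target = y_s[d:]                                   # next smoothed value for each window

model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=1.0).fit(X, target)  # step 3
print("one-step prediction for the last window:", model.predict(X[-1:]))
```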
Large-scale probabilistic predictors with and without guarantees of
validity | cs.LG | This paper studies theoretically and empirically a method of turning
machine-learning algorithms into probabilistic predictors that automatically
enjoys a property of validity (perfect calibration) and is computationally
efficient. The price to pay for perfect calibration is that these probabilistic
predictors produce imprecise (in practice, almost precise for large data sets)
probabilities. When these imprecise probabilities are merged into precise
probabilities, the resulting predictors, while losing the theoretical property
of perfect calibration, are consistently more accurate than the existing
methods in empirical studies.
| Vladimir Vovk, Ivan Petej, and Valentina Fedorova | null | 1511.00213 | null | null |
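The predictors discussed here output a short interval [p0, p1] of probabilities that must be merged into a single number before comparison with standard methods; one merging rule associated with this line of work is p = p1 / (1 - p0 + p1), shown below purely as a hedged illustration rather than as the paper's definitive procedure.

```python
# Merge an imprecise probability interval [p0, p1] into one number (illustrative rule).
def merge(p0, p1):
    return p1 / (1.0 - p0 + p1)

print(merge(0.62, 0.66))   # a narrow interval collapses to a nearby point value (~0.63)
```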
Stochastic Top-k ListNet | cs.IR cs.LG | ListNet is a well-known listwise learning to rank model and has gained much
attention in recent years. A particular problem of ListNet, however, is the
high computational complexity of model training, mainly due to the large number
of object permutations involved in computing the gradients. This paper proposes
a stochastic ListNet approach which computes the gradient within a bounded
permutation subset. It significantly reduces the computational complexity of
model training and allows extension to Top-k models, which is impossible with
the conventional implementation based on full-set permutations. Meanwhile, the
new approach utilizes partial ranking information of human labels, which helps
improve model quality. Our experiments demonstrated that the stochastic ListNet
method indeed leads to better ranking performance and speeds up model
training considerably.
| Tianyi Luo, Dong Wang, Rong Liu, Yiqiao Pan | null | 1511.00271 | null | null |
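As a rough illustration of computing a listwise loss over a bounded permutation subset rather than all permutations, the sketch below samples a few permutations, evaluates their Plackett-Luce log-probabilities under the label scores and under the model scores, and forms a cross-entropy over just that subset. The sampling scheme, subset size, and normalisation are assumptions, not the authors' exact Top-k procedure.

```python
# Sketch: listwise loss estimated over a small sampled subset of permutations.
import numpy as np

rng = np.random.default_rng(0)

def pl_log_prob(scores, perm):
    # Log Plackett-Luce probability of the ordering `perm` under `scores`.
    s = scores[perm]
    return sum(s[i] - np.log(np.exp(s[i:]).sum()) for i in range(len(s)))

def stochastic_listnet_loss(model_scores, label_scores, n_perms=5):
    n = len(model_scores)
    perms = [rng.permutation(n) for _ in range(n_perms)]
    p_label = np.array([pl_log_prob(label_scores, p) for p in perms])
    p_model = np.array([pl_log_prob(model_scores, p) for p in perms])
    w = np.exp(p_label - p_label.max()); w /= w.sum()   # normalise within the subset
    return -(w * p_model).sum()

print(stochastic_listnet_loss(np.array([0.3, 2.0, -1.0]), np.array([1.0, 2.0, 0.0])))
```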