title | categories | abstract | authors | doi | id | year | venue
---|---|---|---|---|---|---|---|
Learning dynamic Boltzmann machines with spike-timing dependent
plasticity | cs.NE cs.AI cs.LG stat.ML | We propose a particularly structured Boltzmann machine, which we refer to as
a dynamic Boltzmann machine (DyBM), as a stochastic model of a
multi-dimensional time-series. The DyBM can have infinitely many layers of
units but allows exact and efficient inference and learning when its parameters
have a proposed structure. This proposed structure is motivated by postulates
and observations, from biological neural networks, that the synaptic weight is
strengthened or weakened, depending on the timing of spikes (i.e., spike-timing
dependent plasticity or STDP). We show that the learning rule of updating the
parameters of the DyBM in the direction of maximizing the likelihood of a given
time-series can be interpreted as STDP with long-term potentiation and long-term
depression. The learning rule has a guarantee of convergence and can be
performed in a distributed manner (i.e., local in space) with limited memory
(i.e., local in time).
| Takayuki Osogami and Makoto Otsuka | null | 1509.08634 | null | null |
Variational Information Maximisation for Intrinsically Motivated
Reinforcement Learning | stat.ML cs.AI cs.LG | The mutual information is a core statistical quantity that has applications
in all areas of machine learning, whether this is in training of density models
over multiple data modalities, in maximising the efficiency of noisy
transmission channels, or when learning behaviour policies for exploration by
artificial agents. Most learning algorithms that involve optimisation of the
mutual information rely on the Blahut-Arimoto algorithm --- an enumerative
algorithm with exponential complexity that is not suitable for modern machine
learning applications. This paper provides a new approach for scalable
optimisation of the mutual information by merging techniques from variational
inference and deep learning. We develop our approach by focusing on the problem
of intrinsically-motivated learning, where the mutual information forms the
definition of a well-known internal drive known as empowerment. Using a
variational lower bound on the mutual information, combined with convolutional
networks for handling visual input streams, we develop a stochastic
optimisation algorithm that allows for scalable information maximisation and
empowerment-based reasoning directly from pixels to actions.
| Shakir Mohamed and Danilo Jimenez Rezende | null | 1509.08731 | null | null |
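The variational lower bound on the mutual information referred to in the abstract above can be stated compactly. The sketch below gives the standard Barber-Agakov-style bound; the notation (actions A, future states S', source distribution omega, variational decoder q) and the exact form used by the paper are assumptions made here for illustration.

```latex
% For any variational decoder q(a|s'), the mutual information between actions A
% and future states S' admits the lower bound
I(A;S') \;=\; H(A) - H(A\,|\,S')
        \;\ge\; H(A) \;+\; \mathbb{E}_{\,\omega(a)\,p(s'|a)}\bigl[\log q(a\,|\,s')\bigr],
% and empowerment is the capacity of the action-to-future-state channel,
\mathcal{E}(s) \;=\; \max_{\omega(a|s)} \, I(A;S'\,|\,s),
% which the bound makes amenable to stochastic gradient optimisation.
```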
Compression of Deep Neural Networks on the Fly | cs.LG cs.CV cs.NE | Thanks to their state-of-the-art performance, deep neural networks are
increasingly used for object recognition. To achieve these results, they rely on
millions of trainable parameters. However, when targeting embedded
applications, the size of these models becomes problematic. As a consequence,
their use on smartphones or other resource-limited devices is prohibitive. In
this paper we introduce a novel compression method for deep neural networks
that is performed during the learning phase. It consists in adding an extra
regularization term to the cost function of fully-connected layers. We combine
this method with Product Quantization (PQ) of the trained weights for higher
savings in storage consumption. We evaluate our method on two data sets (MNIST
and CIFAR10), on which we achieve significantly larger compression rates than
state-of-the-art methods.
| Guillaume Souli\'e, Vincent Gripon, Ma\"elys Robert | null | 1509.08745 | null | null |
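The abstract above combines a training-time regularizer with Product Quantization (PQ) of the trained weights. A minimal sketch of the PQ step follows; the regularizer itself, the sub-vector length, and the codebook size are not specified in the abstract, so the choices here (k-means via scikit-learn, sub-vectors of length 4, 256 centroids) are illustrative assumptions only.

```python
import numpy as np
from sklearn.cluster import KMeans

def product_quantize(W, sub_len=4, n_centroids=256):
    """Compress a fully-connected weight matrix W (out_dim x in_dim) with PQ.

    Each row is split into sub-vectors of length `sub_len`; each sub-vector
    position gets its own k-means codebook. Returns codebooks and assignment
    indices, from which an approximate W can be rebuilt."""
    out_dim, in_dim = W.shape
    assert in_dim % sub_len == 0
    codebooks, codes = [], []
    for b in range(in_dim // sub_len):
        block = W[:, b * sub_len:(b + 1) * sub_len]            # (out_dim, sub_len)
        km = KMeans(n_clusters=min(n_centroids, out_dim), n_init=4).fit(block)
        codebooks.append(km.cluster_centers_)                   # (k, sub_len)
        codes.append(km.labels_)                                # (out_dim,)
    return codebooks, codes

def dequantize(codebooks, codes):
    """Rebuild the approximate weight matrix from PQ codebooks and codes."""
    return np.hstack([cb[idx] for cb, idx in zip(codebooks, codes)])

# Storage drops from out_dim*in_dim floats to a few small codebooks plus
# one small integer code per sub-vector.
W = np.random.randn(512, 256).astype(np.float32)
cbs, cds = product_quantize(W)
W_hat = dequantize(cbs, cds)
print("reconstruction MSE:", float(np.mean((W - W_hat) ** 2)))
```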
How to Formulate and Solve Statistical Recognition and Learning Problems | cs.LG | We formulate problems of statistical recognition and learning in a common
framework of complex hypothesis testing. Based on arguments from multi-criteria
optimization, we identify strategies that are improper for solving these
problems and derive a common form of the remaining strategies. We show that
some widely used approaches to recognition and learning are improper in this
sense. We then propose a generalized formulation of the recognition and
learning problem which embraces the whole range of sizes of the learning
sample, including the zero size. Learning becomes a special case of recognition
without learning. We define the concept of closest to optimal strategy, being a
solution to the formulated problem, and describe a technique for finding such a
strategy. On several illustrative cases, the strategy is shown to be superior
to the widely used learning methods based on maximum likelihood estimation.
| Michail Schlesinger and Evgeniy Vodolazskiy | null | 1509.08830 | null | null |
Foundations of Coupled Nonlinear Dimensionality Reduction | stat.ML cs.LG | In this paper we introduce and analyze the learning scenario of \emph{coupled
nonlinear dimensionality reduction}, which combines two major steps of the
machine learning pipeline: projection onto a manifold and subsequent supervised
learning. First, we present new generalization bounds for this scenario and,
second, we introduce an algorithm that follows from these bounds. The
generalization error bound is based on a careful analysis of the empirical
Rademacher complexity of the relevant hypothesis set. In particular, we show an
upper bound on the Rademacher complexity that is in $\widetilde
O(\sqrt{\Lambda_{(r)}/m})$, where $m$ is the sample size and $\Lambda_{(r)}$
the upper bound on the Ky-Fan $r$-norm of the associated kernel matrix. We give
both upper and lower bound guarantees in terms of that Ky-Fan $r$-norm, which
strongly justifies the definition of our hypothesis set. To the best of our
knowledge, these are the first learning guarantees for the problem of coupled
dimensionality reduction. Our analysis and learning guarantees further apply to
several special cases, such as that of using a fixed kernel with supervised
dimensionality reduction or that of unsupervised learning of a kernel for
dimensionality reduction followed by a supervised learning algorithm. Based on
theoretical analysis, we suggest a structural risk minimization algorithm
consisting of the coupled fitting of a low dimensional manifold and a
separation function on that manifold.
| Mehryar Mohri, Afshin Rostamizadeh, Dmitry Storcheus | null | 1509.08880 | null | null |
A Semi-Supervised Method for Predicting Cancer Survival Using Incomplete
Clinical Data | cs.LG | Prediction of survival for cancer patients is an open area of research.
However, many of these studies focus on datasets with a large number of
patients. We present a novel method that is specifically designed to address
the challenge of data scarcity, which is often the case for cancer datasets.
Our method is able to use unlabeled data to improve classification by adopting
a semi-supervised training approach to learn an ensemble classifier. The
results of applying our method to three cancer datasets show the promise of
semi-supervised learning for prediction of cancer survival.
| Hamid Reza Hassanzadeh and John H. Phan and May D. Wang | null | 1509.08888 | null | null |
Generalizing Pooling Functions in Convolutional Neural Networks: Mixed,
Gated, and Tree | stat.ML cs.LG cs.NE | We seek to improve deep neural networks by generalizing the pooling
operations that play a central role in current architectures. We pursue a
careful exploration of approaches to allow pooling to learn and to adapt to
complex and variable patterns. The two primary directions lie in (1) learning a
pooling function via (two strategies of) combining max and average pooling,
and (2) learning a pooling function in the form of a tree-structured fusion of
pooling filters that are themselves learned. In our experiments every
generalized pooling operation we explore improves performance when used in
place of average or max pooling. We experimentally demonstrate that the
proposed pooling operations provide a boost in invariance properties relative
to conventional pooling and set the state of the art on several widely adopted
benchmark datasets; they are also easy to implement, and can be applied within
various deep neural network architectures. These benefits come with only a
light increase in computational overhead during training and a very modest
increase in the number of model parameters.
| Chen-Yu Lee, Patrick W. Gallagher, Zhuowen Tu | null | 1509.08985 | null | null |
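As a concrete illustration of the first direction in the abstract above (learning a combination of max and average pooling), here is a minimal sketch of mixed pooling with a single mixing weight; in the paper the weight is learned and there are gated and tree-structured variants, none of which are reproduced here.

```python
import numpy as np

def mixed_pool(x, alpha, pool=2):
    """Mixed max/average pooling: alpha * max + (1 - alpha) * avg.

    x: (batch, channels, H, W) with H, W divisible by `pool`.
    alpha: scalar in [0, 1]; in the paper it is a learned parameter
    (possibly gated by the input), here just a fixed scalar for brevity."""
    b, c, h, w = x.shape
    xr = x.reshape(b, c, h // pool, pool, w // pool, pool)
    mx = xr.max(axis=(3, 5))
    av = xr.mean(axis=(3, 5))
    return alpha * mx + (1.0 - alpha) * av

x = np.random.randn(8, 16, 28, 28)
y = mixed_pool(x, alpha=0.7)
print(y.shape)  # (8, 16, 14, 14)
```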
Learning without Recall: A Case for Log-Linear Learning | cs.SI cs.LG cs.SY math.OC stat.ML | We analyze a model of learning and belief formation in networks in which
agents follow Bayes rule yet they do not recall their history of past
observations and cannot reason about how other agents' beliefs are formed. They
do so by making rational inferences about their observations which include a
sequence of independent and identically distributed private signals as well as
the beliefs of their neighboring agents at each time. Fully rational agents
would successively apply Bayes rule to the entire history of observations. This
leads to forbiddingly complex inferences due to lack of knowledge about the
global network structure that causes those observations. To address these
complexities, we consider a Learning without Recall model, which in addition to
providing a tractable framework for analyzing the behavior of rational agents
in social networks, can also provide a behavioral foundation for the variety of
non-Bayesian update rules in the literature. We present the implications of
various choices for time-varying priors of such agents and how this choice
affects learning and its rate.
| Mohammad Amin Rahimian and Ali Jadbabaie | null | 1509.08990 | null | null |
Maximum Likelihood Learning With Arbitrary Treewidth via Fast-Mixing
Parameter Sets | cs.LG stat.ML | Inference is typically intractable in high-treewidth undirected graphical
models, making maximum likelihood learning a challenge. One way to overcome
this is to restrict parameters to a tractable set, most typically the set of
tree-structured parameters. This paper explores an alternative notion of a
tractable set, namely a set of "fast-mixing parameters" where Markov chain
Monte Carlo (MCMC) inference can be guaranteed to quickly converge to the
stationary distribution. While it is common in practice to approximate the
likelihood gradient using samples obtained from MCMC, such procedures lack
theoretical guarantees. This paper proves that for any exponential family with
bounded sufficient statistics (not just graphical models), when parameters are
constrained to a fast-mixing set, gradient descent with gradients approximated
by sampling will approximate the maximum likelihood solution inside the set
with high-probability. When unregularized, to find a solution epsilon-accurate
in log-likelihood requires a total amount of effort cubic in 1/epsilon,
disregarding logarithmic factors. When ridge-regularized, strong convexity
allows a solution epsilon-accurate in parameter distance with effort quadratic
in 1/epsilon. Both of these provide a fully polynomial-time randomized
approximation scheme.
| Justin Domke | null | 1509.08992 | null | null |
Convergence of Stochastic Gradient Descent for PCA | cs.LG math.OC stat.ML | We consider the problem of principal component analysis (PCA) in a streaming
stochastic setting, where our goal is to find a direction of approximate
maximal variance, based on a stream of i.i.d. data points in $\mathbb{R}^d$. A
simple and computationally cheap algorithm for this is stochastic gradient
descent (SGD), which incrementally updates its estimate based on each new data
point. However, due to the non-convex nature of the problem, analyzing its
performance has been a challenge. In particular, existing guarantees rely on a
non-trivial eigengap assumption on the covariance matrix, which is intuitively
unnecessary. In this paper, we provide (to the best of our knowledge) the first
eigengap-free convergence guarantees for SGD in the context of PCA. This also
partially resolves an open problem posed in \cite{hardt2014noisy}. Moreover,
under an eigengap assumption, we show that the same techniques lead to new SGD
convergence guarantees with better dependence on the eigengap.
| Ohad Shamir | null | 1509.09002 | null | null |
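For orientation, below is a minimal numpy sketch of the kind of streaming PCA update analysed above: a stochastic gradient step on the leading-eigenvector objective followed by projection back to the unit sphere (an Oja-style update). The step-size schedule and the exact variant studied in the paper are not reproduced here.

```python
import numpy as np

def streaming_pca_sgd(data_stream, d, eta=0.01):
    """Estimate the top principal direction from a stream of points in R^d.

    Each step takes a stochastic gradient step w <- w + eta * x (x^T w)
    and renormalises w to the unit sphere."""
    rng = np.random.default_rng(0)
    w = rng.standard_normal(d)
    w /= np.linalg.norm(w)
    for x in data_stream:
        w = w + eta * x * (x @ w)
        w /= np.linalg.norm(w)
    return w

# Toy check: data with dominant variance along the first coordinate.
rng = np.random.default_rng(1)
scale = np.array([3.0] + [0.5] * 9)
stream = (rng.standard_normal(10) * scale for _ in range(20000))
w = streaming_pca_sgd(stream, d=10)
print(abs(w[0]))  # should be close to 1
```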
Regret Lower Bound and Optimal Algorithm in Finite Stochastic Partial
Monitoring | stat.ML cs.LG | Partial monitoring is a general model for sequential learning with limited
feedback formalized as a game between two players. In this game, the learner
chooses an action and at the same time the opponent chooses an outcome, then
the learner suffers a loss and receives a feedback signal. The goal of the
learner is to minimize the total loss. In this paper, we study partial
monitoring with finite actions and stochastic outcomes. We derive a logarithmic
distribution-dependent regret lower bound that defines the hardness of the
problem. Inspired by the DMED algorithm (Honda and Takemura, 2010) for the
multi-armed bandit problem, we propose PM-DMED, an algorithm that minimizes the
distribution-dependent regret. PM-DMED significantly outperforms
state-of-the-art algorithms in numerical experiments. To show the optimality of
PM-DMED with respect to the regret bound, we slightly modify the algorithm by
introducing a hinge function (PM-DMED-Hinge). Then, we derive an asymptotically
optimal regret upper bound of PM-DMED-Hinge that matches the lower bound.
| Junpei Komiyama, Junya Honda, Hiroshi Nakagawa | null | 1509.09011 | null | null |
Distributed Weighted Parameter Averaging for SVM Training on Big Data | cs.LG | Two popular approaches for distributed training of SVMs on big data are
parameter averaging and ADMM. Parameter averaging is efficient but suffers from
loss of accuracy with increase in number of partitions, while ADMM in the
feature space is accurate but suffers from slow convergence. In this paper, we
report a hybrid approach called weighted parameter averaging (WPA), which
optimizes the regularized hinge loss with respect to weights on parameters. The
problem is shown to be the same as solving an SVM in a projected space. We also
demonstrate an $O(\frac{1}{N})$ stability bound on final hypothesis given by
WPA, using novel proof techniques. Experimental results on a variety of toy and
real world datasets show that our approach is significantly more accurate than
parameter averaging for a high number of partitions. The proposed
method is also seen to enjoy much faster convergence than ADMM in the feature space.
| Ayan Das and Sourangshu Bhattacharya | null | 1509.09030 | null | null |
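A rough sketch of the weighted-parameter-averaging idea described above: train one linear SVM per data partition, then learn a weight per partition by minimising the regularised hinge loss of the weighted-average predictor. The optimiser, features, and regularisation used by the paper are not given in the abstract, so everything below (scikit-learn LinearSVC for the local models, plain subgradient descent for the weights) is an illustrative assumption.

```python
import numpy as np
from sklearn.svm import LinearSVC

def local_svms(partitions):
    """Fit one linear SVM per data partition; return stacked weight vectors."""
    ws = []
    for X, y in partitions:                          # y in {-1, +1}
        clf = LinearSVC(C=1.0, fit_intercept=False).fit(X, y)
        ws.append(clf.coef_.ravel())
    return np.stack(ws)                              # (n_partitions, d)

def weighted_parameter_averaging(W, X, y, lam=1e-3, eta=0.1, iters=200):
    """Learn per-partition weights beta for the combined predictor w = W^T beta
    by subgradient descent on lam*||w||^2 + mean hinge loss over (X, y)."""
    beta = np.full(W.shape[0], 1.0 / W.shape[0])     # start from plain averaging
    for _ in range(iters):
        w = W.T @ beta
        active = y * (X @ w) < 1.0                   # margin violators
        hinge_grad = np.zeros_like(w)
        if active.any():
            hinge_grad = -(X[active] * y[active, None]).sum(axis=0) / len(y)
        grad_w = 2.0 * lam * w + hinge_grad
        beta -= eta * (W @ grad_w)                   # chain rule: dL/dbeta = W dL/dw
    return beta, W.T @ beta

# Usage: W = local_svms(partitions); beta, w = weighted_parameter_averaging(W, X_val, y_val)
```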
Learning From Missing Data Using Selection Bias in Movie Recommendation | stat.ML cs.IR cs.LG cs.SI | Recommending items to users is a challenging task due to the large amount of
missing information. In many cases, the data solely consist of ratings or tags
voluntarily contributed by each user on a very limited subset of the available
items, so that most of the data of potential interest is actually missing.
Current approaches to recommendation usually assume that the unobserved data is
missing at random. In this contribution, we provide statistical evidence that
existing movie recommendation datasets reveal a significant positive
association between the rating of items and the propensity to select these
items. We propose a computationally efficient variational approach that makes
it possible to exploit this selection bias so as to improve the estimation of
ratings from small populations of users. Results obtained with this approach
applied to neighborhood-based collaborative filtering illustrate its potential
for improving the reliability of the recommendation.
| Claire Vernade (LTCI), Olivier Capp\'e (LTCI) | null | 1509.09130 | null | null |
Deep Haar Scattering Networks | cs.LG | An orthogonal Haar scattering transform is a deep network, computed with a
hierarchy of additions, subtractions and absolute values, over pairs of
coefficients. It provides a simple mathematical model for unsupervised deep
network learning. It implements non-linear contractions, which are optimized
for classification, with an unsupervised pair matching algorithm, of polynomial
complexity. A structured Haar scattering over graph data computes permutation
invariant representations of groups of connected points in the graph. If the
graph connectivity is unknown, unsupervised Haar pair learning can provide a
consistent estimation of connected dyadic groups of points. Classification
results are given on image databases, defined on regular grids or graphs, with
a connectivity which may be known or unknown.
| Xiuyuan Cheng, Xu Chen, Stephane Mallat | null | 1509.09187 | null | null |
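The core operation in the abstract above is a cascade of additions, subtractions, and absolute values over pairs of coefficients. A minimal sketch of one orthogonal Haar scattering layer over a given pairing is shown below; how the pairs are chosen (the unsupervised pair-matching step) is the learned part and is not reproduced here.

```python
import numpy as np

def haar_scattering_layer(x, pairs):
    """One orthogonal Haar scattering layer.

    x: (n_signals, n_coeffs) array of coefficients.
    pairs: list of (i, j) index pairs partitioning the coefficient indices.
    Each pair (a, b) is mapped to (a + b, |a - b|)."""
    out = []
    for i, j in pairs:
        out.append(x[:, i] + x[:, j])
        out.append(np.abs(x[:, i] - x[:, j]))
    return np.stack(out, axis=1)

# Example: a two-layer cascade on 8 coefficients with fixed dyadic pairings
# (the actual transform pairs coefficients according to the learned matching).
x = np.random.randn(5, 8)
layer1 = haar_scattering_layer(x, pairs=[(0, 1), (2, 3), (4, 5), (6, 7)])
layer2 = haar_scattering_layer(layer1, pairs=[(0, 1), (2, 3), (4, 5), (6, 7)])
print(layer2.shape)  # (5, 8)
```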
On the Complexity of Robust PCA and $\ell_1$-norm Low-Rank Matrix
Approximation | cs.LG cs.CC math.NA math.OC | The low-rank matrix approximation problem with respect to the component-wise
$\ell_1$-norm ($\ell_1$-LRA), which is closely related to robust principal
component analysis (PCA), has become a very popular tool in data mining and
machine learning. Robust PCA aims at recovering a low-rank matrix that was
perturbed with sparse noise, with applications for example in
foreground-background video separation. Although $\ell_1$-LRA is strongly
believed to be NP-hard, there is, to the best of our knowledge, no formal proof
of this fact. In this paper, we prove that $\ell_1$-LRA is NP-hard, already in
the rank-one case, using a reduction from MAX CUT. Our derivations draw
interesting connections between $\ell_1$-LRA and several other well-known
problems, namely, robust PCA, $\ell_0$-LRA, binary matrix factorization, a
particular densest bipartite subgraph problem, the computation of the cut norm
of $\{-1,+1\}$ matrices, and the discrete basis problem, which we all prove to
be NP-hard.
| Nicolas Gillis, Stephen A. Vavasis | 10.1287/moor.2017.0895 | 1509.09236 | null | null |
Convolutional Networks on Graphs for Learning Molecular Fingerprints | cs.LG cs.NE stat.ML | We introduce a convolutional neural network that operates directly on graphs.
These networks allow end-to-end learning of prediction pipelines whose inputs
are graphs of arbitrary size and shape. The architecture we present generalizes
standard molecular feature extraction methods based on circular fingerprints.
We show that these data-driven features are more interpretable, and have better
predictive performance on a variety of tasks.
| David Duvenaud, Dougal Maclaurin, Jorge Aguilera-Iparraguirre, Rafael
G\'omez-Bombarelli, Timothy Hirzel, Al\'an Aspuru-Guzik, Ryan P. Adams | null | 1509.09292 | null | null |
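A rough sketch of a graph-convolutional fingerprint layer in the spirit of the abstract above: at each layer, every atom aggregates its neighbours' features through a dense layer, and a softmax over fingerprint indices replaces the hashing/indexing step of circular fingerprints. Weight shapes, the number of layers, and the omission of degree-specific parameters are simplifying assumptions, not the paper's exact architecture.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def neural_fingerprint(atom_feats, adjacency, W, V, n_layers=2):
    """Differentiable fingerprint for one molecule.

    atom_feats: (n_atoms, d) initial atom features.
    adjacency:  (n_atoms, n_atoms) 0/1 bond matrix.
    W: (d, d) layer weight (shared across layers here, for brevity).
    V: (d, fp_len) readout weight mapping atom states to fingerprint indices."""
    h = atom_feats.copy()
    fp = np.zeros(V.shape[1])
    for _ in range(n_layers):
        # each atom sums itself and its neighbours, then a smooth nonlinearity
        h = np.tanh((h + adjacency @ h) @ W)
        # soft "write" into the fingerprint instead of hashing to a single index
        for a in range(h.shape[0]):
            fp += softmax(h[a] @ V)
    return fp

d, fp_len, n_atoms = 8, 32, 5
rng = np.random.default_rng(0)
A = np.zeros((n_atoms, n_atoms))
A[0, 1] = A[1, 0] = A[1, 2] = A[2, 1] = 1
fp = neural_fingerprint(rng.standard_normal((n_atoms, d)), A,
                        rng.standard_normal((d, d)) * 0.1,
                        rng.standard_normal((d, fp_len)) * 0.1)
print(fp.shape)  # (32,)
```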
Fast Algorithms for Convolutional Neural Networks | cs.NE cs.LG | Deep convolutional neural networks take GPU days of compute time to train on
large data sets. Pedestrian detection for self driving cars requires very low
latency. Image recognition for mobile phones is constrained by limited
processing resources. The success of convolutional neural networks in these
situations is limited by how fast we can compute them. Conventional FFT-based
convolution is fast for large filters, but state-of-the-art convolutional
neural networks use small, 3x3 filters. We introduce a new class of fast
algorithms for convolutional neural networks using Winograd's minimal filtering
algorithms. The algorithms compute minimal complexity convolution over small
tiles, which makes them fast with small filters and small batch sizes. We
benchmark a GPU implementation of our algorithm with the VGG network and show
state of the art throughput at batch sizes from 1 to 64.
| Andrew Lavin and Scott Gray | null | 1509.09308 | null | null |
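To make the minimal-filtering idea concrete, below is the classic 1-D Winograd F(2,3) transform: two outputs of a 3-tap filter from 4 multiplications instead of 6. The 2-D tile algorithms benchmarked in the paper (e.g. F(2x2,3x3)) nest this construction; the sketch is only the scalar building block.

```python
import numpy as np

def winograd_f23(d, g):
    """Winograd F(2,3): two outputs of a 3-tap correlation from a 4-element
    input tile, using 4 multiplications (a direct computation needs 6).

    d: length-4 input tile [d0, d1, d2, d3]
    g: length-3 filter     [g0, g1, g2]
    returns [y0, y1] with y0 = d0*g0 + d1*g1 + d2*g2, y1 = d1*g0 + d2*g1 + d3*g2"""
    m1 = (d[0] - d[2]) * g[0]
    m2 = (d[1] + d[2]) * (g[0] + g[1] + g[2]) / 2.0
    m3 = (d[2] - d[1]) * (g[0] - g[1] + g[2]) / 2.0
    m4 = (d[1] - d[3]) * g[2]
    return np.array([m1 + m2 + m3, m2 - m3 - m4])

d = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([0.5, -1.0, 2.0])
direct = np.array([d[0]*g[0] + d[1]*g[1] + d[2]*g[2],
                   d[1]*g[0] + d[2]*g[1] + d[3]*g[2]])
print(np.allclose(winograd_f23(d, g), direct))  # True
```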
Fast Discrete Distribution Clustering Using Wasserstein Barycenter with
Sparse Support | stat.CO cs.LG stat.ML | In a variety of research areas, the weighted bag of vectors and the histogram
are widely used descriptors for complex objects. Both can be expressed as
discrete distributions. D2-clustering pursues the minimum total within-cluster
variation for a set of discrete distributions subject to the
Kantorovich-Wasserstein metric. D2-clustering has a severe scalability issue,
the bottleneck being the computation of a centroid distribution, called
Wasserstein barycenter, that minimizes its sum of squared distances to the
cluster members. In this paper, we develop a modified Bregman ADMM approach for
computing the approximate discrete Wasserstein barycenter of large clusters. In
the case when the support points of the barycenters are unknown and have low
cardinality, our method achieves high accuracy empirically at a much reduced
computational cost. The strengths and weaknesses of our method and its
alternatives are examined through experiments, and we recommend scenarios for
their respective usage. Moreover, we develop both serial and parallelized
versions of the algorithm. By experimenting with large-scale data, we
demonstrate the computational efficiency of the new methods and investigate
their convergence properties and numerical stability. The clustering results
obtained on several datasets in different domains are highly competitive in
comparison with some widely used methods in the corresponding areas.
| Jianbo Ye, Panruo Wu, James Z. Wang and Jia Li | null | 1510.00012 | null | null |
Clamping Improves TRW and Mean Field Approximations | cs.LG cs.AI stat.ML | We examine the effect of clamping variables for approximate inference in
undirected graphical models with pairwise relationships and discrete variables.
For any number of variable labels, we demonstrate that clamping and summing
approximate sub-partition functions can lead only to a decrease in the
partition function estimate for TRW, and an increase for the naive mean field
method, in each case guaranteeing an improvement in the approximation and
bound. We next focus on binary variables, add the Bethe approximation to
consideration and examine ways to choose good variables to clamp, introducing
new methods. We show the importance of identifying highly frustrated cycles,
and of checking the singleton entropy of a variable. We explore the value of
our methods by empirical analysis and draw lessons to guide practitioners.
| Adrian Weller and Justin Domke | null | 1510.00087 | null | null |
Supporting Regularized Logistic Regression Privately and Efficiently | cs.LG cs.CR q-bio.GN | As one of the most popular statistical and machine learning models, logistic
regression with regularization has found wide adoption in biomedicine, social
sciences, information technology, and so on. These domains often involve data
of human subjects that are contingent upon strict privacy regulations.
Increasing concerns over data privacy make it more and more difficult to
coordinate and conduct large-scale collaborative studies, which typically rely
on cross-institution data sharing and joint analysis. Our work here focuses on
safeguarding regularized logistic regression, a machine learning
model widely used in various disciplines that has nevertheless not been investigated
from a data security and privacy perspective. We consider a common use scenario
of multi-institution collaborative studies, such as in the form of research
consortia or networks as widely seen in genetics, epidemiology, social
sciences, etc. To make our privacy-enhancing solution practical, we demonstrate
a non-conventional and computationally efficient method leveraging distributed
computing and strong cryptography to provide comprehensive protection over
individual-level and summary data. Extensive empirical evaluation on several
studies validated the privacy guarantees, efficiency and scalability of our
proposal. We also discuss the practical implications of our solution for
large-scale studies and applications from various disciplines, including
genetic and biomedical studies, smart grid, network analysis, etc.
| Wenfa Li, Hongzhe Liu, Peng Yang, Wei Xie | 10.1371/journal.pone.0156479 | 1510.00095 | null | null |
Disk storage management for LHCb based on Data Popularity estimator | cs.DC cs.LG physics.data-an | This paper presents an algorithm providing recommendations for optimizing the
LHCb data storage. The LHCb data storage system is a hybrid system. All
datasets are kept as archives on magnetic tapes. The most popular datasets are
kept on disks. The algorithm takes the dataset usage history and metadata
(size, type, configuration etc.) to generate a recommendation report. This
article presents how we use machine learning algorithms to predict future data
popularity. Using these predictions it is possible to estimate which datasets
should be removed from disk. We use regression algorithms and time series
analysis to find the optimal number of replicas for datasets that are kept on
disk. Based on the data popularity and the number of replicas optimization, the
algorithm minimizes a loss function to find the optimal data distribution. The
loss function represents all requirements for data distribution in the data
storage system. We demonstrate how our algorithm helps to save disk space and
to reduce waiting times for jobs using this data.
| Mikhail Hushchyn, Philippe Charpentier, Andrey Ustyuzhanin | 10.1088/1742-6596/664/4/042026 | 1510.00132 | null | null |
A Generative Model of Words and Relationships from Multiple Sources | cs.CL cs.LG stat.ML | Neural language models are a powerful tool to embed words into semantic
vector spaces. However, learning such models generally relies on the
availability of abundant and diverse training examples. In highly specialised
domains this requirement may not be met due to difficulties in obtaining a
large corpus, or the limited range of expression in average use. Such domains
may encode prior knowledge about entities in a knowledge base or ontology. We
propose a generative model which integrates evidence from diverse data sources,
enabling the sharing of semantic information. We achieve this by generalising
the concept of co-occurrence from distributional semantics to include other
relationships between entities or words, which we model as affine
transformations on the embedding space. We demonstrate the effectiveness of
this approach by outperforming recent models on a link prediction task and
demonstrating its ability to profit from partially or fully unobserved
training labels. We further demonstrate the usefulness of learning from
different data sources with overlapping vocabularies.
| Stephanie L. Hyland, Theofanis Karaletsos, Gunnar R\"atsch | null | 1510.00259 | null | null |
Optimal Binary Classifier Aggregation for General Losses | cs.LG stat.ML | We address the problem of aggregating an ensemble of predictors with known
loss bounds in a semi-supervised binary classification setting, to minimize
prediction loss incurred on the unlabeled data. We find the minimax optimal
predictions for a very general class of loss functions including all convex and
many non-convex losses, extending a recent analysis of the problem for
misclassification error. The result is a family of semi-supervised ensemble
aggregation algorithms which are as efficient as linear learning by convex
optimization, but are minimax optimal without any relaxations. Their decision
rules take a form familiar in decision theory -- applying sigmoid functions to
a notion of ensemble margin -- without the assumptions typically made in
margin-based learning.
| Akshay Balsubramani, Yoav Freund | null | 1510.00452 | null | null |
Multi-armed Bandits with Application to 5G Small Cells | cs.LG cs.DC cs.NI | Due to the pervasive demand for mobile services, next generation wireless
networks are expected to be able to deliver high data rates while wireless
resources become more and more scarce. This requires the next generation
wireless networks to move towards new networking paradigms that are able to
efficiently support resource-demanding applications such as personalized mobile
services. Examples of such paradigms foreseen for the emerging fifth generation
(5G) cellular networks include very densely deployed small cells and
device-to-device communications. For 5G networks, it will be imperative to
search for spectrum and energy-efficient solutions to the resource allocation
problems that i) are amenable to distributed implementation, ii) are capable of
dealing with uncertainty and lack of information, and iii) can cope with users'
selfishness. The core objective of this article is to investigate and to
establish the potential of multi-armed bandit (MAB) framework to address this
challenge. In particular, we provide a brief tutorial on bandit problems,
including different variations and solution approaches. Furthermore, we discuss
recent applications as well as future research directions. In addition, we
provide a detailed example of using an MAB model for energy-efficient small
cell planning in 5G networks.
| Setareh Maghsudi and Ekram Hossain | 10.1109/MWC.2016.7498076 | 1510.00627 | null | null |
Distributed Multitask Learning | stat.ML cs.LG | We consider the problem of distributed multi-task learning, where each
machine learns a separate, but related, task. Specifically, each machine learns
a linear predictor in high-dimensional space, where all tasks share the same
small support. We present a communication-efficient estimator based on the
debiased lasso and show that it is comparable with the optimal centralized
method.
| Jialei Wang, Mladen Kolar, Nathan Srebro | null | 1510.00633 | null | null |
Rapidly Mixing Gibbs Sampling for a Class of Factor Graphs Using
Hierarchy Width | cs.LG | Gibbs sampling on factor graphs is a widely used inference technique, which
often produces good empirical results. Theoretical guarantees for its
performance are weak: even for tree structured graphs, the mixing time of Gibbs
may be exponential in the number of variables. To help understand the behavior
of Gibbs sampling, we introduce a new (hyper)graph property, called hierarchy
width. We show that under suitable conditions on the weights, bounded hierarchy
width ensures polynomial mixing time. Our study of hierarchy width is in part
motivated by a class of factor graph templates, hierarchical templates, which
have bounded hierarchy width---regardless of the data used to instantiate them.
We demonstrate a rich application from natural language processing in which
Gibbs sampling provably mixes rapidly and achieves accuracy that exceeds human
volunteers.
| Christopher De Sa, Ce Zhang, Kunle Olukotun, Christopher R\'e | null | 1510.00756 | null | null |
A Survey of Online Experiment Design with the Stochastic Multi-Armed
Bandit | stat.ML cs.LG | Adaptive and sequential experiment design is a well-studied area in numerous
domains. We survey and synthesize the work of the online statistical learning
paradigm referred to as multi-armed bandits integrating the existing research
as a resource for a certain class of online experiments. We first explore the
traditional stochastic model of a multi-armed bandit, then explore a taxonomic
scheme of complications to that model, for each complication relating it to a
specific requirement or consideration of the experiment design context.
Finally, at the end of the paper, we present a table of known upper-bounds of
regret for all studied algorithms providing both perspectives for future
theoretical work and a decision-making tool for practitioners looking for
theoretical guarantees.
| Giuseppe Burtini, Jason Loeppky, Ramon Lawrence | null | 1510.00757 | null | null |
Machine Learning for Machine Data from a CATI Network | cs.LG | This is a machine learning application paper involving big data. We present
high-accuracy prediction methods of rare events in semi-structured machine log
files, which are produced at high velocity and high volume by NORC's
computer-assisted telephone interviewing (CATI) network for conducting surveys.
We judiciously apply natural language processing (NLP) techniques and
data-mining strategies to train effective learning and prediction models for
classifying uncommon error messages in the log---without access to source code,
updated documentation or dictionaries. In particular, our simple but effective
approach of feature preallocation for learning from imbalanced data coupled
with naive Bayes classifiers can be conceivably generalized to supervised or
semi-supervised learning and prediction methods for other critical events such
as cyberattack detection.
| Sou-Cheng T. Choi | null | 1510.00772 | null | null |
Distributed Parameter Map-Reduce | cs.DC cs.LG stat.ML | This paper describes how to convert a machine learning problem into a series
of map-reduce tasks. We study the logistic regression algorithm. In logistic
regression, it is assumed that samples are independent and each sample is
assigned a probability. Parameters are obtained by maximizing the product of
all sample probabilities. The rapid growth of training samples brings
challenges to machine learning methods: training samples are so numerous that
they can only be stored in a distributed file system and processed by
map-reduce style programs. The main step of logistic regression is inference.
In the map-reduce spirit, each sample performs inference through a separate map
procedure. But the premise of inference is that the map procedure holds the
parameters for all features in the sample. In this paper, we propose
Distributed Parameter Map-Reduce, in which not only samples but also parameters
are distributed across nodes of the distributed file system. Through a series
of map-reduce tasks, we assign each sample the parameters for its features,
perform inference for the sample, and update the parameters of the model. These
steps are executed repeatedly until convergence. We test the proposed algorithm
in an actual Hadoop production environment. Experiments show that the speedup
of the algorithm scales linearly with the number of cluster nodes.
| Qi Li | null | 1510.00817 | null | null |
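The workflow described above (inference in the map phase, parameter updates driven by reduce) can be illustrated with a minimal single-process simulation of the map and reduce steps for one logistic regression training iteration; the parameter-joining tasks, HDFS storage, and Hadoop specifics are not reproduced, and the learning rate and data are illustrative only.

```python
import numpy as np
from collections import defaultdict

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def map_phase(sample, params):
    """Map: one (features, label) sample, joined with the parameters of its
    features, emits (feature, gradient contribution) pairs."""
    feats, label = sample                       # feats: dict {feature_id: value}
    z = sum(params.get(f, 0.0) * v for f, v in feats.items())
    err = label - sigmoid(z)                    # gradient of the log-likelihood
    return [(f, err * v) for f, v in feats.items()]

def reduce_phase(emitted, params, lr=0.1):
    """Reduce: sum gradient contributions per feature, then update parameters."""
    grads = defaultdict(float)
    for f, g in emitted:
        grads[f] += g
    new_params = dict(params)
    for f, g in grads.items():
        new_params[f] = new_params.get(f, 0.0) + lr * g
    return new_params

# A few iterations over a tiny sparse dataset (label in {0, 1}).
data = [({0: 1.0, 2: 0.5}, 1), ({1: 1.0, 2: 1.0}, 0), ({0: 1.0, 1: 0.3}, 1)]
params = {}
for _ in range(50):                             # "executed repeatedly until convergence"
    emitted = [kv for s in data for kv in map_phase(s, params)]
    params = reduce_phase(emitted, params)
print(params)
```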
Approximate Fisher Kernels of non-iid Image Models for Image
Categorization | cs.CV cs.LG | The bag-of-words (BoW) model treats images as sets of local descriptors and
represents them by visual word histograms. The Fisher vector (FV)
representation extends BoW, by considering the first and second order
statistics of local descriptors. In both representations local descriptors are
assumed to be identically and independently distributed (iid), which is a poor
assumption from a modeling perspective. It has been experimentally observed
that the performance of BoW and FV representations can be improved by employing
discounting transformations such as power normalization. In this paper, we
introduce non-iid models by treating the model parameters as latent variables
which are integrated out, rendering all local regions dependent. Using the
Fisher kernel principle we encode an image by the gradient of the data
log-likelihood w.r.t. the model hyper-parameters. Our models naturally generate
discounting effects in the representations; suggesting that such
transformations have proven successful because they closely correspond to the
representations obtained for non-iid models. To enable tractable computation,
we rely on variational free-energy bounds to learn the hyper-parameters and to
compute approximate Fisher kernels. Our experimental evaluation results
validate that our models lead to performance improvements comparable to using
power normalization, as employed in state-of-the-art feature aggregation
methods.
| Ramazan Gokberk Cinbis, Jakob Verbeek, Cordelia Schmid | 10.1109/TPAMI.2015.2484342 | 1510.00857 | null | null |
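The "discounting transformations such as power normalization" mentioned above are simple to state. A minimal sketch of power normalization followed by the usual L2 normalization, applied to a Fisher-vector-style descriptor, is given below; the exponent 0.5 is the common default, not necessarily what the paper compares against.

```python
import numpy as np

def power_l2_normalize(v, rho=0.5, eps=1e-12):
    """Power ("signed square-root" when rho=0.5) followed by L2 normalization,
    the standard discounting transform for BoW/FV descriptors."""
    v = np.sign(v) * np.abs(v) ** rho
    return v / (np.linalg.norm(v) + eps)

fv = np.random.randn(4096)
print(np.linalg.norm(power_l2_normalize(fv)))  # ~1.0
```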
Client Profiling for an Anti-Money Laundering System | cs.LG cs.AI stat.ML | We present a data mining approach for profiling bank clients in order to
support the process of detection of anti-money laundering operations. We first
present the overall system architecture, and then focus on the relevant
component for this paper. We detail the experiments performed on real world
data from a financial institution, which allowed us to group clients in
clusters and then generate a set of classification rules. We discuss the
relevance of the discovered client profiles and of the generated classification
rules. According to the defined overall agent-based architecture, these rules
will be incorporated in the knowledge base of the intelligent agents
responsible for the signaling of suspicious transactions.
| Claudio Alexandre and Jo\~ao Balsa | null | 1510.00878 | null | null |
Quadratic Optimization with Orthogonality Constraints: Explicit
Lojasiewicz Exponent and Linear Convergence of Line-Search Methods | math.OC cs.LG cs.NA math.NA | A fundamental class of matrix optimization problems that arise in many areas
of science and engineering is that of quadratic optimization with orthogonality
constraints. Such problems can be solved using line-search methods on the
Stiefel manifold, which are known to converge globally under mild conditions.
To determine the convergence rate of these methods, we give an explicit
estimate of the exponent in a Lojasiewicz inequality for the (non-convex) set
of critical points of the aforementioned class of problems. By combining such
an estimate with known arguments, we are able to establish the linear
convergence of a large class of line-search methods. A key step in our proof is
to establish a local error bound for the set of critical points, which may be
of independent interest.
| Huikang Liu and Weijie Wu and Anthony Man-Cho So | null | 1510.01025 | null | null |
Relaxed Multiple-Instance SVM with Application to Object Discovery | cs.CV cs.LG | Multiple-instance learning (MIL) has served as an important tool for a wide
range of vision applications, for instance, image classification, object
detection, and visual tracking. In this paper, we propose a novel method to
solve the classical MIL problem, named relaxed multiple-instance SVM (RMI-SVM).
We treat the positiveness of instance as a continuous variable, use Noisy-OR
model to enforce the MIL constraints, and jointly optimize the bag label and
instance label in a unified framework. The optimization problem can be
efficiently solved using stochastic gradient descent. The extensive experiments
demonstrate that RMI-SVM consistently achieves superior performance on various
benchmarks for MIL. Moreover, we simply applied RMI-SVM to a challenging vision
task, common object discovery. The state-of-the-art results of object discovery
on Pascal VOC datasets further confirm the advantages of the proposed method.
| Xinggang Wang, Zhuotun Zhu, Cong Yao, Xiang Bai | null | 1510.01027 | null | null |
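The Noisy-OR constraint mentioned above ties instance-level positiveness to the bag label: a bag is positive unless every instance is negative. A minimal sketch of that bag-probability computation (done in log-space for numerical stability) is shown below; the joint stochastic optimization over bag and instance labels and the relaxation used in RMI-SVM are not reproduced.

```python
import numpy as np

def bag_probability(instance_probs, eps=1e-12):
    """Noisy-OR: P(bag = 1) = 1 - prod_i (1 - p_i),
    where p_i is the probability that instance i is positive."""
    p = np.clip(instance_probs, eps, 1 - eps)
    log_all_negative = np.sum(np.log1p(-p))     # log prod (1 - p_i)
    return 1.0 - np.exp(log_all_negative)

print(bag_probability(np.array([0.05, 0.1, 0.02])))  # low: likely negative bag
print(bag_probability(np.array([0.05, 0.9, 0.02])))  # high: one strong instance
```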
Boosting in the presence of outliers: adaptive classification with
non-convex loss functions | stat.ML cs.AI cs.LG math.ST stat.ME stat.TH | This paper examines the role and efficiency of the non-convex loss functions
for binary classification problems. In particular, we investigate how to design
a simple and effective boosting algorithm that is robust to the outliers in the
data. The analysis of the role of a particular non-convex loss for prediction
accuracy varies depending on the diminishing tail properties of the gradient of
the loss -- the ability of the loss to efficiently adapt to the outlying data,
the local convex properties of the loss and the proportion of the contaminated
data. In order to use these properties efficiently, we propose a new family of
non-convex losses named $\gamma$-robust losses. Moreover, we present a new
boosting framework, {\it Arch Boost}, designed for augmenting the existing work
such that its corresponding classification algorithm is significantly more
adaptable to the unknown data contamination. Along with the Arch Boosting
framework, the non-convex losses lead to the new class of boosting algorithms,
named adaptive, robust, boosting (ARB). Furthermore, we present theoretical
examples that demonstrate the robustness properties of the proposed algorithms.
In particular, we develop a new breakdown point analysis and a new influence
function analysis that demonstrate gains in robustness. Moreover, we present
new theoretical results, based only on local curvatures, which may be used to
establish statistical and optimization properties of the proposed Arch boosting
algorithms with highly non-convex loss functions. Extensive numerical
calculations are used to illustrate these theoretical properties and reveal
advantages over the existing boosting methods when data exhibits a number of
outliers.
| Alexander Hanbo Li and Jelena Bradic | 10.1080/01621459.2016.1273116 | 1510.01064 | null | null |
On the Online Frank-Wolfe Algorithms for Convex and Non-convex
Optimizations | stat.ML cs.LG | In this paper, the online variants of the classical Frank-Wolfe algorithm are
considered. We consider minimizing the regret with a stochastic cost. The
online algorithms only require simple iterative updates and a non-adaptive step
size rule, in contrast to the hybrid schemes commonly considered in the
literature. Several new results are derived for convex and non-convex losses.
With a strongly convex stochastic cost and when the optimal solution lies in
the interior of the constraint set or the constraint set is a polytope, the
regret bound and anytime optimality are shown to be ${\cal O}( \log^3 T / T )$
and ${\cal O}( \log^2 T / T)$, respectively, where $T$ is the number of rounds
played. These results are based on an improved analysis on the stochastic
Frank-Wolfe algorithms. Moreover, the online algorithms are shown to converge
even when the loss is non-convex, i.e., the algorithms find a stationary point
to the time-varying/stochastic loss at a rate of ${\cal O}(\sqrt{1/T})$.
Numerical experiments on realistic data sets are presented to support our
theoretical claims.
| Jean Lafond, Hoi-To Wai, Eric Moulines | null | 1510.01171 | null | null |
Cross-Device Tracking: Matching Devices and Cookies | cs.LG cs.CY | The number of computers, tablets and smartphones is increasing rapidly, which
entails the ownership and use of multiple devices to perform online tasks. As
people move across devices to complete these tasks, their identities become
fragmented. Understanding the usage and transition between those devices is
essential to develop efficient applications in a multi-device world. In this
paper we present a solution to deal with the cross-device identification of
users based on semi-supervised machine learning methods to identify which
cookies belong to an individual using a device. The method proposed in this
paper scored third in the ICDM 2015 Drawbridge Cross-Device Connections
challenge, proving its good performance.
| Roberto D\'iaz-Morales | 10.1109/ICDMW.2015.244 | 1510.01175 | null | null |
Bayesian Inference via Approximation of Log-likelihood for Priors in
Exponential Family | cs.LG stat.ML | In this paper, a Bayesian inference technique based on Taylor series
approximation of the logarithm of the likelihood function is presented. The
proposed approximation is devised for the case where the prior distribution
belongs to the exponential family of distributions. The logarithm of the
likelihood function is linearized with respect to the sufficient statistic of
the prior distribution in exponential family such that the posterior obtains
the same exponential family form as the prior. Similarities between the
proposed method and the extended Kalman filter for nonlinear filtering are
illustrated. Furthermore, an extended target measurement update for target
models where the target extent is represented by a random matrix having an
inverse Wishart distribution is derived. The approximate update covers the
important case where the spread of measurements is due to the target extent as
well as the measurement noise in the sensor.
| Tohid Ardeshiri, Umut Orguner, and Fredrik Gustafsson | null | 1510.01225 | null | null |
Learning in Unlabeled Networks - An Active Learning and Inference
Approach | stat.ML cs.LG cs.SI | The task of determining labels of all network nodes based on the knowledge
about network structure and labels of some training subset of nodes is called
the within-network classification. It may happen that none of the labels of the
nodes is known and additionally there is no information about number of classes
to which nodes can be assigned. In such a case a subset of nodes has to be
selected for initial label acquisition. The question that arises is: "labels of
which nodes should be collected and used for learning in order to provide the
best classification accuracy for the whole network?". Active learning and
inference is a practical framework to study this problem.
A set of methods for active learning and inference for within network
classification is proposed and validated. The utility score calculation for
each node based on network structure is the first step in the process. The
scores make it possible to rank the nodes. Based on the ranking, a set of nodes, for
which the labels are acquired, is selected (e.g. by taking top or bottom N from
the ranking). The new measure-neighbour methods proposed in the paper suggest
not obtaining labels of nodes from the ranking but rather acquiring labels of
their neighbours. The paper examines 29 distinct formulations of utility score
and selection methods reporting their impact on the results of two collective
classification algorithms: Iterative Classification Algorithm and Loopy Belief
Propagation.
We advocate that the accuracy of the presented methods depends on the structural
properties of the examined network. We claim that measure-neighbour methods
will work better than the regular methods for networks with higher clustering
coefficient and worse than regular methods for networks with low clustering
coefficient. According to our hypothesis, based on clustering coefficient we
are able to recommend appropriate active learning and inference method.
| Tomasz Kajdanowicz, Rados{\l}aw Michalski, Katarzyna Musia{\l},
Przemys{\l}aw Kazienko | null | 1510.01270 | null | null |
Tight Variational Bounds via Random Projections and I-Projections | cs.LG | Information projections are the key building block of variational inference
algorithms and are used to approximate a target probabilistic model by
projecting it onto a family of tractable distributions. In general, there is no
guarantee on the quality of the approximation obtained. To overcome this issue,
we introduce a new class of random projections to reduce the dimensionality and
hence the complexity of the original model. In the spirit of random
projections, the projection preserves (with high probability) key properties of
the target distribution. We show that information projections can be combined
with random projections to obtain provable guarantees on the quality of the
approximation obtained, regardless of the complexity of the original model. We
demonstrate empirically that augmenting mean field with a random projection
step dramatically improves partition function and marginal probability
estimates, both on synthetic and real world data.
| Lun-Kai Hsu, Tudor Achim, Stefano Ermon | null | 1510.01308 | null | null |
Batch Normalized Recurrent Neural Networks | stat.ML cs.LG cs.NE | Recurrent Neural Networks (RNNs) are powerful models for sequential data that
have the potential to learn long-term dependencies. However, they are
computationally expensive to train and difficult to parallelize. Recent work
has shown that normalizing intermediate representations of neural networks can
significantly improve convergence rates in feedforward neural networks. In
particular, batch normalization, which uses mini-batch statistics to
standardize features, was shown to significantly reduce training time. In this
paper, we show that applying batch normalization to the hidden-to-hidden
transitions of our RNNs doesn't help the training procedure. We also show that
when applied to the input-to-hidden transitions, batch normalization can lead
to a faster convergence of the training criterion but doesn't seem to improve
the generalization performance on both our language modelling and speech
recognition tasks. All in all, applying batch normalization to RNNs turns out
to be more challenging than applying it to feedforward networks, but certain
variants of it can still be beneficial.
| C\'esar Laurent, Gabriel Pereyra, Phil\'emon Brakel, Ying Zhang and
Yoshua Bengio | null | 1510.01378 | null | null |
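To make the distinction in the abstract concrete, here is a minimal numpy sketch of a vanilla tanh RNN in which batch normalization is applied only to the input-to-hidden term, leaving the hidden-to-hidden transition untouched. The paper's actual models and normalization statistics (e.g. per-time-step versus shared, and how running averages are kept for inference) are not given in the abstract, so the details below are assumptions for illustration.

```python
import numpy as np

def batch_norm(z, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize z (batch, hidden) over the batch dimension, then scale/shift."""
    mu = z.mean(axis=0, keepdims=True)
    var = z.var(axis=0, keepdims=True)
    return gamma * (z - mu) / np.sqrt(var + eps) + beta

def rnn_forward_bn_input(X, Wx, Wh, b):
    """Vanilla tanh RNN with BN on the input-to-hidden pre-activation only:
        h_t = tanh( BN(x_t Wx) + h_{t-1} Wh + b )
    X: (time, batch, in_dim); returns hidden states (time, batch, hid_dim)."""
    T, B, _ = X.shape
    H = Wh.shape[0]
    h = np.zeros((B, H))
    hs = []
    for t in range(T):
        pre = batch_norm(X[t] @ Wx) + h @ Wh + b   # BN only touches the input term
        h = np.tanh(pre)
        hs.append(h)
    return np.stack(hs)

T, B, D, H = 10, 4, 8, 16
rng = np.random.default_rng(0)
hs = rnn_forward_bn_input(rng.standard_normal((T, B, D)),
                          rng.standard_normal((D, H)) * 0.1,
                          rng.standard_normal((H, H)) * 0.1,
                          np.zeros(H))
print(hs.shape)  # (10, 4, 16)
```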
Improved Estimation of Class Prior Probabilities through Unlabeled Data | stat.ML cs.LG | Work in the classification literature has shown that in computing a
classification function, one need not know the class membership of all
observations in the training set; the unlabeled observations still provide
information on the marginal distribution of the feature set, and can thus
contribute to increased classification accuracy for future observations. The
present paper will show that this scheme can also be used for the estimation of
class prior probabilities, which would be very useful in applications in which
it is difficult or expensive to determine class membership. Both parametric and
nonparametric estimators are developed. Asymptotic distributions of the
estimators are derived, and it is proven that the use of the unlabeled
observations does reduce asymptotic variance. This methodology is also extended
to the estimation of subclass probabilities.
| Norman Matloff | null | 1510.01422 | null | null |
A Waveform Representation Framework for High-quality Statistical
Parametric Speech Synthesis | cs.SD cs.LG | State-of-the-art statistical parametric speech synthesis (SPSS) generally
uses a vocoder to represent speech signals and parameterize them into features
for subsequent modeling. Magnitude spectrum has been a dominant feature over
the years. Although perceptual studies have shown that phase spectrum is
essential to the quality of synthesized speech, it is often ignored by using a
minimum phase filter during synthesis and the speech quality suffers. To bypass
this bottleneck in vocoded speech, this paper proposes a phase-embedded
waveform representation framework and establishes a magnitude-phase joint
modeling platform for high-quality SPSS. Our experiments on waveform
reconstruction show that the performance is better than that of the widely-used
STRAIGHT. Furthermore, the proposed modeling and synthesis platform outperforms
a leading-edge, vocoded, deep bidirectional long short-term memory recurrent
neural network (DBLSTM-RNN)-based baseline system on various objective
evaluation metrics.
| Bo Fan, Siu Wa Lee, Xiaohai Tian, Lei Xie and Minghui Dong | null | 1510.01443 | null | null |
Stochastic subGradient Methods with Linear Convergence for Polyhedral
Convex Optimization | cs.LG math.OC | In this paper, we show that simple {Stochastic} subGradient Decent methods
with multiple Restarting, named {\bf RSGD}, can achieve a \textit{linear
convergence rate} for a class of non-smooth and non-strongly convex
optimization problems where the epigraph of the objective function is a
polyhedron, to which we refer as {\bf polyhedral convex optimization}. Its
applications in machine learning include $\ell_1$ constrained or regularized
piecewise linear loss minimization and submodular function minimization. To the
best of our knowledge, this is the first result on the linear convergence rate
of stochastic subgradient methods for non-smooth and non-strongly convex
optimization problems.
| Tianbao Yang, Qihang Lin | null | 1510.01444 | null | null |
Local Rademacher Complexity Bounds based on Covering Numbers | cs.AI cs.LG stat.ML | This paper provides a general result on controlling local Rademacher
complexities, which relates, in an elegant form, the complexities
constrained by the expected norm to the corresponding ones constrained by
the empirical norm. This result is convenient to apply in real applications and
could yield refined local Rademacher complexity bounds for function classes
satisfying general entropy conditions. We demonstrate the power of our
complexity bounds by applying them to derive effective generalization error
bounds.
| Yunwen Lei, Lixin Ding and Yingzhou Bi | null | 1510.01463 | null | null |
Bayesian Markov Blanket Estimation | stat.ML cs.LG | This paper considers a Bayesian view for estimating a sub-network in a Markov
random field. The sub-network corresponds to the Markov blanket of a set of
query variables, where the set of potential neighbours is large. We
factorize the posterior such that the Markov blanket is conditionally
independent of the network of the potential neighbours. By exploiting this
blockwise decoupling, we derive analytic expressions for posterior
conditionals. Subsequently, we develop an inference scheme which makes use of
the factorization. As a result, estimation of a sub-network is possible without
inferring an entire network. Since the resulting Gibbs sampler scales linearly
with the number of variables, it can handle relatively large neighbourhoods.
The proposed scheme results in faster convergence and superior mixing of the
Markov chain than existing Bayesian network estimation techniques.
| Dinu Kaufmann, Sonali Parbhoo, Aleksander Wieczorek, Sebastian Keller,
David Adametz, Volker Roth | null | 1510.01485 | null | null |
Quantifying Emergent Behavior of Autonomous Robots | cs.IT cs.LG cs.RO math.DS math.IT | Quantifying behaviors of robots which were generated autonomously from
task-independent objective functions is an important prerequisite for objective
comparisons of algorithms and movements of animals. The temporal sequence of
such a behavior can be considered as a time series and hence complexity
measures developed for time series are natural candidates for its
quantification. The predictive information and the excess entropy are such
complexity measures. They measure the amount of information the past contains
about the future and thus quantify the nonrandom structure in the temporal
sequence. However, when using these measures for systems with continuous states
one has to deal with the fact that their values will depend on the resolution
with which the systems states are observed. For deterministic systems both
measures will diverge with increasing resolution. We therefore propose a new
decomposition of the excess entropy in resolution dependent and resolution
independent parts and discuss how they depend on the dimensionality of the
dynamics, correlations and the noise level. For the practical estimation we
propose to use estimates based on the correlation integral instead of the
direct estimation of the mutual information with the nearest-neighbour
algorithm of Kraskov et al. (2004), because the latter allows less control of
the scale dependencies. Using our algorithm we are able to show
how autonomous learning generates behavior of increasing complexity with
increasing learning duration.
| Georg Martius and Eckehard Olbrich | 10.3390/e17107266 | 1510.01495 | null | null |
Population-Contrastive-Divergence: Does Consistency help with RBM
training? | cs.LG cs.NE stat.ML | Estimating the log-likelihood gradient with respect to the parameters of a
Restricted Boltzmann Machine (RBM) typically requires sampling using Markov
Chain Monte Carlo (MCMC) techniques. To save computation time, the Markov
chains are only run for a small number of steps, which leads to a biased
estimate. This bias can cause RBM training algorithms such as Contrastive
Divergence (CD) learning to deteriorate. We adopt the idea behind Population
Monte Carlo (PMC) methods to devise a new RBM training algorithm termed
Population-Contrastive-Divergence (pop-CD). Compared to CD, it leads to a
consistent estimate and may have a significantly lower bias. Its computational
overhead is negligible compared to CD. However, the variance of the gradient
estimate increases. We experimentally show that pop-CD can significantly
outperform CD. In many cases, we observed a smaller bias and achieved higher
log-likelihood values. However, when the RBM distribution has many hidden
neurons, the consistent estimate of pop-CD may still have a considerable bias
and the variance of the gradient estimate requires a smaller learning rate.
Thus, despite its superior theoretical properties, it is not advisable to use
pop-CD in its current form on large problems.
| Oswin Krause, Asja Fischer, Christian Igel | null | 1510.01624 | null | null |
Large-scale subspace clustering using sketching and validation | cs.LG cs.CV stat.ML | The nowadays massive amounts of generated and communicated data present major
challenges in their processing. While capable of successfully classifying
nonlinearly separable objects in various settings, subspace clustering (SC)
methods incur prohibitively high computational complexity when processing
large-scale data. Inspired by the random sampling and consensus (RANSAC)
approach to robust regression, the present paper introduces a randomized scheme
for SC, termed sketching and validation (SkeVa-)SC, tailored for large-scale
data. At the heart of SkeVa-SC lies a randomized scheme for approximating the
underlying probability density function of the observed data by kernel
smoothing arguments. Sparsity in data representations is also exploited to
reduce the computational burden of SC, while achieving high clustering
accuracy. Performance analysis as well as extensive numerical tests on
synthetic and real data corroborate the potential of SkeVa-SC and its
competitive performance relative to state-of-the-art scalable SC approaches.
Keywords: Subspace clustering, big data, kernel smoothing, randomization,
sketching, validation, sparsity.
| Panagiotis A. Traganitis, Konstantinos Slavakis, Georgios B. Giannakis | null | 1510.01628 | null | null |
Structured Transforms for Small-Footprint Deep Learning | stat.ML cs.CV cs.LG | We consider the task of building compact deep learning pipelines suitable for
deployment on storage and power constrained mobile devices. We propose a
unified framework to learn a broad family of structured parameter matrices that
are characterized by the notion of low displacement rank. Our structured
transforms admit fast function and gradient evaluation, and span a rich range
of parameter sharing configurations whose statistical modeling capacity can be
explicitly tuned along a continuum from structured to unstructured.
Experimental results show that these transforms can significantly accelerate
inference and forward/backward passes during training, and offer superior
accuracy-compactness-speed tradeoffs in comparison to a number of existing
techniques. In keyword spotting applications in mobile speech recognition, our
methods are much more effective than standard linear low-rank bottleneck layers
and nearly retain the performance of state of the art models, while providing
more than 3.5-fold compression.
| Vikas Sindhwani and Tara N. Sainath and Sanjiv Kumar | null | 1510.01722 | null | null |
Efficient Per-Example Gradient Computations | stat.ML cs.LG | This technical report describes an efficient technique for computing the norm
of the gradient of the loss function for a neural network with respect to its
parameters. This gradient norm can be computed efficiently for every example.
| Ian Goodfellow | null | 1510.01799 | null | null |
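For the per-example gradient computation described in the entry above, the key factorization for a fully connected layer can be sketched in a few lines of NumPy. This is an illustrative sketch under assumed layer shapes, not the report's own code: for example i with layer input h_i and pre-activation gradient d_i, the per-example weight gradient is outer(h_i, d_i), whose Frobenius norm factorizes as ||h_i||*||d_i||, so the norm is obtained without materializing any per-example gradient.

import numpy as np

def per_example_grad_norms(H, D):
    # H: (batch, fan_in) layer inputs; D: (batch, fan_out) pre-activation gradients.
    # ||outer(h_i, d_i)||_F = ||h_i|| * ||d_i||, computed for every example at once.
    return np.sqrt(np.sum(H ** 2, axis=1) * np.sum(D ** 2, axis=1))

# Sanity check against the naive per-example computation.
rng = np.random.default_rng(0)
H = rng.standard_normal((4, 5))
D = rng.standard_normal((4, 3))
naive = np.array([np.linalg.norm(np.outer(h, d)) for h, d in zip(H, D)])
assert np.allclose(per_example_grad_norms(H, D), naive)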
Stochastic Optimization for Deep CCA via Nonlinear Orthogonal Iterations | cs.LG | Deep CCA is a recently proposed deep neural network extension to the
traditional canonical correlation analysis (CCA), and has been successful for
multi-view representation learning in several domains. However, stochastic
optimization of the deep CCA objective is not straightforward, because it does
not decouple over training examples. Previous optimizers for deep CCA are
either batch-based algorithms or stochastic optimization using large
minibatches, which can have high memory consumption. In this paper, we tackle
the problem of stochastic optimization for deep CCA with small minibatches,
based on an iterative solution to the CCA objective, and show that we can
achieve as good performance as previous optimizers and thus alleviate the
memory requirement.
| Weiran Wang, Raman Arora, Karen Livescu, Nathan Srebro | null | 1510.02054 | null | null |
Data-Efficient Learning of Feedback Policies from Image Pixels using
Deep Dynamical Models | cs.AI cs.CV cs.LG stat.ML | Data-efficient reinforcement learning (RL) in continuous state-action spaces
using very high-dimensional observations remains a key challenge in developing
fully autonomous systems. We consider a particularly important instance of this
challenge, the pixels-to-torques problem, where an RL agent learns a
closed-loop control policy ("torques") from pixel information only. We
introduce a data-efficient, model-based reinforcement learning algorithm that
learns such a closed-loop policy directly from pixel information. The key
ingredient is a deep dynamical model for learning a low-dimensional feature
embedding of images jointly with a predictive model in this low-dimensional
feature space. Joint learning is crucial for long-term predictions, which lie
at the core of the adaptive nonlinear model predictive control strategy that we
use for closed-loop control. Compared to state-of-the-art RL methods for
continuous states and actions, our approach learns quickly, scales to
high-dimensional state spaces, is lightweight and an important step toward
fully autonomous end-to-end learning from pixels to torques.
| John-Alexander M. Assael, Niklas Wahlstr\"om, Thomas B. Sch\"on, Marc
Peter Deisenroth | null | 1510.02173 | null | null |
Empirical Analysis of Sampling Based Estimators for Evaluating RBMs | cs.LG stat.ML | The Restricted Boltzmann Machines (RBM) can be used either as classifiers or
as generative models. The quality of the generative RBM is measured through the
average log-likelihood on test data. Due to the high computational complexity
of evaluating the partition function, exact calculation of test log-likelihood
is very difficult. In recent years some estimation methods are suggested for
approximate computation of test log-likelihood. In this paper we present an
empirical comparison of the main estimation methods, namely, the AIS algorithm
for estimating the partition function, the CSL method for directly estimating
the log-likelihood, and the RAISE algorithm that combines these two ideas. We
use the MNIST data set to learn the RBM and then compare these methods for
estimating the test log-likelihood.
| Vidyadhar Upadhya, P.S. Sastry | null | 1510.02255 | null | null |
Texture Modelling with Nested High-order Markov-Gibbs Random Fields | cs.CV cs.LG stat.ML | Currently, Markov-Gibbs random field (MGRF) image models which include
high-order interactions are almost always built by modelling responses of a
stack of local linear filters. Actual interaction structure is specified
implicitly by the filter coefficients. In contrast, we learn an explicit
high-order MGRF structure by considering the learning process in terms of
general exponential family distributions nested over base models, so that
potentials added later can build on previous ones. We add new features
relatively rapidly by skipping the costly optimisation of parameters.
We introduce the use of local binary patterns as features in MGRF texture
models, and generalise them by learning offsets to the surrounding pixels.
These prove effective as high-order features, and are fast to compute. Several
schemes for selecting high-order features by composition or search of a small
subclass are compared. Additionally we present a simple modification of the
maximum likelihood as a texture modelling-specific objective function which
aims to improve generalisation by local windowing of statistics.
The proposed method was experimentally evaluated by learning high-order MGRF
models for a broad selection of complex textures and then performing texture
synthesis, and succeeded on much of the continuum from stochastic through
irregularly structured to near-regular textures. Learning interaction structure
is very beneficial for textures with large-scale structure, although those with
complex irregular structure still pose difficulties. The texture models were
also quantitatively evaluated on two tasks and found to be competitive with
other works: grading of synthesised textures by a panel of observers; and
comparison against several recent MGRF models by evaluation on a constrained
inpainting task.
| Ralph Versteegen, Georgy Gimel'farb, Patricia Riddle | 10.1016/j.cviu.2015.11.003 | 1510.02364 | null | null |
Mapping Unseen Words to Task-Trained Embedding Spaces | cs.CL cs.LG | We consider the supervised training setting in which we learn task-specific
word embeddings. We assume that we start with initial embeddings learned from
unlabelled data and update them to learn task-specific embeddings for words in
the supervised training data. However, for new words in the test set, we must
use either their initial embeddings or a single unknown embedding, which often
leads to errors. We address this by learning a neural network to map from
initial embeddings to the task-specific embedding space, via a multi-loss
objective function. The technique is general, but here we demonstrate its use
for improved dependency parsing (especially for sentences with
out-of-vocabulary words), as well as for downstream improvements on sentiment
analysis.
| Pranava Swaroop Madhyastha, Mohit Bansal, Kevin Gimpel and Karen
Livescu | null | 1510.02387 | null | null |
Distilling Model Knowledge | stat.ML cs.LG | Top-performing machine learning systems, such as deep neural networks, large
ensembles and complex probabilistic graphical models, can be expensive to
store, slow to evaluate and hard to integrate into larger systems. Ideally, we
would like to replace such cumbersome models with simpler models that perform
equally well.
In this thesis, we study knowledge distillation, the idea of extracting the
knowledge contained in a complex model and injecting it into a more convenient
model. We present a general framework for knowledge distillation, whereby a
convenient model of our choosing learns how to mimic a complex model, by
observing the latter's behaviour and being penalized whenever it fails to
reproduce it.
We develop our framework within the context of three distinct machine
learning applications: (a) model compression, where we compress large
discriminative models, such as ensembles of neural networks, into models of
much smaller size; (b) compact predictive distributions for Bayesian inference,
where we distil large bags of MCMC samples into compact predictive
distributions in closed form; (c) intractable generative models, where we
distil unnormalizable models such as RBMs into tractable models such as NADEs.
We contribute to the state of the art with novel techniques and ideas. In
model compression, we describe and implement derivative matching, which allows
for better distillation when data is scarce. In compact predictive
distributions, we introduce online distillation, which allows for significant
savings in memory. Finally, in intractable generative models, we show how to
use distilled models to robustly estimate intractable quantities of the
original model, such as its intractable partition function.
| George Papamakarios | null | 1510.02437 | null | null |
Uniform Learning in a Deep Neural Network via "Oddball" Stochastic
Gradient Descent | cs.LG | When training deep neural networks, it is typically assumed that the training
examples are uniformly difficult to learn. Or, to restate, it is assumed that
the training error will be uniformly distributed across the training examples.
Based on these assumptions, each training example is used an equal number of
times. However, this assumption may not be valid in many cases. "Oddball SGD"
(novelty-driven stochastic gradient descent) was recently introduced to drive
training probabilistically according to the error distribution - training
frequency is proportional to training error magnitude. In this article, using a
deep neural network to encode a video, we show that oddball SGD can be used to
enforce uniform error across the training set.
| Andrew J.R. Simpson | null | 1510.02442 | null | null |
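The error-proportional sampling behind oddball SGD can be sketched as follows; the error values, refresh schedule, and batch size here are placeholder assumptions rather than the article's video-encoding setup.

import numpy as np

def sample_minibatch(errors, batch_size, rng):
    # Training frequency proportional to current training error magnitude.
    p = errors / errors.sum()
    return rng.choice(len(errors), size=batch_size, p=p)

rng = np.random.default_rng(0)
errors = rng.uniform(0.01, 1.0, size=1000)   # current per-example error magnitudes
batch = sample_minibatch(errors, batch_size=32, rng=rng)
# ...take one SGD step on `batch`, then refresh errors[batch] from the new losses...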
New Optimisation Methods for Machine Learning | cs.LG stat.ML | A thesis submitted for the degree of Doctor of Philosophy of The Australian
National University.
In this work we introduce several new optimisation methods for problems in
machine learning. Our algorithms broadly fall into two categories: optimisation
of finite sums and of graph structured objectives. The finite sum problem is
simply the minimisation of objective functions that are naturally expressed as
a summation over a large number of terms, where each term has a similar or
identical weight. Such objectives most often appear in machine learning in the
empirical risk minimisation framework in the non-online learning setting. The
second category, that of graph structured objectives, consists of objectives
that result from applying maximum likelihood to Markov random field models.
Unlike the finite sum case, all the non-linearity is contained within a
partition function term, which does not readily decompose into a summation.
For the finite sum problem, we introduce the Finito and SAGA algorithms, as
well as variants of each.
For graph-structured problems, we take three complementary approaches. We
look at learning the parameters for a fixed structure, learning the structure
independently, and learning both simultaneously. Specifically, for the combined
approach, we introduce a new method for encouraging graph structures with the
"scale-free" property. For the structure learning problem, we establish
SHORTCUT, an O(n^{2.5}) expected-time approximate structure learning method for
Gaussian graphical models. For problems where the structure is known but the
parameters unknown, we introduce an approximate maximum likelihood learning
algorithm that is capable of learning a useful subclass of Gaussian graphical
models.
| Aaron Defazio | null | 1510.02533 | null | null |
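As an illustration of one of the finite-sum methods named in the entry above, the SAGA update can be sketched on a toy least-squares problem; the problem instance, step size, and iteration count are arbitrary choices for the example and are not taken from the thesis.

import numpy as np

def saga(X, y, step, iters, rng):
    # Minimises (1/n) * sum_i 0.5 * (x_i . w - y_i)^2 with the SAGA update:
    # w <- w - step * (f'_j(w) - g_j + mean(g)), then store f'_j(w) in slot j.
    n, d = X.shape
    w = np.zeros(d)
    grads = X * (X @ w - y)[:, None]         # stored gradient table, one row per example
    g_avg = grads.mean(axis=0)
    for _ in range(iters):
        j = rng.integers(n)
        g_new = X[j] * (X[j] @ w - y[j])
        w -= step * (g_new - grads[j] + g_avg)
        g_avg += (g_new - grads[j]) / n      # keep the running average consistent
        grads[j] = g_new
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
w_true = rng.standard_normal(5)
w_hat = saga(X, X @ w_true, step=0.01, iters=20000, rng=rng)
print(np.linalg.norm(w_hat - w_true))        # should be close to zero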
Functional Frank-Wolfe Boosting for General Loss Functions | stat.ML cs.LG | Boosting is a generic learning method for classification and regression. Yet,
as the number of base hypotheses becomes larger, boosting can lead to a
deterioration of test performance. Overfitting is an important and ubiquitous
phenomenon, especially in regression settings. To avoid overfitting, we
consider using $l_1$ regularization. We propose a novel Frank-Wolfe type
boosting algorithm (FWBoost) applied to general loss functions. By using
exponential loss, the FWBoost algorithm can be rewritten as a variant of
AdaBoost for binary classification. FWBoost algorithms have exactly the same
form as existing boosting methods, in terms of making calls to a base learning
algorithm with different weights update. This direct connection between
boosting and Frank-Wolfe yields a new algorithm that is as practical as
existing boosting methods but with new guarantees and rates of convergence.
Experimental results show that the test performance of FWBoost is not degraded
with larger rounds in boosting, which is consistent with the theoretical
analysis.
| Chu Wang and Yingfei Wang and Weinan E and Robert Schapire | null | 1510.02558 | null | null |
Technical Report of Participation in Higgs Boson Machine Learning
Challenge | cs.LG | This report gives a detailed description of the approach and
methodologies adopted as part of competing in the Higgs Boson Machine Learning
Competition hosted by Kaggle Inc. and organized by CERN et al. It briefly
describes the theoretical background of the problem and the motivation for
taking part in the competition. Furthermore, the various machine learning
models and algorithms analyzed and implemented during the 4 month period of
participation are discussed and compared. Special attention is paid to the Deep
Learning techniques and architectures implemented from scratch using Python and
NumPy for this competition.
| S. Raza Ahmad | null | 1510.02674 | null | null |
Some Theory For Practical Classifier Validation | stat.ML cs.LG | We compare and contrast two approaches to validating a trained classifier
while using all in-sample data for training. One is simultaneous validation
over an organized set of hypotheses (SVOOSH), the well-known method that began
with VC theory. The other is withhold and gap (WAG). WAG withholds a validation
set, trains a holdout classifier on the remaining data, uses the validation
data to validate that classifier, then adds the rate of disagreement between
the holdout classifier and one trained using all in-sample data, which is an
upper bound on the difference in error rates. We show that complex hypothesis
classes and limited training data can make WAG a favorable alternative.
| Eric Bax, Ya Le | null | 1510.02676 | null | null |
Feedforward Sequential Memory Neural Networks without Recurrent Feedback | cs.NE cs.CL cs.LG | We introduce a new structure for memory neural networks, called feedforward
sequential memory networks (FSMN), which can learn long-term dependency without
using recurrent feedback. The proposed FSMN is a standard feedforward neural
network equipped with learnable sequential memory blocks in the hidden layers.
In this work, we have applied FSMN to several language modeling (LM) tasks.
Experimental results have shown that the memory blocks in FSMN can learn
effective representations of long history, and that FSMN-based language
models can significantly outperform not only feedforward neural
network (FNN) based LMs but also the popular recurrent neural network (RNN)
LMs.
| ShiLiang Zhang, Hui Jiang, Si Wei, LiRong Dai | null | 1510.02693 | null | null |
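One plausible reading of such a sequential memory block is a learnable tapped-delay line over recent hidden activations; the sketch below rests on that assumption (scalar tap coefficients, a fixed look-back window) and may differ from the paper's exact parameterization.

import numpy as np

def memory_block(H, a):
    # H: (T, d) hidden activations over time; a: (N+1,) learnable tap coefficients.
    # Returns (T, d) memory activations m_t = sum_{i=0..N} a[i] * h_{t-i},
    # i.e. long-range context without any recurrent feedback.
    T, _ = H.shape
    N = len(a) - 1
    M = np.zeros_like(H)
    for t in range(T):
        for i in range(min(N, t) + 1):
            M[t] += a[i] * H[t - i]
    return M

H = np.random.default_rng(0).standard_normal((7, 4))
M = memory_block(H, a=np.array([1.0, 0.5, 0.25]))   # looks back two steps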
Conditional Risk Minimization for Stochastic Processes | stat.ML cs.LG | We study the task of learning from non-i.i.d. data. In particular, we aim at
learning predictors that minimize the conditional risk for a stochastic
process, i.e. the expected loss of the predictor on the next point conditioned
on the set of training samples observed so far. For non-i.i.d. data, the
training set contains information about the upcoming samples, so learning with
respect to the conditional distribution can be expected to yield better
predictors than one obtains from the classical setting of minimizing the
marginal risk. Our main contribution is a practical estimator for the
conditional risk based on the theory of non-parametric time-series prediction,
and a finite sample concentration bound that establishes uniform convergence of
the estimator to the true conditional risk under certain regularity assumptions
on the process.
| Alexander Zimin, Christoph H. Lampert | null | 1510.02706 | null | null |
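As an illustrative formalization of the quantity studied above (the notation is assumed here, not quoted from the paper), the conditional risk of a predictor $h$ after observing $Z_1, \dots, Z_n$ from the process is
$$ R_n(h) \;=\; \mathbb{E}\big[\, \ell(h, Z_{n+1}) \,\big|\, Z_1, \dots, Z_n \,\big], $$
in contrast to the marginal risk $\mathbb{E}[\ell(h, Z)]$ minimized in the classical i.i.d. setting.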
Large-scale Artificial Neural Network: MapReduce-based Deep Learning | cs.DC cs.LG cs.NE | Faced with the continuously increasing scale of data, the original
back-propagation neural network based machine learning algorithm faces two
non-trivial challenges: the huge amount of data makes it difficult to maintain
both efficiency and accuracy, and redundant data aggravates the system
workload. This project is
mainly focused on the solution to the issues above, combining deep learning
algorithm with cloud computing platform to deal with large-scale data. A
MapReduce-based handwriting character recognizer will be designed in this
project to verify the efficiency improvement this mechanism will achieve on
training and practical large-scale data. Careful discussion and experiment will
be developed to illustrate how the deep learning algorithm trains on
handwritten digit data, how MapReduce is implemented on the deep learning neural
network, and why this combination accelerates computation. Besides performance,
the scalability and robustness will be mentioned in this report as well. Our
system comes with two demonstration software that visually illustrates our
handwritten digit recognition/encoding application.
| Kairan Sun, Xu Wei, Gengtao Jia, Risheng Wang, Ruizhi Li | null | 1510.02709 | null | null |
Early Inference in Energy-Based Models Approximates Back-Propagation | cs.LG | We show that Langevin MCMC inference in an energy-based model with latent
variables has the property that the early steps of inference, starting from a
stationary point, correspond to propagating error gradients into internal
layers, similarly to back-propagation. The error that is back-propagated is
with respect to visible units that have received an outside driving force
pushing them away from the stationary point. Back-propagated error gradients
correspond to temporal derivatives of the activation of hidden units. This
observation could be an element of a theory for explaining how brains perform
credit assignment in deep hierarchies as efficiently as back-propagation does.
In this theory, the continuous-valued latent variables correspond to averaged
voltage potential (across time, spikes, and possibly neurons in the same
minicolumn), and neural computation corresponds to approximate inference and
error back-propagation at the same time.
| Yoshua Bengio and Asja Fischer | null | 1510.02777 | null | null |
On the Complexity of Inner Product Similarity Join | cs.DS cs.DB cs.LG | A number of tasks in classification, information retrieval, recommendation
systems, and record linkage reduce to the core problem of inner product
similarity join (IPS join): identifying pairs of vectors in a collection that
have a sufficiently large inner product. IPS join is well understood when
vectors are normalized and some approximation of inner products is allowed.
However, the general case where vectors may have any length appears much more
challenging. Recently, new upper bounds based on asymmetric locality-sensitive
hashing (ALSH) and asymmetric embeddings have emerged, but little has been
known on the lower bound side. In this paper we initiate a systematic study of
inner product similarity join, showing new lower and upper bounds. Our main
results are:
* Approximation hardness of IPS join in subquadratic time, assuming the
strong exponential time hypothesis.
* New upper and lower bounds for (A)LSH-based algorithms. In particular, we
show that asymmetry can be avoided by relaxing the LSH definition to only
consider the collision probability of distinct elements.
* A new indexing method for IPS based on linear sketches, implying that our
hardness results are not far from being tight.
Our technical contributions include new asymmetric embeddings that may be of
independent interest. At the conceptual level we strive to provide greater
clarity, for example by distinguishing among signed and unsigned variants of
IPS join and shedding new light on the effect of asymmetry.
| Thomas D. Ahle and Rasmus Pagh and Ilya Razenshteyn and Francesco
Silvestri | 10.1145/2902251.2902285 | 1510.02824 | null | null |
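For orientation, the IPS join problem itself admits a one-line quadratic-time definition; the sketch below is that brute force (with made-up sizes and threshold), and the entry above is precisely about when it can or cannot be beaten.

import numpy as np

def ips_join(V, s):
    # Report all distinct pairs (i, j) with <v_i, v_j> >= s.
    G = V @ V.T                               # all pairwise inner products
    i, j = np.where(np.triu(G >= s, k=1))     # upper triangle: distinct pairs only
    return list(zip(i.tolist(), j.tolist()))

V = np.random.default_rng(0).standard_normal((100, 8))
pairs = ips_join(V, s=6.0)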
On the Definiteness of Earth Mover's Distance and Its Relation to Set
Intersection | cs.LG stat.ML | Positive definite kernels are an important tool in machine learning that
enable efficient solutions to otherwise difficult or intractable problems by
implicitly linearizing the problem geometry. In this paper we develop a
set-theoretic interpretation of the Earth Mover's Distance (EMD) and propose
Earth Mover's Intersection (EMI), a positive definite analog to EMD for sets of
different sizes. We provide conditions under which EMD or certain
approximations to EMD are negative definite. We also present a
positive-definite-preserving transformation that can be applied to any kernel
and can also be used to derive positive definite EMD-based kernels and show
that the Jaccard index is simply the result of this transformation. Finally, we
evaluate kernels based on EMI and the proposed transformation versus EMD in
various computer vision tasks and show that EMD is generally inferior even with
indefinite kernel techniques.
| Andrew Gardner, Christian A. Duncan, Jinko Kanno, and Rastko R. Selmic | 10.1109/TCYB.2017.2761798 | 1510.02833 | null | null |
Active Learning from Weak and Strong Labelers | cs.LG stat.ML | An active learner is given a hypothesis class, a large set of unlabeled
examples and the ability to interactively query an oracle for the labels of a subset
of these examples; the goal of the learner is to learn a hypothesis in the
class that fits the data well by making as few label queries as possible.
This work addresses active learning with labels obtained from strong and weak
labelers, where in addition to the standard active learning setting, we have an
extra weak labeler which may occasionally provide incorrect labels. An example
is learning to classify medical images where either expensive labels may be
obtained from a physician (oracle or strong labeler), or cheaper but
occasionally incorrect labels may be obtained from a medical resident (weak
labeler). Our goal is to learn a classifier with low error on data labeled by
the oracle, while using the weak labeler to reduce the number of label queries
made to this labeler. We provide an active learning algorithm for this setting,
establish its statistical consistency, and analyze its label complexity to
characterize when it can provide label savings over using the strong labeler
alone.
| Chicheng Zhang, Kamalika Chaudhuri | null | 1510.02847 | null | null |
AtomNet: A Deep Convolutional Neural Network for Bioactivity Prediction
in Structure-based Drug Discovery | cs.LG cs.NE q-bio.BM stat.ML | Deep convolutional neural networks comprise a subclass of deep neural
networks (DNN) with a constrained architecture that leverages the spatial and
temporal structure of the domain they model. Convolutional networks achieve the
best predictive performance in areas such as speech and image recognition by
hierarchically composing simple local features into complex models. Although
DNNs have been used in drug discovery for QSAR and ligand-based bioactivity
predictions, none of these models have benefited from this powerful
convolutional architecture. This paper introduces AtomNet, the first
structure-based, deep convolutional neural network designed to predict the
bioactivity of small molecules for drug discovery applications. We demonstrate
how to apply the convolutional concepts of feature locality and hierarchical
composition to the modeling of bioactivity and chemical interactions. In
further contrast to existing DNN techniques, we show that AtomNet's application
of local convolutional filters to structural target information successfully
predicts new active molecules for targets with no previously known modulators.
Finally, we show that AtomNet outperforms previous docking approaches on a
diverse set of benchmarks by a large margin, achieving an AUC greater than 0.9
on 57.8% of the targets in the DUDE benchmark.
| Izhar Wallach and Michael Dzamba and Abraham Heifets | null | 1510.02855 | null | null |
TSEB: More Efficient Thompson Sampling for Policy Learning | cs.LG | In model-based solution approaches to the problem of learning in an unknown
environment, exploring to learn the model parameters takes a toll on the
regret. The optimal performance with respect to regret or PAC bounds is
achievable, if the algorithm exploits with respect to reward or explores with
respect to the model parameters, respectively. In this paper, we propose TSEB,
a Thompson Sampling based algorithm with adaptive exploration bonus that aims
to solve the problem with tighter PAC guarantees, while being cautious on the
regret as well. The proposed approach maintains distributions over the model
parameters which are successively refined with more experience. At any given
time, the agent solves a model sampled from this distribution, and the sampled
reward distribution is skewed by an exploration bonus in order to generate more
informative exploration. The policy obtained by solving the sampled model is then used to generate more
experience that helps in updating the posterior over the model parameters. We
provide a detailed analysis of the PAC guarantees, and convergence of the
proposed approach. We show that our adaptive exploration bonus encourages the
additional exploration required for better PAC bounds on the algorithm. We
provide empirical analysis on two different simulated domains.
| P. Prasanna, Sarath Chandar, Balaraman Ravindran | null | 1510.02874 | null | null |
Attend, Adapt and Transfer: Attentive Deep Architecture for Adaptive
Transfer from multiple sources in the same domain | cs.AI cs.LG | Transferring knowledge from prior source tasks in solving a new target task
can be useful in several learning applications. The application of transfer
poses two serious challenges which have not been adequately addressed. First,
the agent should be able to avoid negative transfer, which happens when the
transfer hampers or slows down the learning instead of helping it. Second, the
agent should be able to selectively transfer, which is the ability to select
and transfer from different and multiple source tasks for different parts of
the state space of the target task. We propose A2T (Attend, Adapt and
Transfer), an attentive deep architecture which adapts and transfers from these
source tasks. Our model is generic enough to effect transfer of either policies
or value functions. Empirical evaluations on different learning algorithms show
that A2T is an effective architecture for transfer by being able to avoid
negative transfer while transferring selectively from multiple source tasks in
the same domain.
| Janarthanan Rajendran, Aravind Srinivas, Mitesh M. Khapra, P Prasanna,
Balaraman Ravindran | null | 1510.02879 | null | null |
Survey on Feature Selection | cs.LG | Feature selection plays an important role in the data mining process. It is
needed to deal with the excessive number of features, which can become a
computational burden on the learning algorithms. It is also necessary, even
when computational resources are not scarce, since it improves the accuracy of
the machine learning tasks, as we will see in the upcoming sections. In this
review, we discuss the different feature selection approaches, and the relation
between them and the various machine learning algorithms.
| Tarek Amr Abdallah, Beatriz de La Iglesia | null | 1510.02892 | null | null |
Evaluation of Joint Multi-Instance Multi-Label Learning For Breast
Cancer Diagnosis | cs.CV cs.LG | Multi-instance multi-label (MIML) learning is a challenging problem in many
aspects. Such learning approaches might be useful for many medical diagnosis
applications including breast cancer detection and classification. In this
study, a subset of the digiPATH dataset (whole slide digital breast cancer
histopathology images) is used for training and evaluation of six
state-of-the-art MIML methods.
Finally, a performance comparison of these approaches is given by means of
effective evaluation metrics. It is shown that MIML-kNN achieves the best
performance, with 65.3% average precision, while most of the other methods
attain acceptable results as well.
| Baris Gecer, Ozge Yalcinkaya, Onur Tasar and Selim Aksoy | null | 1510.02942 | null | null |
Do Deep Neural Networks Learn Facial Action Units When Doing Expression
Recognition? | cs.CV cs.LG cs.NE | Despite being the appearance-based classifier of choice in recent years,
relatively few works have examined how much convolutional neural networks
(CNNs) can improve performance on accepted expression recognition benchmarks
and, more importantly, examine what it is they actually learn. In this work,
not only do we show that CNNs can achieve strong performance, but we also
introduce an approach to decipher which portions of the face influence the
CNN's predictions. First, we train a zero-bias CNN on facial expression data
and achieve, to our knowledge, state-of-the-art performance on two expression
recognition benchmarks: the extended Cohn-Kanade (CK+) dataset and the Toronto
Face Dataset (TFD). We then qualitatively analyze the network by visualizing
the spatial patterns that maximally excite different neurons in the
convolutional layers and show how they resemble Facial Action Units (FAUs).
Finally, we use the FAU labels provided in the CK+ dataset to verify that the
FAUs observed in our filter visualizations indeed align with the subject's
facial movements.
| Pooya Khorrami, Tom Le Paine, Thomas S. Huang | null | 1510.02969 | null | null |
OmniGraph: Rich Representation and Graph Kernel Learning | cs.CL cs.LG | OmniGraph, a novel representation to support a range of NLP classification
tasks, integrates lexical items, syntactic dependencies and frame semantic
parses into graphs. Feature engineering is folded into the learning through
convolution graph kernel learning to explore different extents of the graph. A
high-dimensional space of features includes individual nodes as well as complex
subgraphs. In experiments on a text-forecasting problem that predicts stock
price change from news for company mentions, OmniGraph beats several benchmarks
based on bag-of-words, syntactic dependencies, and semantic trees. The highly
expressive features OmniGraph discovers provide insights into the semantics
across distinct market sectors. To demonstrate the method's generality, we also
report its high performance results on a fine-grained sentiment corpus.
| Boyi Xie and Rebecca J. Passonneau | null | 1510.02983 | null | null |
Neural Networks with Few Multiplications | cs.LG cs.NE | For most deep learning algorithms training is notoriously time consuming.
Since most of the computation in training neural networks is typically spent on
floating point multiplications, we investigate an approach to training that
eliminates the need for most of these. Our method consists of two parts: First
we stochastically binarize weights to convert multiplications involved in
computing hidden states to sign changes. Second, while back-propagating error
derivatives, in addition to binarizing the weights, we quantize the
representations at each layer to convert the remaining multiplications into
binary shifts. Experimental results across 3 popular datasets (MNIST, CIFAR10,
SVHN) show that this approach not only does not hurt classification performance
but can result in even better performance than standard stochastic gradient
descent training, paving the way to fast, hardware-friendly training of neural
networks.
| Zhouhan Lin, Matthieu Courbariaux, Roland Memisevic, Yoshua Bengio | null | 1510.03009 | null | null |
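The two ingredients described above can be sketched as follows; the specific binarization probability and the rounding-to-powers-of-two scheme are common choices assumed for illustration and may differ from the paper's exact formulas.

import numpy as np

def stochastic_binarize(W, rng):
    # P(w_b = +1) given by a hard sigmoid of the real-valued weight.
    p = np.clip((W + 1.0) / 2.0, 0.0, 1.0)
    return np.where(rng.random(W.shape) < p, 1.0, -1.0)

def quantize_pow2(x):
    # Round magnitudes to the nearest power of two, so multiplying by them
    # reduces to a binary shift (plus a sign change).
    sign = np.sign(x)
    exp = np.round(np.log2(np.abs(x) + 1e-12))
    return sign * np.exp2(exp)

rng = np.random.default_rng(0)
W = rng.uniform(-1, 1, size=(4, 3))
h = rng.standard_normal(4)
out = quantize_pow2(h) @ stochastic_binarize(W, rng)   # only shifts and sign changes remain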
On Correcting Inputs: Inverse Optimization for Online Structured
Prediction | cs.LG | Algorithm designers typically assume that the input data is correct, and then
proceed to find "optimal" or "sub-optimal" solutions using this input data.
However this assumption of correct data does not always hold in practice,
especially in the context of online learning systems where the objective is to
learn appropriate feature weights given some training samples. Such scenarios
necessitate the study of inverse optimization problems where one is given an
input instance as well as a desired output and the task is to adjust the input
data so that the given output is indeed optimal. Motivated by learning
structured prediction models, in this paper we consider inverse optimization
with a margin, i.e., we require the given output to be better than all other
feasible outputs by a desired margin. We consider such inverse optimization
problems for maximum weight matroid basis, matroid intersection, perfect
matchings, minimum cost maximum flows, and shortest paths and derive the first
known results for such problems with a non-zero margin. The effectiveness of
these algorithmic approaches to online learning for structured prediction is
also discussed.
| Hal Daum\'e III, Samir Khuller, Manish Purohit, and Gregory Sanders | null | 1510.03130 | null | null |
Context-Aware Bandits | cs.LG cs.AI stat.ML | We propose an efficient Context-Aware clustering of Bandits (CAB) algorithm,
which can capture collaborative effects. CAB can be easily deployed in a
real-world recommendation system, where multi-armed bandits have been shown to
perform well in particular with respect to the cold-start problem. CAB utilizes
a context-aware clustering augmented by exploration-exploitation strategies.
CAB dynamically clusters the users based on the content universe under
consideration. We give a theoretical analysis in the standard stochastic
multi-armed bandits setting. We show the efficiency of our approach on
production and real-world datasets, demonstrate the scalability, and, more
importantly, the significant increased prediction performance against several
state-of-the-art methods.
| Shuai Li and Purushottam Kar | null | 1510.03164 | null | null |
VB calibration to improve the interface between phone recognizer and
i-vector extractor | stat.ML cs.LG | The EM training algorithm of the classical i-vector extractor is often
incorrectly described as a maximum-likelihood method. The i-vector model is
however intractable: the likelihood itself and the hidden-variable posteriors
needed for the EM algorithm cannot be computed in closed form. We show here
that the classical i-vector extractor recipe is actually a mean-field
variational Bayes (VB) recipe.
This theoretical VB interpretation turns out to be of further use, because it
also offers an interpretation of the newer phonetic i-vector extractor recipe,
thereby unifying the two flavours of extractor.
More importantly, the VB interpretation is also practically useful: it
suggests ways of modifying existing i-vector extractors to make them more
accurate. In particular, in existing methods, the approximate VB posterior for
the GMM states is fixed, while only the parameters of the generative model are
adapted. Here we explore the possibility of also mildly adjusting (calibrating)
those posteriors, so that they better fit the generative model.
| Niko Br\"ummer | null | 1510.03203 | null | null |
The Inductive Constraint Programming Loop | cs.AI cs.LG | Constraint programming is used for a variety of real-world optimisation
problems, such as planning, scheduling and resource allocation problems. At the
same time, one continuously gathers vast amounts of data about these problems.
Current constraint programming software does not exploit such data to update
schedules, resources and plans. We propose a new framework, which we call the
Inductive Constraint Programming loop. In this approach data is gathered and
analyzed systematically, in order to dynamically revise and adapt constraints
and optimization criteria. Inductive Constraint Programming aims at bridging
the gap between the areas of data mining and machine learning on the one hand,
and constraint programming on the other hand.
| Christian Bessiere, Luc De Raedt, Tias Guns, Lars Kotthoff, Mirco
Nanni, Siegfried Nijssen, Barry O'Sullivan, Anastasia Paparrizou, Dino
Pedreschi, Helmut Simonis | null | 1510.03317 | null | null |
Evaluating Real-time Anomaly Detection Algorithms - the Numenta Anomaly
Benchmark | cs.AI cs.LG | Much of the world's data is streaming, time-series data, where anomalies give
significant information in critical situations; examples abound in domains such
as finance, IT, security, medical, and energy. Yet detecting anomalies in
streaming data is a difficult task, requiring detectors to process data in
real-time, not batches, and learn while simultaneously making predictions.
There are no benchmarks to adequately test and score the efficacy of real-time
anomaly detectors. Here we propose the Numenta Anomaly Benchmark (NAB), which
attempts to provide a controlled and repeatable environment of open-source
tools to test and measure anomaly detection algorithms on streaming data. The
perfect detector would detect all anomalies as soon as possible, trigger no
false alarms, work with real-world time-series data across a variety of
domains, and automatically adapt to changing statistics. Rewarding these
characteristics is formalized in NAB, using a scoring algorithm designed for
streaming data. NAB evaluates detectors on a benchmark dataset with labeled,
real-world time-series data. We present these components, and give results and
analyses for several open source, commercially-used algorithms. The goal for
NAB is to provide a standard, open source framework with which the research
community can compare and evaluate different algorithms for detecting anomalies
in streaming data.
| Alexander Lavin, Subutai Ahmad | 10.1109/ICMLA.2015.141 | 1510.03336 | null | null |
Toward a Better Understanding of Leaderboard | stat.ML cs.LG stat.AP | The leaderboard in machine learning competitions is a tool to show the
performance of various participants and to compare them. However, the
leaderboard quickly becomes no longer accurate, due to hack or overfitting.
This article gives two pieces of advice to prevent easy hack or overfitting. By
following these advice, we reach the conclusion that something like the Ladder
leaderboard introduced in [blum2015ladder] is inevitable. With this
understanding, we naturally simplify Ladder by eliminating its redundant
computation and explain how to choose the parameter and interpret it. We also
prove that the sample complexity is cubic in the desired precision of the
leaderboard.
| Wenjie Zheng | null | 1510.03349 | null | null |
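For reference, the basic Ladder mechanism of [blum2015ladder] that the article builds on can be sketched as below; this is the original mechanism rather than the article's simplification, and the step size value is arbitrary.

def ladder_score(new_loss, best_loss, eta=0.01):
    # Release an updated public score only when it beats the best reported
    # score by more than eta; otherwise keep reporting the old score.
    if new_loss < best_loss - eta:
        return round(new_loss / eta) * eta
    return best_loss

best = float("inf")
for loss in [0.40, 0.395, 0.32, 0.325]:
    best = ladder_score(loss, best)
    print(best)   # released scores: improvements below eta are suppressed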
Asymptotic Logical Uncertainty and The Benford Test | cs.LG cs.AI | We give an algorithm A which assigns probabilities to logical sentences. For
any simple infinite sequence of sentences whose truth-values appear
indistinguishable from a biased coin that outputs "true" with probability p, we
have that the sequence of probabilities that A assigns to these sentences
converges to p.
| Scott Garrabrant, Siddharth Bhaskar, Abram Demski, Joanna Garrabrant,
George Koleszarik, Evan Lloyd | null | 1510.03370 | null | null |
The intrinsic value of HFO features as a biomarker of epileptic activity | q-bio.NC cs.LG stat.ML | High frequency oscillations (HFOs) are a promising biomarker of epileptic
brain tissue and activity. HFOs additionally serve as a prototypical example of
challenges in the analysis of discrete events in high-temporal resolution,
intracranial EEG data. Two primary challenges are 1) dimensionality reduction,
and 2) assessing feasibility of classification. Dimensionality reduction
assumes that the data lie on a manifold with dimension less than that of the
feature space. However, previous HFO analyses have assumed a linear manifold,
global across time, space (i.e. recording electrode/channel), and individual
patients. Instead, we assess both a) whether linear methods are appropriate and
b) the consistency of the manifold across time, space, and patients. We also
estimate bounds on the Bayes classification error to quantify the distinction
between two classes of HFOs (those occurring during seizures and those
occurring due to other processes). This analysis provides the foundation for
future clinical use of HFO features and guides the analysis for other discrete
events, such as individual action potentials or multi-unit activity.
| Stephen V. Gliske, Kevin R. Moon, William C. Stacey, Alfred O. Hero
III | 10.1109/ICASSP.2016.7472887 | 1510.03507 | null | null |
$\ell_1$-regularized Neural Networks are Improperly Learnable in
Polynomial Time | cs.LG | We study the improper learning of multi-layer neural networks. Suppose that
the neural network to be learned has $k$ hidden layers and that the
$\ell_1$-norm of the incoming weights of any neuron is bounded by $L$. We
present a kernel-based method, such that with probability at least $1 -
\delta$, it learns a predictor whose generalization error is at most $\epsilon$
worse than that of the neural network. The sample complexity and the time
complexity of the presented method are polynomial in the input dimension and in
$(1/\epsilon,\log(1/\delta),F(k,L))$, where $F(k,L)$ is a function depending on
$(k,L)$ and on the activation function, independent of the number of neurons.
The algorithm applies to both sigmoid-like activation functions and ReLU-like
activation functions. It implies that any sufficiently sparse neural network is
learnable in polynomial time.
| Yuchen Zhang, Jason D. Lee, Michael I. Jordan | null | 1510.03528 | null | null |
Elastic regularization in restricted Boltzmann machines: Dealing with
$p\gg N$ | cs.LG | Restricted Boltzmann machines (RBMs) are endowed with the universal power of
modeling (binary) joint distributions. Meanwhile, as a result of their
confining network structure, training RBMs confronts less difficulties
(compared with more complicated models, e.g., Boltzmann machines) when dealing
with approximation and inference issues. However, in certain computational
biology scenarios, such as the cancer data analysis, employing RBMs to model
data features may lose its efficacy due to the "$p\gg N$" problem, in which the
number of features/predictors is much larger than the sample size. The "$p\gg
N$" problem puts the bias-variance trade-off in a more crucial place when
designing statistical learning methods. In this manuscript, we try to address
this problem by proposing a novel RBM model, called elastic restricted
Boltzmann machine (eRBM), which incorporates the elastic regularization term
into the likelihood/cost function. We provide several theoretical analyses of
the superiority of our model. Furthermore, owing to the classic
contrastive divergence (CD) algorithm, eRBMs can be trained efficiently. Our
novel model is a promising method for future cancer data analysis.
| Sai Zhang | null | 1510.03623 | null | null |
A Sensitivity Analysis of (and Practitioners' Guide to) Convolutional
Neural Networks for Sentence Classification | cs.CL cs.LG cs.NE | Convolutional Neural Networks (CNNs) have recently achieved remarkably strong
performance on the practically important task of sentence classification (kim
2014, kalchbrenner 2014, johnson 2014). However, these models require
practitioners to specify an exact model architecture and set accompanying
hyperparameters, including the filter region size, regularization parameters,
and so on. It is currently unknown how sensitive model performance is to
changes in these configurations for the task of sentence classification. We
thus conduct a sensitivity analysis of one-layer CNNs to explore the effect of
architecture components on model performance; our aim is to distinguish between
important and comparatively inconsequential design decisions for sentence
classification. We focus on one-layer CNNs (to the exclusion of more complex
models) due to their comparative simplicity and strong empirical performance,
which makes it a modern standard baseline method akin to Support Vector Machines
(SVMs) and logistic regression. We derive practical advice from our extensive
empirical results for those interested in getting the most out of CNNs for
sentence classification in real world settings.
| Ye Zhang and Byron Wallace | null | 1510.03820 | null | null |
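To make the architecture concrete, here is a forward-pass-only sketch of such a one-layer CNN; all dimensions, the ReLU nonlinearity, and the random parameters are illustrative assumptions, and training details (dropout, regularization) are omitted.

import numpy as np

def one_layer_cnn_forward(tokens, E, F, b, W_out, region=3):
    # tokens -> embeddings -> convolution over each filter region -> ReLU
    # -> max-over-time pooling -> softmax over classes.
    X = E[tokens]                                       # (seq_len, emb_dim)
    conv = np.empty((len(tokens) - region + 1, F.shape[0]))
    for t in range(conv.shape[0]):
        window = X[t:t + region].ravel()                # one filter region
        conv[t] = np.maximum(F @ window + b, 0.0)       # feature maps
    pooled = conv.max(axis=0)                           # max-over-time pooling
    logits = pooled @ W_out
    return np.exp(logits) / np.exp(logits).sum()

rng = np.random.default_rng(0)
vocab, emb, n_filters, n_classes, region = 50, 8, 10, 2, 3
E = rng.standard_normal((vocab, emb))
F = rng.standard_normal((n_filters, region * emb)) * 0.1
W_out = rng.standard_normal((n_filters, n_classes)) * 0.1
probs = one_layer_cnn_forward(rng.integers(vocab, size=12), E, F, np.zeros(n_filters), W_out, region)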
Adopting Robustness and Optimality in Fitting and Learning | cs.LG cs.NE math.OC | We generalize a modified exponentialized estimator by pushing the
robust-optimal (RO) index $\lambda$ to $-\infty$ for achieving robustness to
outliers by optimizing a quasi-Minimin function. The robustness is realized and
controlled adaptively by the RO index without any predefined threshold.
Optimality is guaranteed by expansion of the convexity region in the Hessian
matrix to largely avoid local optima. Detailed quantitative analyses of both
robustness and optimality are provided. The results of experiments on
fitting tasks for three noisy non-convex functions and the digits recognition
task on the MNIST dataset consolidate the conclusions.
| Zhiguang Wang, Tim Oates, James Lo | null | 1510.03826 | null | null |
On Equivalence of Martingale Tail Bounds and Deterministic Regret
Inequalities | math.PR cs.LG stat.ML | We study an equivalence of (i) deterministic pathwise statements appearing in
the online learning literature (termed \emph{regret bounds}), (ii)
high-probability tail bounds for the supremum of a collection of martingales
(of a specific form arising from uniform laws of large numbers for
martingales), and (iii) in-expectation bounds for the supremum. By virtue of
the equivalence, we prove exponential tail bounds for norms of Banach space
valued martingales via deterministic regret bounds for the online mirror
descent algorithm with an adaptive step size. We extend these results beyond
the linear structure of the Banach space: we define a notion of
\emph{martingale type} for general classes of real-valued functions and show
its equivalence (up to a logarithmic factor) to various sequential complexities
of the class (in particular, the sequential Rademacher complexity and its
offset version). For classes with the general martingale type 2, we exhibit a
finer notion of variation that allows partial adaptation to the function
indexing the martingale. Our proof technique rests on sequential symmetrization
and on certifying the \emph{existence} of regret minimization strategies for
certain online prediction problems.
| Alexander Rakhlin, Karthik Sridharan | null | 1510.03925 | null | null |
A Bayesian Network Model for Interesting Itemsets | stat.ML cs.DB cs.LG | Mining itemsets that are the most interesting under a statistical model of
the underlying data is a commonly used and well-studied technique for
exploratory data analysis, with the most recent interestingness models
exhibiting state of the art performance. Continuing this highly promising line
of work, we propose the first, to the best of our knowledge, generative model
over itemsets, in the form of a Bayesian network, and an associated novel
measure of interestingness. Our model is able to efficiently infer interesting
itemsets directly from the transaction database using structural EM, in which
the E-step employs the greedy approximation to weighted set cover. Our approach
is theoretically simple, straightforward to implement, trivially parallelizable
and retrieves itemsets whose quality is comparable to, if not better than,
existing state of the art algorithms as we demonstrate on several real-world
datasets.
| Jaroslav Fowkes and Charles Sutton | 10.1007/978-3-319-46227-1_26 | 1510.04130 | null | null |
Embarrassingly Parallel Variational Inference in Nonconjugate Models | stat.ML cs.AI cs.DC cs.LG stat.CO | We develop a parallel variational inference (VI) procedure for use in
data-distributed settings, where each machine only has access to a subset of
data and runs VI independently, without communicating with other machines. This
type of "embarrassingly parallel" procedure has recently been developed for
MCMC inference algorithms; however, in many cases it is not possible to
directly extend this procedure to VI methods without requiring certain
restrictive exponential family conditions on the form of the model.
Furthermore, most existing (nonparallel) VI methods are restricted to use on
conditionally conjugate models, which limits their applicability. To combat
these issues, we make use of the recently proposed nonparametric VI to
facilitate an embarrassingly parallel VI procedure that can be applied to a
wider scope of models, including to nonconjugate models. We derive our
embarrassingly parallel VI algorithm, analyze our method theoretically, and
demonstrate our method empirically on a few nonconjugate models.
| Willie Neiswanger, Chong Wang, Eric Xing | null | 1510.04163 | null | null |
Improving Back-Propagation by Adding an Adversarial Gradient | stat.ML cs.LG | The back-propagation algorithm is widely used for learning in artificial
neural networks. A challenge in machine learning is to create models that
generalize to new data samples not seen in the training data. Recently, a
common flaw in several machine learning algorithms was discovered: small
perturbations added to the input data lead to consistent misclassification of
data samples. Samples that easily mislead the model are called adversarial
examples. Training a "maxout" network on adversarial examples has shown to
decrease this vulnerability, but also increase classification performance. This
paper shows that adversarial training has a regularizing effect also in
networks with logistic, hyperbolic tangent and rectified linear units. A simple
extension to the back-propagation method is proposed, that adds an adversarial
gradient to the training. The extension requires an additional forward and
backward pass to calculate a modified input sample, or mini batch, used as
input for standard back-propagation learning. The first experimental results on
MNIST show that the "adversarial back-propagation" method increases the
resistance to adversarial examples and boosts the classification performance.
The extension reduces the classification error on the permutation invariant
MNIST from 1.60% to 0.95% in a logistic network, and from 1.40% to 0.78% in a
network with rectified linear units. Results on CIFAR-10 indicate that the
method has a regularizing effect similar to dropout in fully connected
networks. Based on these promising results, adversarial back-propagation is
proposed as a stand-alone regularizing method that should be further
investigated.
| Arild N{\o}kland | null | 1510.04189 | null | null |
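A minimal sketch of the proposed extension for a single logistic unit (the paper trains full networks; the perturbation size, learning rate, and toy data are assumptions): an extra forward/backward pass yields the input gradient, and the standard update is then computed on the adversarially perturbed input.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adversarial_sgd_step(w, x, y, eps=0.08, lr=0.1):
    # Pass 1: gradient of the cross-entropy loss w.r.t. the input.
    p = sigmoid(x @ w)
    grad_x = (p - y) * w
    # Pass 2: standard gradient step, but on the adversarially shifted input.
    x_adv = x + eps * np.sign(grad_x)
    p_adv = sigmoid(x_adv @ w)
    return w - lr * (p_adv - y) * x_adv

rng = np.random.default_rng(0)
w = np.zeros(3)
for _ in range(200):
    x = rng.standard_normal(3)
    y = float(x[0] + 0.5 * x[1] > 0)
    w = adversarial_sgd_step(w, x, y)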
Group-Invariant Subspace Clustering | cs.IT cs.LG math.IT stat.ML | In this paper we consider the problem of group invariant subspace clustering
where the data is assumed to come from a union of group-invariant subspaces of
a vector space, i.e. subspaces which are invariant with respect to action of a
given group. Algebraically, such group-invariant subspaces are also referred to
as submodules. Similar to the well known Sparse Subspace Clustering approach
where the data is assumed to come from a union of subspaces, we analyze an
algorithm which, following a recent work [1], we refer to as Sparse Sub-module
Clustering (SSmC). The method is based on finding group-sparse
self-representation of data points. In this paper we primarily derive general
conditions under which such a group-invariant subspace identification is
possible. In particular we extend the geometric analysis in [2] and in the
process we identify a related problem in geometric functional analysis.
| Shuchin Aeron and Eric Kernfeld | null | 1510.04356 | null | null |
Scatter Component Analysis: A Unified Framework for Domain Adaptation
and Domain Generalization | cs.CV cs.AI cs.LG stat.ML | This paper addresses classification tasks on a particular target domain in
which labeled training data are only available from source domains different
from (but related to) the target. Two closely related frameworks, domain
adaptation and domain generalization, are concerned with such tasks, where the
only difference between those frameworks is the availability of the unlabeled
target data: domain adaptation can leverage unlabeled target information, while
domain generalization cannot. We propose Scatter Component Analysis (SCA), a
fast representation learning algorithm that can be applied to both domain
adaptation and domain generalization. SCA is based on a simple geometrical
measure, i.e., scatter, which operates on reproducing kernel Hilbert space. SCA
finds a representation that trades between maximizing the separability of
classes, minimizing the mismatch between domains, and maximizing the
separability of data; each of which is quantified through scatter. The
optimization problem of SCA can be reduced to a generalized eigenvalue problem,
which results in a fast and exact solution. Comprehensive experiments on
benchmark cross-domain object recognition datasets verify that SCA performs
much faster than several state-of-the-art algorithms and also provides
state-of-the-art classification accuracy in both domain adaptation and domain
generalization. We also show that scatter can be used to establish a
theoretical generalization bound in the case of domain adaptation.
| Muhammad Ghifary and David Balduzzi and W. Bastiaan Kleijn and Mengjie
Zhang | null | 1510.04373 | null | null |
Dual Principal Component Pursuit | cs.CV cs.LG | We consider the problem of learning a linear subspace from data corrupted by
outliers. Classical approaches are typically designed for the case in which the
subspace dimension is small relative to the ambient dimension. Our approach
works with a dual representation of the subspace and hence aims to find its
orthogonal complement; as such, it is particularly suitable for subspaces whose
dimension is close to the ambient dimension (subspaces of high relative
dimension). We pose the problem of computing normal vectors to the inlier
subspace as a non-convex $\ell_1$ minimization problem on the sphere, which we
call Dual Principal Component Pursuit (DPCP) problem. We provide theoretical
guarantees under which every global solution to DPCP is a vector in the
orthogonal complement of the inlier subspace. Moreover, we relax the non-convex
DPCP problem to a recursion of linear programs whose solutions are shown to
converge in a finite number of steps to a vector orthogonal to the subspace. In
particular, when the inlier subspace is a hyperplane, the solutions to the
recursion of linear programs converge to the global minimum of the non-convex
DPCP problem in a finite number of steps. We also propose algorithms based on
alternating minimization and iteratively re-weighted least squares, which are
suitable for dealing with large-scale data. Experiments on synthetic data show
that the proposed methods are able to handle more outliers and higher relative
dimensions than current state-of-the-art methods, while experiments in the
context of the three-view geometry problem in computer vision suggest that the
proposed methods can be a useful or even superior alternative to traditional
RANSAC-based approaches for computer vision and other applications.
| Manolis C. Tsakiris and Rene Vidal | null | 1510.04390 | null | null |
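A minimal sketch of the iteratively re-weighted least squares variant mentioned in the abstract, assuming the inlier subspace is a hyperplane so that a single normal vector suffices. The initialization, the weight floor `eps`, and the stopping rule are illustrative choices, not the paper's prescribed settings.

```python
import numpy as np

def dpcp_irls(X, n_iter=100, eps=1e-10):
    """IRLS sketch for Dual Principal Component Pursuit (one normal vector).
    X: (D, N) column-wise data (inliers near a hyperplane plus outliers).
    Returns a unit vector b approximately minimizing sum_j |b^T x_j| over
    the unit sphere."""
    U, _, _ = np.linalg.svd(X)
    b = U[:, -1]                                     # least-variance direction
    for _ in range(n_iter):
        w = 1.0 / np.maximum(np.abs(b @ X), eps)     # re-weighting
        M = (X * w) @ X.T                            # weighted second-moment matrix
        _, vecs = np.linalg.eigh(M)
        b_new = vecs[:, 0]                           # smallest eigenvector
        if b_new @ b < 0:                            # b and -b are equivalent
            b_new = -b_new
        if np.linalg.norm(b_new - b) < 1e-8:
            return b_new
        b = b_new
    return b
```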
Filtrated Spectral Algebraic Subspace Clustering | cs.CV cs.LG | Algebraic Subspace Clustering (ASC) is a simple and elegant method based on
polynomial fitting and differentiation for clustering noiseless data drawn from
an arbitrary union of subspaces. In practice, however, ASC is limited to
equi-dimensional subspaces because the estimation of the subspace dimension via
algebraic methods is sensitive to noise. This paper proposes a new ASC
algorithm that can handle noisy data drawn from subspaces of arbitrary
dimensions. The key ideas are (1) to construct, at each point, a decreasing
sequence of subspaces containing the subspace passing through that point; (2)
to use the distances from any other point to each subspace in the sequence to
construct a subspace clustering affinity, which is superior to alternative
affinities both in theory and in practice. Experiments on the Hopkins 155
dataset demonstrate the superiority of the proposed method with respect to
sparse and low rank subspace clustering methods.
| Manolis C. Tsakiris and Rene Vidal | null | 1510.04396 | null | null |
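The second key idea, turning point-to-subspace distances into a clustering affinity, can be illustrated independently of the algebraic filtration. The sketch below assumes the per-point subspace bases produced by the filtration are already available (`bases` is a hypothetical input) and feeds the resulting affinity to off-the-shelf spectral clustering; the exponential weighting is an illustrative choice, not the affinity used in the paper.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def distance_based_affinity(X, bases, n_clusters):
    """X: (N, D) data points (rows).  bases[i]: (D, d_i) orthonormal basis of
    the subspace attached to point i (assumed precomputed).  Builds an
    affinity that is large when point j lies close to point i's subspace."""
    N, _ = X.shape
    A = np.zeros((N, N))
    for i in range(N):
        B = bases[i]
        proj = X @ B @ B.T                     # project all points onto subspace i
        dist = np.linalg.norm(X - proj, axis=1)
        A[i] = np.exp(-dist / (dist.mean() + 1e-12))
    A = 0.5 * (A + A.T)                        # symmetrize
    return SpectralClustering(n_clusters=n_clusters,
                              affinity='precomputed').fit_predict(A)
```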
Online Markov decision processes with policy iteration | cs.LG | The online Markov decision process (MDP) is a generalization of the classical
Markov decision process that incorporates changing reward functions. In this
paper, we propose practical online MDP algorithms with policy iteration and
theoretically establish a sublinear regret bound. A notable advantage of the
proposed algorithm is that it can be easily combined with function
approximation, and thus large and possibly continuous state spaces can be
efficiently handled. Through experiments, we demonstrate the usefulness of the
proposed algorithm.
| Yao Ma, Hao Zhang, Masashi Sugiyama | null | 1510.04454 | null | null |
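For reference, the classical policy iteration loop that the online algorithm builds on looks as follows for a finite MDP with a fixed reward; the tensor shapes and the evaluation-by-linear-solve step are standard textbook choices, not the paper's online update with changing rewards.

```python
import numpy as np

def policy_iteration(P, R, gamma=0.95):
    """Classical policy iteration for a finite MDP.
    P: (A, S, S) transition probabilities P[a, s, s'].
    R: (S, A) expected rewards.
    Returns the optimal deterministic policy and its value function."""
    A, S, _ = P.shape
    policy = np.zeros(S, dtype=int)
    while True:
        # Policy evaluation: solve (I - gamma * P_pi) V = R_pi exactly.
        P_pi = P[policy, np.arange(S)]          # (S, S)
        R_pi = R[np.arange(S), policy]          # (S,)
        V = np.linalg.solve(np.eye(S) - gamma * P_pi, R_pi)
        # Policy improvement: act greedily with respect to the Q-function.
        Q = R.T + gamma * P @ V                 # (A, S)
        new_policy = Q.argmax(axis=0)
        if np.array_equal(new_policy, policy):
            return policy, V
        policy = new_policy
```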
Layer-Specific Adaptive Learning Rates for Deep Networks | cs.CV cs.AI cs.LG cs.NE | The increasing complexity of deep learning architectures is resulting in
training time requiring weeks or even months. This slow training is due in part
to vanishing gradients, in which the gradients used by back-propagation are
extremely large for weights connecting deep layers (layers near the output
layer), and extremely small for shallow layers (near the input layer); this
results in slow learning in the shallow layers. Additionally, it has also been
shown that in highly non-convex problems, such as deep neural networks, there
is a proliferation of high-error low curvature saddle points, which slows down
learning dramatically. In this paper, we attempt to overcome the two above
problems by proposing an optimization method for training deep neural networks
which uses learning rates which are both specific to each layer in the network
and adaptive to the curvature of the function, increasing the learning rate at
low curvature points. This enables us to speed up learning in the shallow
layers of the network and quickly escape high-error low curvature saddle
points. We test our method on standard image classification datasets such as
MNIST, CIFAR10 and ImageNet, and demonstrate that our method increases accuracy
as well as reduces the required training time over standard algorithms.
| Bharat Singh, Soham De, Yangmuzi Zhang, Thomas Goldstein, and Gavin
Taylor | null | 1510.04609 | null | null |
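The mechanics of layer-specific learning rates are easy to reproduce with per-layer parameter groups, as in the PyTorch sketch below. The toy network, the particular rate values, and the "larger rates for shallower layers" pattern are illustrative assumptions; the curvature-adaptive part of the proposed method is not reproduced here.

```python
import torch
import torch.nn as nn

# Toy network: two layers near the input ("shallow") and one near the output.
model = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 10),
)

# One parameter group per layer, with hypothetical layer-specific rates.
param_groups = [
    {"params": model[0].parameters(), "lr": 1e-1},   # shallow layer
    {"params": model[2].parameters(), "lr": 3e-2},
    {"params": model[4].parameters(), "lr": 1e-2},   # deep layer
]
optimizer = torch.optim.SGD(param_groups, momentum=0.9)

# One illustrative training step on random data.
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
loss = nn.functional.cross_entropy(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```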
Multilingual Image Description with Neural Sequence Models | cs.CL cs.CV cs.LG cs.NE | In this paper we present an approach to multi-language image description
bringing together insights from neural machine translation and neural image
description. To create a description of an image for a given target language,
our sequence generation models condition on feature vectors from the image, the
description from the source language, and/or a multimodal vector computed over
the image and a description in the source language. In image description
experiments on the IAPR-TC12 dataset of images aligned with English and German
sentences, we find significant and substantial improvements in BLEU4 and Meteor
scores for models trained over multiple languages, compared to a monolingual
baseline.
| Desmond Elliott, Stella Frank, Eva Hasler | null | 1510.04709 | null | null |
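A minimal sketch of the conditioning scheme: a target-language decoder whose initial state fuses an image feature vector with an encoding of the source-language description. Layer sizes, the GRU decoder, and fusion by concatenation are illustrative assumptions rather than the architecture used in the paper.

```python
import torch
import torch.nn as nn

class MultimodalDecoder(nn.Module):
    """Target-language decoder conditioned on image and source-sentence features."""
    def __init__(self, vocab_size, img_dim=4096, src_dim=256, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hid_dim)
        # Fuse image and source-sentence features into the initial hidden state.
        self.init_h = nn.Linear(img_dim + src_dim, hid_dim)
        self.rnn = nn.GRU(hid_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, img_feat, src_feat, tgt_tokens):
        h0 = torch.tanh(self.init_h(torch.cat([img_feat, src_feat], dim=-1)))
        h0 = h0.unsqueeze(0)                  # (1, batch, hid_dim)
        emb = self.embed(tgt_tokens)          # (batch, T, hid_dim)
        out, _ = self.rnn(emb, h0)
        return self.out(out)                  # per-step vocabulary logits
```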
Tensor vs Matrix Methods: Robust Tensor Decomposition under Block Sparse
Perturbations | cs.LG cs.IT math.IT stat.ML | Robust tensor CP decomposition involves decomposing a tensor into low rank
and sparse components. We propose a novel non-convex iterative algorithm with
guaranteed recovery. It alternates between low-rank CP decomposition through
gradient ascent (a variant of the tensor power method), and hard thresholding
of the residual. We prove convergence to the globally optimal solution under
natural incoherence conditions on the low rank component, and bounded level of
sparse perturbations. We compare our method with natural baselines which apply
robust matrix PCA either to the {\em flattened} tensor, or to the matrix slices
of the tensor. Our method can provably handle a far greater level of
perturbation when the sparse tensor is block-structured. This naturally occurs
in many applications such as the activity detection task in videos. Our
experiments validate these findings. Thus, we establish that tensor methods can
tolerate a higher level of gross corruptions compared to matrix methods.
| Animashree Anandkumar, Prateek Jain, Yang Shi, U. N. Niranjan | null | 1510.04747 | null | null |
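A simplified NumPy sketch of the alternating scheme: fit a low-rank CP component by the tensor power method, then hard-threshold the residual. It is restricted to rank one and plain entrywise thresholding, so it omits the higher-rank updates and the block-sparsity handling analyzed in the paper; the threshold `tau` and the iteration counts are illustrative.

```python
import numpy as np

def hard_threshold(T, tau):
    """Keep entries with magnitude above tau, zero out the rest."""
    return T * (np.abs(T) > tau)

def rank1_power(T, n_iter=50):
    """Rank-1 CP approximation of a 3-way tensor via the tensor power method."""
    a = np.random.randn(T.shape[0]); a /= np.linalg.norm(a)
    b = np.random.randn(T.shape[1]); b /= np.linalg.norm(b)
    c = np.random.randn(T.shape[2]); c /= np.linalg.norm(c)
    for _ in range(n_iter):
        a = np.einsum('ijk,j,k->i', T, b, c); a /= np.linalg.norm(a)
        b = np.einsum('ijk,i,k->j', T, a, c); b /= np.linalg.norm(b)
        c = np.einsum('ijk,i,j->k', T, a, b); c /= np.linalg.norm(c)
    lam = np.einsum('ijk,i,j,k->', T, a, b, c)
    return lam * np.einsum('i,j,k->ijk', a, b, c)

def robust_rank1_decomposition(T, tau, n_outer=20):
    """Alternate a low-rank (here rank-1) CP fit with hard thresholding
    of the residual, returning low-rank and sparse components (L, S)."""
    S = np.zeros_like(T)
    L = np.zeros_like(T)
    for _ in range(n_outer):
        L = rank1_power(T - S)            # low-rank step
        S = hard_threshold(T - L, tau)    # sparse step
    return L, S
```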