title | categories | abstract | authors | doi | id | year | venue
---|---|---|---|---|---|---|---|
Regularization in Relevance Learning Vector Quantization Using $l_1$
Norms | stat.ML cs.LG | We propose in this contribution a method for $l_1$ regularization in
prototype-based relevance learning vector quantization (LVQ) for sparse
relevance profiles. Sparse relevance profiles in hyperspectral data analysis
suppress those spectral bands which are not necessary for classification. In
particular, we consider the sparsity in the relevance profile enforced by LASSO
optimization. The latter is obtained by a gradient learning scheme using a
differentiable parametrized approximation of the $l_{1}$-norm, which has an
upper error bound. We also extend this regularization idea to the matrix
learning variant of LVQ as the natural generalization of relevance learning.
| Martin Riedel, Marika K\"astner, Fabrice Rossi (SAMM), Thomas Villmann | null | 1310.5095 | null | null |
Explore or exploit? A generic model and an exactly solvable case | cond-mat.dis-nn cs.LG physics.soc-ph q-fin.GN | Finding a good compromise between the exploitation of known resources and the
exploration of unknown, but potentially more profitable choices, is a general
problem, which arises in many different scientific disciplines. We propose a
stylized model for these exploration-exploitation situations, including
population or economic growth, portfolio optimisation, evolutionary dynamics,
or the problem of optimal pinning of vortices or dislocations in disordered
materials. We find the exact growth rate of this model for tree-like geometries
and prove the existence of an optimal migration rate in this case. Numerical
simulations in the one-dimensional case confirm the generic existence of an
optimum.
| Thomas Gueudr\'e and Alexander Dobrinevski and Jean-Philippe Bouchaud | 10.1103/PhysRevLett.112.050602 | 1310.5114 | null | null |
GPatt: Fast Multidimensional Pattern Extrapolation with Gaussian
Processes | stat.ML cs.AI cs.LG stat.ME | Gaussian processes are typically used for smoothing and interpolation on
small datasets. We introduce a new Bayesian nonparametric framework -- GPatt --
enabling automatic pattern extrapolation with Gaussian processes on large
multidimensional datasets. GPatt unifies and extends highly expressive kernels
and fast exact inference techniques. Without human intervention -- no hand
crafting of kernel features, and no sophisticated initialisation procedures --
we show that GPatt can solve large scale pattern extrapolation, inpainting, and
kernel discovery problems, including a problem with 383400 training points. We
find that GPatt significantly outperforms popular alternative scalable Gaussian
process methods in speed and accuracy. Moreover, we discover profound
differences between each of these methods, suggesting expressive kernels,
nonparametric representations, and exact inference are useful for modelling
large scale multidimensional patterns.
| Andrew Gordon Wilson, Elad Gilboa, Arye Nehorai, John P. Cunningham | null | 1310.5288 | null | null |
Bayesian Extensions of Kernel Least Mean Squares | stat.ML cs.LG | The kernel least mean squares (KLMS) algorithm is a computationally efficient
nonlinear adaptive filtering method that "kernelizes" the celebrated (linear)
least mean squares algorithm. We demonstrate that the least mean squares
algorithm is closely related to Kalman filtering, and thus the KLMS can be
interpreted as an approximate Bayesian filtering method. This allows us to
systematically develop extensions of the KLMS by modifying the underlying
state-space and observation models. The resulting extensions introduce many
desirable properties such as "forgetting", and the ability to learn from
discrete data, while retaining the computational simplicity and time complexity
of the original algorithm.
| Il Memming Park, Sohan Seth, Steven Van Vaerenbergh | null | 1310.5347 | null | null |
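The abstract's key observation is that the (kernel) least mean squares update can be read as an approximate Kalman-style Bayesian filter. A minimal sketch of the plain LMS update that KLMS kernelizes may help; the learning rate and toy data below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def lms(X, d, mu=0.05):
    """Plain (linear) least mean squares: w <- w + mu * (d_n - w^T x_n) x_n.
    KLMS applies the same error-driven update in a reproducing kernel Hilbert space."""
    w = np.zeros(X.shape[1])
    for x_n, d_n in zip(X, d):
        err = d_n - w @ x_n   # innovation, analogous to a Kalman residual
        w += mu * err * x_n   # gradient step on the instantaneous squared error
    return w

# toy usage: recover a linear filter from noisy observations
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
w_true = np.array([1.0, -2.0, 0.5])
d = X @ w_true + 0.1 * rng.normal(size=500)
print(lms(X, d))   # approaches w_true
```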
Multi-Task Regularization with Covariance Dictionary for Linear
Classifiers | cs.LG | In this paper we propose a multi-task linear classifier learning problem
called D-SVM (Dictionary SVM). D-SVM uses a dictionary of parameter covariance
shared by all tasks to do multi-task knowledge transfer among different tasks.
We formally define the learning problem of D-SVM and show two interpretations
of this problem, from both the probabilistic and kernel perspectives. From the
probabilistic perspective, we show that our learning formulation is actually a
MAP estimation on all optimization variables. We also show its equivalence to a
multiple kernel learning problem in which one is trying to find a re-weighting
kernel for features from a dictionary of bases (despite the fact that only
linear classifiers are learned). Finally, we describe an alternative
optimization scheme to minimize the objective function and present empirical
studies to validate our algorithm.
| Fanyi Xiao, Ruikun Luo, Zhiding Yu | null | 1310.5393 | null | null |
MLI: An API for Distributed Machine Learning | cs.LG cs.DC stat.ML | MLI is an Application Programming Interface designed to address the
challenges of building Machine Learning algorithms in a distributed setting
based on data-centric computing. Its primary goal is to simplify the
development of high-performance, scalable, distributed algorithms. Our initial
results show that, relative to existing systems, this interface can be used to
build distributed implementations of a wide variety of common Machine Learning
algorithms with minimal complexity and highly competitive performance and
scalability.
| Evan R. Sparks, Ameet Talwalkar, Virginia Smith, Jey Kottalam, Xinghao
Pan, Joseph Gonzalez, Michael J. Franklin, Michael I. Jordan, Tim Kraska | null | 1310.5426 | null | null |
Learning Theory and Algorithms for Revenue Optimization in Second-Price
Auctions with Reserve | cs.LG | Second-price auctions with reserve play a critical role for modern search
engines and popular online sites since the revenue of these companies often
directly depends on the outcome of such auctions. The choice of the reserve
price is the main mechanism through which the auction revenue can be influenced
in these electronic markets. We cast the problem of selecting the reserve price
to optimize revenue as a learning problem and present a full theoretical
analysis dealing with the complex properties of the corresponding loss
function. We further give novel algorithms for solving this problem and report
the results of several experiments in both synthetic and real data
demonstrating their effectiveness.
| Mehryar Mohri and Andres Mu\~noz Medina | null | 1310.5665 | null | null |
Stochastic Gradient Descent, Weighted Sampling, and the Randomized
Kaczmarz algorithm | math.NA cs.CV cs.LG math.OC stat.ML | We obtain an improved finite-sample guarantee on the linear convergence of
stochastic gradient descent for smooth and strongly convex objectives,
improving from a quadratic dependence on the conditioning $(L/\mu)^2$ (where
$L$ is a bound on the smoothness and $\mu$ on the strong convexity) to a linear
dependence on $L/\mu$. Furthermore, we show how reweighting the sampling
distribution (i.e. importance sampling) is necessary in order to further
improve convergence, and obtain a linear dependence in the average smoothness,
dominating previous results. We also discuss importance sampling for SGD more
broadly and show how it can improve convergence also in other scenarios. Our
results are based on a connection we make between SGD and the randomized
Kaczmarz algorithm, which allows us to transfer ideas between the separate
bodies of literature studying each of the two methods. In particular, we recast
the randomized Kaczmarz algorithm as an instance of SGD, and apply our results
to prove its exponential convergence, but to the solution of a weighted least
squares problem rather than the original least squares problem. We then present
a modified Kaczmarz algorithm with partially biased sampling which does
converge to the original least squares solution with the same exponential
convergence rate.
| Deanna Needell, Nathan Srebro, Rachel Ward | null | 1310.5715 | null | null |
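The connection this abstract draws can be illustrated with a short sketch: the randomized Kaczmarz iteration samples rows with probability proportional to their squared norm and projects onto the sampled equation, which is exactly SGD with importance sampling on a least-squares objective. The consistent toy system below is an assumption for illustration only.

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=2000, seed=0):
    """Randomized Kaczmarz: sample row i with prob ||a_i||^2 / ||A||_F^2,
    then project x onto {x : a_i^T x = b_i}. Equivalent to importance-sampled
    SGD on the least-squares objective."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms = np.sum(A**2, axis=1)
    probs = row_norms / row_norms.sum()
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

# toy usage on a consistent overdetermined system
rng = np.random.default_rng(1)
A = rng.normal(size=(200, 10))
x_true = rng.normal(size=10)
print(np.linalg.norm(randomized_kaczmarz(A, A @ x_true) - x_true))  # small
```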
A Kernel for Hierarchical Parameter Spaces | stat.ML cs.LG | We define a family of kernels for mixed continuous/discrete hierarchical
parameter spaces and show that they are positive definite.
| Frank Hutter and Michael A. Osborne | null | 1310.5738 | null | null |
Relative Deviation Learning Bounds and Generalization with Unbounded
Loss Functions | cs.LG | We present an extensive analysis of relative deviation bounds, including
detailed proofs of two-sided inequalities and their implications. We also give
detailed proofs of two-sided generalization bounds that hold in the general
case of unbounded loss functions, under the assumption that a moment of the
loss is bounded. These bounds are useful in the analysis of importance
weighting and other learning tasks such as unbounded regression.
| Corinna Cortes, Spencer Greenberg, Mehryar Mohri | null | 1310.5796 | null | null |
Efficient Optimization for Sparse Gaussian Process Regression | cs.LG | We propose an efficient optimization algorithm for selecting a subset of
training data to induce sparsity for Gaussian process regression. The algorithm
estimates an inducing set and the hyperparameters using a single objective,
either the marginal likelihood or a variational free energy. The space and time
complexity are linear in training set size, and the algorithm can be applied to
large regression problems on discrete or continuous domains. Empirical
evaluation shows state-of-the-art performance in discrete cases and competitive
results in the continuous case.
| Yanshuai Cao, Marcus A. Brubaker, David J. Fleet, Aaron Hertzmann | null | 1310.6007 | null | null |
Spatial-Spectral Boosting Analysis for Stroke Patients' Motor Imagery
EEG in Rehabilitation Training | stat.ML cs.AI cs.LG | Current studies about motor imagery based rehabilitation training systems for
stroke subjects lack an appropriate analytic method that can achieve a
considerable classification accuracy while at the same time detecting gradual
changes of imagery patterns during the rehabilitation process and uncovering
potential mechanisms of motor function recovery. In this study, we propose an adaptive
boosting algorithm based on the cortex plasticity and spectral band shifts.
This approach models the usually predetermined spatial-spectral configurations
in EEG study into variable preconditions, and introduces a new heuristic of
stochastic gradient boost for training base learners under these preconditions.
We compare our proposed algorithm with commonly used methods on datasets
collected from 2 months' clinical experiments. The simulation results
demonstrate the effectiveness of the method in detecting the variations of
stroke patients' EEG patterns. By chronologically reorganizing the weight
parameters of the learned additive model, we verify the spatial compensatory
mechanism on impaired cortex and detect the changes of accentuation bands in
spectral domain, which may contribute important prior knowledge for
rehabilitation practice.
| Hao Zhang and Liqing Zhang | 10.3233/978-1-61499-419-0-537 | 1310.6288 | null | null |
Combining Structured and Unstructured Randomness in Large Scale PCA | cs.LG | Principal Component Analysis (PCA) is a ubiquitous tool with many
applications in machine learning including feature construction, subspace
embedding, and outlier detection. In this paper, we present an algorithm for
computing the top principal components of a dataset with a large number of rows
(examples) and columns (features). Our algorithm leverages both structured and
unstructured random projections to retain good accuracy while being
computationally efficient. We demonstrate the technique on the winning
submission to the KDD 2010 Cup.
| Nikos Karampatziakis, Paul Mineiro | null | 1310.6304 | null | null |
Provable Bounds for Learning Some Deep Representations | cs.LG cs.AI stat.ML | We give algorithms with provable guarantees that learn a class of deep nets
in the generative model view popularized by Hinton and others. Our generative
model is an $n$ node multilayer neural net that has degree at most $n^{\gamma}$
for some $\gamma <1$ and each edge has a random edge weight in $[-1,1]$. Our
algorithm learns {\em almost all} networks in this class with polynomial
running time. The sample complexity is quadratic or cubic depending upon the
details of the model.
The algorithm uses layerwise learning. It is based upon a novel idea of
observing correlations among features and using these to infer the underlying
edge structure via a global graph recovery procedure. The analysis of the
algorithm reveals interesting structure of neural networks with random edge
weights.
| Sanjeev Arora and Aditya Bhaskara and Rong Ge and Tengyu Ma | null | 1310.6343 | null | null |
Randomized co-training: from cortical neurons to machine learning and
back again | cs.LG q-bio.NC stat.ML | Despite its size and complexity, the human cortex exhibits striking
anatomical regularities, suggesting there may be simple meta-algorithms underlying
cortical learning and computation. We expect such meta-algorithms to be of
interest since they need to operate quickly, scalably and effectively with
little-to-no specialized assumptions.
This note focuses on a specific question: How can neurons use vast quantities
of unlabeled data to speed up learning from the comparatively rare labels
provided by reward systems? As a partial answer, we propose randomized
co-training as a biologically plausible meta-algorithm satisfying the above
requirements. As evidence, we describe a biologically-inspired algorithm,
Correlated Nystrom Views (XNV) that achieves state-of-the-art performance in
semi-supervised learning, and sketch work in progress on a neuronal
implementation.
| David Balduzzi | null | 1310.6536 | null | null |
Active Learning of Linear Embeddings for Gaussian Processes | stat.ML cs.LG | We propose an active learning method for discovering low-dimensional
structure in high-dimensional Gaussian process (GP) tasks. Such problems are
increasingly frequent and important, but have hitherto presented severe
practical difficulties. We further introduce a novel technique for
approximately marginalizing GP hyperparameters, yielding marginal predictions
robust to hyperparameter mis-specification. Our method offers an efficient
means of performing GP regression, quadrature, or Bayesian optimization in
high-dimensional spaces.
| Roman Garnett and Michael A. Osborne and Philipp Hennig | null | 1310.6740 | null | null |
Durkheim Project Data Analysis Report | cs.AI cs.CL cs.LG | This report describes the suicidality prediction models created under the
DARPA DCAPS program in association with the Durkheim Project
[http://durkheimproject.org/]. The models were built primarily from
unstructured text (free-format clinician notes) for several hundred patient
records obtained from the Veterans Health Administration (VHA). The models were
constructed using a genetic programming algorithm applied to bag-of-words and
bag-of-phrases datasets. The influence of additional structured data was
explored but was found to be minor. Given the small dataset size,
classification between cohorts was high fidelity (98%). Cross-validation
suggests these models are reasonably predictive, with an accuracy of 50% to 69%
on five rotating folds, with ensemble averages of 58% to 67%. One particularly
noteworthy result is that word-pairs can dramatically improve classification
accuracy; but this is the case only when one of the words in the pair is
already known to have a high predictive value. By contrast, the set of all
possible word-pairs does not improve on a simple bag-of-words model.
| Linas Vepstas | 10.1371/journal.pone.0085733.s001 | 1310.6775 | null | null |
Predicting the NFL using Twitter | cs.SI cs.LG physics.soc-ph stat.ML | We study the relationship between social media output and National Football
League (NFL) games, using a dataset containing messages from Twitter and NFL
game statistics. Specifically, we consider tweets pertaining to specific teams
and games in the NFL season and use them alongside statistical game data to
build predictive models for future game outcomes (which team will win?) and
sports betting outcomes (which team will win with the point spread? will the
total points be over/under the line?). We experiment with several feature sets
and find that simple features using large volumes of tweets can match or exceed
the performance of more traditional features that use game statistics.
| Shiladitya Sinha, Chris Dyer, Kevin Gimpel, and Noah A. Smith | null | 1310.6998 | null | null |
Scaling SVM and Least Absolute Deviations via Exact Data Reduction | cs.LG stat.ML | The support vector machine (SVM) is a widely used method for classification.
Although many efforts have been devoted to developing efficient solvers, it
remains challenging to apply SVM to large-scale problems. A nice property of
SVM is that the non-support vectors have no effect on the resulting classifier.
Motivated by this observation, we present fast and efficient screening rules to
discard non-support vectors by analyzing the dual problem of SVM via
variational inequalities (DVI). As a result, the number of data instances to be
entered into the optimization can be substantially reduced. Some appealing
features of our screening method are: (1) DVI is safe in the sense that the
vectors discarded by DVI are guaranteed to be non-support vectors; (2) the data
set needs to be scanned only once to run the screening, whose computational
cost is negligible compared to that of solving the SVM problem; (3) DVI is
independent of the solvers and can be integrated with any existing efficient
solvers. We also show that the DVI technique can be extended to detect
non-support vectors in the least absolute deviations regression (LAD). To the
best of our knowledge, there are currently no screening methods for LAD. We
have evaluated DVI on both synthetic and real data sets. Experiments indicate
that DVI significantly outperforms the existing state-of-the-art screening
rules for SVM, and is very effective in discarding non-support vectors for LAD.
The speedup gained by DVI rules can be up to two orders of magnitude.
| Jie Wang and Peter Wonka and Jieping Ye | null | 1310.7048 | null | null |
Generalized Thompson Sampling for Contextual Bandits | cs.LG cs.AI stat.ML stat.OT | Thompson Sampling, one of the oldest heuristics for solving multi-armed
bandits, has recently been shown to demonstrate state-of-the-art performance.
The empirical success has led to great interest in the theoretical understanding
of this heuristic. In this paper, we approach this problem in a way very
different from existing efforts. In particular, motivated by the connection
between Thompson Sampling and exponentiated updates, we propose a new family of
algorithms called Generalized Thompson Sampling in the expert-learning
framework, which includes Thompson Sampling as a special case. Similar to most
expert-learning algorithms, Generalized Thompson Sampling uses a loss function
to adjust the experts' weights. General regret bounds are derived, which are
also instantiated to two important loss functions: square loss and logarithmic
loss. In contrast to existing bounds, our results apply to quite general
contextual bandits. More importantly, they quantify the effect of the "prior"
distribution on the regret bounds.
| Lihong Li | null | 1310.7163 | null | null |
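Since the algorithm is described as an exponentiated-update scheme over experts, a tiny sketch of such a weight update may clarify the idea; the specific loss values, learning rate, and sampling-by-weight convention below are illustrative assumptions rather than the paper's exact algorithm.

```python
import numpy as np

def exponentiated_update(weights, losses, eta=0.5):
    """Multiplicative-weights style update used in expert learning:
    w_i <- w_i * exp(-eta * loss_i), then renormalize.
    A Thompson-style learner then samples an expert according to the weights."""
    w = weights * np.exp(-eta * losses)
    return w / w.sum()

# toy usage: three experts, expert 1 consistently suffers the lowest loss
rng = np.random.default_rng(0)
w = np.ones(3) / 3
for _ in range(100):
    losses = np.array([0.6, 0.1, 0.5]) + 0.05 * rng.normal(size=3)
    w = exponentiated_update(w, losses)
print(w)   # probability mass concentrates on expert 1
```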
Relax but stay in control: from value to algorithms for online Markov
decision processes | cs.LG math.OC stat.ML | Online learning algorithms are designed to perform in non-stationary
environments, but generally there is no notion of a dynamic state to model
constraints on current and future actions as a function of past actions.
State-based models are common in stochastic control settings, but commonly used
frameworks such as Markov Decision Processes (MDPs) assume a known stationary
environment. In recent years, there has been a growing interest in combining
the above two frameworks and considering an MDP setting in which the cost
function is allowed to change arbitrarily after each time step. However, most
of the work in this area has been algorithmic: given a problem, one would
develop an algorithm almost from scratch. Moreover, the presence of the state
and the assumption of an arbitrarily varying environment complicate both the
theoretical analysis and the development of computationally efficient methods.
This paper describes a broad extension of the ideas proposed by Rakhlin et al.
to give a general framework for deriving algorithms in an MDP setting with
arbitrarily changing costs. This framework leads to a unifying view of existing
methods and provides a general procedure for constructing new ones. Several new
methods are presented, and one of them is shown to have important advantages
over a similar method developed from scratch via an online version of
approximate dynamic programming.
| Peng Guan, Maxim Raginsky, Rebecca Willett | null | 1310.7300 | null | null |
Successive Nonnegative Projection Algorithm for Robust Nonnegative Blind
Source Separation | stat.ML cs.LG math.NA math.OC | In this paper, we propose a new fast and robust recursive algorithm for
near-separable nonnegative matrix factorization, a particular nonnegative blind
source separation problem. This algorithm, which we refer to as the successive
nonnegative projection algorithm (SNPA), is closely related to the popular
successive projection algorithm (SPA), but takes advantage of the nonnegativity
constraint in the decomposition. We prove that SNPA is more robust than SPA and
can be applied to a broader class of nonnegative matrices. This is illustrated
on some synthetic data sets, and on a real-world hyperspectral image.
| Nicolas Gillis | 10.1137/130946782 | 1310.7529 | null | null |
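As a point of reference for this abstract, here is a minimal sketch of the classical successive projection algorithm (SPA) that SNPA builds on: repeatedly pick the column of largest residual norm and project the data onto the orthogonal complement of the columns selected so far. This sketches SPA, not SNPA itself, and the separable toy data are assumptions.

```python
import numpy as np

def spa(M, r):
    """Successive projection algorithm for near-separable NMF:
    greedily select r columns of M that (approximately) span the data."""
    R = M.astype(float).copy()
    selected = []
    for _ in range(r):
        j = int(np.argmax(np.sum(R**2, axis=0)))   # column with largest residual norm
        selected.append(j)
        u = R[:, j] / np.linalg.norm(R[:, j])
        R -= np.outer(u, u @ R)                    # project out the chosen direction
    return selected

# toy usage: M = W H with H containing an identity block (separability)
rng = np.random.default_rng(0)
W = np.abs(rng.normal(size=(30, 4)))
H = np.hstack([np.eye(4), rng.dirichlet(np.ones(4), size=40).T])
print(sorted(spa(W @ H, 4)))   # typically recovers columns 0..3
```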
The Information Geometry of Mirror Descent | stat.ML cs.LG | Information geometry applies concepts in differential geometry to probability
and statistics and is especially useful for parameter estimation in exponential
families where parameters are known to lie on a Riemannian manifold.
Connections between the geometric properties of the induced manifold and
statistical properties of the estimation problem are well-established. However,
developing first-order methods that scale to larger problems has been less of a
focus in the information geometry community. The best known algorithm that
incorporates manifold structure is the second-order natural gradient descent
algorithm introduced by Amari. On the other hand, stochastic approximation
methods have led to the development of first-order methods for optimizing noisy
objective functions. A recent generalization of the Robbins-Monro algorithm
known as mirror descent, developed by Nemirovski and Yudin is a first order
method that induces non-Euclidean geometries. However, current analysis of
mirror descent does not precisely characterize the induced non-Euclidean
geometry nor does it consider performance in terms of statistical relative
efficiency. In this paper, we prove that mirror descent induced by Bregman
divergences is equivalent to the natural gradient descent algorithm on the dual
Riemannian manifold. Using this equivalence, it follows that (1) mirror descent
is the steepest descent direction along the Riemannian manifold of the
exponential family; (2) mirror descent with log-likelihood loss applied to
parameter estimation in exponential families asymptotically achieves the
classical Cram\'er-Rao lower bound and (3) natural gradient descent for
manifolds corresponding to exponential families can be implemented as a
first-order method through mirror descent.
| Garvesh Raskutti and Sayan Mukherjee | null | 1310.7780 | null | null |
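For readers less familiar with mirror descent, the update the abstract refers to can be written compactly; this is the standard textbook form with a strictly convex potential $\psi$ and its Bregman divergence $D_\psi$, not notation taken from the paper.

```latex
% Mirror descent with potential \psi and Bregman divergence D_\psi:
\theta_{t+1} = \arg\min_{\theta}\;
  \eta\,\langle \nabla f(\theta_t), \theta \rangle + D_{\psi}(\theta, \theta_t),
\qquad
D_{\psi}(\theta, \theta') = \psi(\theta) - \psi(\theta')
  - \langle \nabla\psi(\theta'),\, \theta - \theta' \rangle .
% Equivalently, in the dual coordinates \mu = \nabla\psi(\theta):
\mu_{t+1} = \mu_t - \eta\, \nabla f(\theta_t).
```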
An Unsupervised Feature Learning Approach to Improve Automatic Incident
Detection | cs.LG | Sophisticated automatic incident detection (AID) technology plays a key role
in contemporary transportation systems. Though many papers have been devoted to
studying incident classification algorithms, few studies investigated how to enhance the
feature representation of incidents to improve AID performance. In this paper,
we propose to use an unsupervised feature learning algorithm to generate higher
level features to represent incidents. We used real incident data in the
experiments and found that an effective feature mapping function can be learnt
from data across the test sites. With the enhanced features, detection
rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are
significantly improved in all of the three representative cases. This approach
also provides an alternative way to reduce the amount of labeled data, which is
expensive to obtain, required in training better incident classifiers since the
feature learning is unsupervised.
| Jimmy SJ. Ren, Wei Wang, Jiawei Wang, Stephen Liao | 10.1109/ITSC.2012.6338621 | 1310.7795 | null | null |
Automatic Classification of Variable Stars in Catalogs with missing data | astro-ph.IM cs.LG stat.ML | We present an automatic classification method for astronomical catalogs with
missing data. We use Bayesian networks, a probabilistic graphical model, that
allows us to perform inference to predict missing values given observed data
and dependency relationships between variables. To learn a Bayesian network
from incomplete data, we use an iterative algorithm that utilises sampling
methods and expectation maximization to estimate the distributions and
probabilistic dependencies of variables from data with missing values. To test
our model we use three catalogs with missing data (SAGE, 2MASS and UBVI) and
one complete catalog (MACHO). We examine how classification accuracy changes
when information from missing data catalogs is included, how our method
compares to traditional missing data approaches and at what computational cost.
Integrating these catalogs with missing data we find that classification of
variable objects improves by a few percent, and by 15% for quasar detection, while
keeping the computational cost the same.
| Karim Pichara and Pavlos Protopapas | 10.1088/0004-637X/777/2/83 | 1310.7868 | null | null |
Learning Sparsely Used Overcomplete Dictionaries via Alternating
Minimization | cs.LG math.OC stat.ML | We consider the problem of sparse coding, where each sample consists of a
sparse linear combination of a set of dictionary atoms, and the task is to
learn both the dictionary elements and the mixing coefficients. Alternating
minimization is a popular heuristic for sparse coding, where the dictionary and
the coefficients are estimated in alternate steps, keeping the other fixed.
Typically, the coefficients are estimated via $\ell_1$ minimization, keeping
the dictionary fixed, and the dictionary is estimated through least squares,
keeping the coefficients fixed. In this paper, we establish local linear
convergence for this variant of alternating minimization and establish that the
basin of attraction for the global optimum (corresponding to the true
dictionary and the coefficients) is $O(1/s^2)$, where $s$ is the sparsity
level in each sample and the dictionary satisfies RIP. Combined with the recent
results of approximate dictionary estimation, this yields provable guarantees
for exact recovery of both the dictionary elements and the coefficients, when
the dictionary elements are incoherent.
| Alekh Agarwal, Animashree Anandkumar, Prateek Jain, Praneeth
Netrapalli | null | 1310.7991 | null | null |
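A compact sketch of the alternating scheme this abstract analyzes (an $\ell_1$-regularized coefficient step with the dictionary fixed, followed by a least-squares dictionary step with the coefficients fixed) may be useful; the use of scikit-learn's Lasso, the regularization strength, and the toy dimensions are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

def alternating_dictionary_learning(Y, k, iters=20, alpha=0.05, seed=0):
    """Alternate an l1 coefficient step (dictionary fixed) with a
    least-squares dictionary step (coefficients fixed)."""
    rng = np.random.default_rng(seed)
    d, n = Y.shape
    A = rng.normal(size=(d, k))
    A /= np.linalg.norm(A, axis=0)
    for _ in range(iters):
        # coefficient step: one Lasso problem per sample
        lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=2000)
        X = np.column_stack([lasso.fit(A, Y[:, j]).coef_ for j in range(n)])
        # dictionary step: least squares, then renormalize columns
        A = Y @ np.linalg.pinv(X)
        A /= np.linalg.norm(A, axis=0) + 1e-12
    return A, X

# toy usage: sparse synthetic data, report relative reconstruction error
rng = np.random.default_rng(1)
A_true = rng.normal(size=(20, 10))
X_true = rng.normal(size=(10, 100)) * (rng.random((10, 100)) < 0.2)
Y = A_true @ X_true
A_hat, X_hat = alternating_dictionary_learning(Y, k=10)
print(np.linalg.norm(Y - A_hat @ X_hat) / np.linalg.norm(Y))
```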
Necessary and Sufficient Conditions for Novel Word Detection in
Separable Topic Models | cs.LG cs.IR stat.ML | The simplicial condition and other stronger conditions that imply it have
recently played a central role in developing polynomial time algorithms with
provable asymptotic consistency and sample complexity guarantees for topic
estimation in separable topic models. Of these algorithms, those that rely
solely on the simplicial condition are impractical while the practical ones
need stronger conditions. In this paper, we demonstrate, for the first time,
that the simplicial condition is a fundamental, algorithm-independent,
information-theoretic necessary condition for consistent separable topic
estimation. Furthermore, under solely the simplicial condition, we present a
practical quadratic-complexity algorithm based on random projections which
consistently detects all novel words of all topics using only up to
second-order empirical word moments. This algorithm is amenable to distributed
implementation making it attractive for 'big-data' scenarios involving a
network of large distributed databases.
| Weicong Ding, Prakash Ishwar, Mohammad H. Rohban, Venkatesh Saligrama | null | 1310.7994 | null | null |
Online Ensemble Learning for Imbalanced Data Streams | cs.LG stat.ML | While both cost-sensitive learning and online learning have been studied
extensively, the effort in simultaneously dealing with these two issues is
limited. Aiming at this challenging task, a novel learning framework is proposed
in this paper. The key idea is based on the fusion of online ensemble
algorithms and the state of the art batch mode cost-sensitive bagging/boosting
algorithms. Within this framework, two separately developed research areas are
bridged together, and a batch of theoretically sound online cost-sensitive
bagging and online cost-sensitive boosting algorithms are first proposed.
Unlike other online cost-sensitive learning algorithms lacking theoretical
analysis of asymptotic properties, the convergence of the proposed algorithms
is guaranteed under certain conditions, and the experimental evidence with
benchmark data sets also validates the effectiveness and efficiency of the
proposed methods.
| Boyu Wang, Joelle Pineau | null | 1310.8004 | null | null |
Para-active learning | cs.LG stat.ML | Training examples are not all equally informative. Active learning strategies
leverage this observation in order to massively reduce the number of examples
that need to be labeled. We leverage the same observation to build a generic
strategy for parallelizing learning algorithms. This strategy is effective
because the search for informative examples is highly parallelizable and
because we show that its performance does not deteriorate when the sifting
process relies on a slightly outdated model. Parallel active learning is
particularly attractive to train nonlinear models with non-linear
representations because there are few practical parallel learning algorithms
for such models. We report preliminary experiments using both kernel SVMs and
SGD-trained neural networks.
| Alekh Agarwal, Leon Bottou, Miroslav Dudik, John Langford | null | 1310.8243 | null | null |
Safe and Efficient Screening For Sparse Support Vector Machine | cs.LG stat.ML | Screening is an effective technique for speeding up the training process of a
sparse learning model by removing the features that are guaranteed to be
inactive in the process. In this paper, we present an efficient screening technique
for sparse support vector machines based on variational inequalities. The
technique is both efficient and safe.
| Zheng Zhao, Jun Liu | null | 1310.8320 | null | null |
An efficient distributed learning algorithm based on effective local
functional approximations | cs.LG | Scalable machine learning over big data is an important problem that is
receiving a lot of attention in recent years. On popular distributed
environments such as Hadoop running on a cluster of commodity machines,
communication costs are substantial and algorithms need to be designed suitably
considering those costs. In this paper we give a novel approach to the
distributed training of linear classifiers (involving smooth losses and L2
regularization) that is designed to reduce the total communication costs. At
each iteration, the nodes minimize locally formed approximate objective
functions; then the resulting minimizers are combined to form a descent
direction to move. Our approach gives a lot of freedom in the formation of the
approximate objective function as well as in the choice of methods to solve
them. The method is shown to have $O(\log(1/\epsilon))$ time convergence. The
method can be viewed as an iterative parameter mixing method. A special
instantiation yields a parallel stochastic gradient descent method with strong
convergence. When communication times between nodes are large, our method is
much faster than the Terascale method (Agarwal et al., 2011), which is a state
of the art distributed solver based on the statistical query model (Chu et al.,
2006) that computes function and gradient values in a distributed fashion. We
also evaluate against other recent distributed methods and demonstrate superior
performance of our method.
| Dhruv Mahajan, Nikunj Agrawal, S. Sathiya Keerthi, S. Sundararajan,
Leon Bottou | null | 1310.8418 | null | null |
Multilabel Classification through Random Graph Ensembles | cs.LG | We present new methods for multilabel classification, relying on ensemble
learning on a collection of random output graphs imposed on the multilabel and
a kernel-based structured output learner as the base classifier. For ensemble
learning, differences among the output graphs provide the required base
classifier diversity and lead to improved performance in the increasing size of
the ensemble. We study different methods of forming the ensemble prediction,
including majority voting and two methods that perform inferences over the
graph structures before or after combining the base models into the ensemble.
We compare the methods against the state-of-the-art machine learning approaches
on a set of heterogeneous multilabel benchmark problems, including multilabel
AdaBoost, convex multitask feature learning, as well as single target learning
approaches represented by Bagging and SVM. In our experiments, the random graph
ensembles are very competitive and robust, ranking first or second on most of
the datasets. Overall, our results show that random graph ensembles are viable
alternatives to flat multilabel and multitask learners.
| Hongyu Su, Juho Rousu | null | 1310.8428 | null | null |
Reinforcement Learning Framework for Opportunistic Routing in WSNs | cs.NI cs.LG | Routing packets opportunistically is an essential part of multihop ad hoc
wireless sensor networks. The existing routing techniques are not adaptively
opportunistic. In this paper we propose an adaptive opportunistic routing
scheme that routes packets opportunistically in order to ensure that packet
loss is avoided. Learning and routing are combined in a framework that
explores the optimal routing possibilities. We implement this reinforcement
learning framework using a custom simulator. The experimental results reveal
that the scheme is able to exploit opportunities to optimize the routing of
packets even though the network structure is unknown.
| G.Srinivas Rao, A.V.Ramana | null | 1310.8467 | null | null |
Deep AutoRegressive Networks | cs.LG stat.ML | We introduce a deep, generative autoencoder capable of learning hierarchies
of distributed representations from data. Successive deep stochastic hidden
layers are equipped with autoregressive connections, which enable the model to
be sampled from quickly and exactly via ancestral sampling. We derive an
efficient approximate parameter estimation method based on the minimum
description length (MDL) principle, which can be seen as maximising a
variational lower bound on the log-likelihood, with a feedforward neural
network implementing approximate inference. We demonstrate state-of-the-art
generative performance on a number of classic data sets: several UCI data sets,
MNIST and Atari 2600 games.
| Karol Gregor, Ivo Danihelka, Andriy Mnih, Charles Blundell, Daan
Wierstra | null | 1310.8499 | null | null |
A systematic comparison of supervised classifiers | cs.LG | Pattern recognition techniques have been employed in a myriad of industrial,
medical, commercial and academic applications. To tackle such a diversity of
data, many techniques have been devised. However, despite the long tradition of
pattern recognition research, there is no technique that yields the best
classification in all scenarios. Therefore, the consideration of as many as
possible techniques presents itself as a fundamental practice in applications
aiming at high accuracy. Typical works comparing methods either emphasize the
performance of a given algorithm in validation tests or systematically compare
various algorithms, assuming that the practical use of these methods is done by
experts. In many occasions, however, researchers have to deal with their
practical classification tasks without an in-depth knowledge about the
underlying mechanisms behind parameters. Actually, the adequate choice of
classifiers and parameters alike in such practical circumstances constitutes a
long-standing problem and is the subject of the current paper. We carried out a
study on the performance of nine well-known classifiers implemented by the Weka
framework and compared the dependence of the accuracy on their parameter
configurations. The analysis of performance with default parameters
revealed that the k-nearest neighbors method exceeds by a large margin the
other methods when high dimensional datasets are considered. When other
configurations of parameters were allowed, we found that it is possible to
improve the quality of SVM by more than 20% even if parameters are set
randomly. Taken together, the investigation conducted in this paper suggests
that, apart from the SVM implementation, Weka's default configuration of
parameters provides a performance close to the one achieved with the optimal
configuration.
| D. R. Amancio, C. H. Comin, D. Casanova, G. Travieso, O. M. Bruno, F.
A. Rodrigues and L. da F. Costa | 10.1371/journal.pone.0094137 | 1311.0202 | null | null |
Online Learning with Multiple Operator-valued Kernels | cs.LG stat.ML | We consider the problem of learning a vector-valued function f in an online
learning setting. The function f is assumed to lie in a reproducing kernel Hilbert
space of operator-valued kernels. We describe two online algorithms for
learning f while taking into account the output structure. A first contribution
is an algorithm, ONORMA, that extends the standard kernel-based online learning
algorithm NORMA from scalar-valued to operator-valued setting. We report a
cumulative error bound that holds both for classification and regression. We
then define a second algorithm, MONORMA, which addresses the limitation of
pre-defining the output structure in ONORMA by learning sequentially a linear
combination of operator-valued kernels. Our experiments show that the proposed
algorithms achieve good performance results with low computational cost.
| Julien Audiffren (LIF), Hachem Kadri (LIF) | null | 1311.0222 | null | null |
Nearly Optimal Sample Size in Hypothesis Testing for High-Dimensional
Regression | math.ST cs.IT cs.LG math.IT stat.ME stat.TH | We consider the problem of fitting the parameters of a high-dimensional
linear regression model. In the regime where the number of parameters $p$ is
comparable to or exceeds the sample size $n$, a successful approach uses an
$\ell_1$-penalized least squares estimator, known as Lasso. Unfortunately,
unlike for linear estimators (e.g., ordinary least squares), no
well-established method exists to compute confidence intervals or p-values on
the basis of the Lasso estimator. Very recently, a line of work
\cite{javanmard2013hypothesis, confidenceJM, GBR-hypothesis} has addressed this
problem by constructing a debiased version of the Lasso estimator. In this
paper, we study this approach for the random design model, under the assumption
that a good estimator exists for the precision matrix of the design. Our
analysis improves over the state of the art in that it establishes nearly
optimal \emph{average} testing power if the sample size $n$ asymptotically
dominates $s_0 (\log p)^2$, with $s_0$ being the sparsity level (number of
non-zero coefficients). Earlier work obtains provable guarantees only for much
larger sample size, namely it requires $n$ to asymptotically dominate $(s_0
\log p)^2$.
In particular, for random designs with a sparse precision matrix we show that
an estimator thereof having the required properties can be computed
efficiently. Finally, we evaluate this approach on synthetic data and compare
it with earlier proposals.
| Adel Javanmard and Andrea Montanari | null | 1311.0274 | null | null |
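The debiasing construction referred to in the cited line of work can be summarized in one formula, stated here in its generic textbook form with $M$ an estimate of the precision matrix of the design; the notation is an illustrative choice, not necessarily the paper's.

```latex
% Debiased (de-sparsified) Lasso estimator:
\widehat{\theta}^{\,d} \;=\; \widehat{\theta}^{\,\mathrm{Lasso}}
  \;+\; \frac{1}{n}\, M X^{\top}\bigl(y - X\widehat{\theta}^{\,\mathrm{Lasso}}\bigr),
% whose coordinates are approximately Gaussian around \theta_0, yielding
% confidence intervals and p-values for the individual coefficients.
```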
Thompson Sampling for Complex Bandit Problems | stat.ML cs.LG | We consider stochastic multi-armed bandit problems with complex actions over
a set of basic arms, where the decision maker plays a complex action rather
than a basic arm in each round. The reward of the complex action is some
function of the basic arms' rewards, and the feedback observed may not
necessarily be the reward per-arm. For instance, when the complex actions are
subsets of the arms, we may only observe the maximum reward over the chosen
subset. Thus, feedback across complex actions may be coupled due to the nature
of the reward function. We prove a frequentist regret bound for Thompson
sampling in a very general setting involving parameter, action and observation
spaces and a likelihood function over them. The bound holds for
discretely-supported priors over the parameter space and without additional
structural properties such as closed-form posteriors, conjugate prior structure
or independence across arms. The regret bound scales logarithmically with time
but, more importantly, with an improved constant that non-trivially captures
the coupling across complex actions due to the structure of the rewards. As
applications, we derive improved regret bounds for classes of complex bandit
problems involving selecting subsets of arms, including the first nontrivial
regret bounds for nonlinear MAX reward feedback from subsets.
| Aditya Gopalan, Shie Mannor and Yishay Mansour | null | 1311.0466 | null | null |
Thompson Sampling for Online Learning with Linear Experts | stat.ML cs.LG | In this note, we present a version of the Thompson sampling algorithm for the
problem of online linear generalization with full information (i.e., the
experts setting), studied by Kalai and Vempala, 2005. The algorithm uses a
Gaussian prior and time-varying Gaussian likelihoods, and we show that it
essentially reduces to Kalai and Vempala's Follow-the-Perturbed-Leader
strategy, with exponentially distributed noise replaced by Gaussian noise. This
implies sqrt(T) regret bounds for Thompson sampling (with time-varying
likelihood) for online learning with full information.
| Aditya Gopalan | null | 1311.0468 | null | null |
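Since the note's point is that Thompson sampling with a Gaussian prior and Gaussian likelihoods reduces to Follow-the-Perturbed-Leader with Gaussian noise, a minimal sketch of that perturbed-leader step may help; the expert decision set, noise scale, and loss stream below are illustrative assumptions.

```python
import numpy as np

def gaussian_ftpl(loss_rounds, n_experts, scale=1.0, seed=0):
    """Follow-the-Perturbed-Leader with Gaussian perturbations: each round,
    play the expert minimizing (cumulative loss + fresh Gaussian noise)."""
    rng = np.random.default_rng(seed)
    cum_loss = np.zeros(n_experts)
    total = 0.0
    for losses in loss_rounds:
        noise = scale * np.sqrt(len(loss_rounds)) * rng.normal(size=n_experts)
        choice = int(np.argmin(cum_loss + noise))
        total += losses[choice]
        cum_loss += losses
    return total, cum_loss.min()

# toy usage: the learner's total loss stays close to the best expert in hindsight
rng = np.random.default_rng(1)
rounds = [rng.random(5) for _ in range(1000)]
print(gaussian_ftpl(rounds, n_experts=5))
```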
A Parallel SGD method with Strong Convergence | cs.LG cs.DC | This paper proposes a novel parallel stochastic gradient descent (SGD) method
that is obtained by applying parallel sets of SGD iterations (each set
operating on one node using the data residing in it) for finding the direction
in each iteration of a batch descent method. The method has strong convergence
properties. Experiments on datasets with high dimensional feature spaces show
the value of this method.
| Dhruv Mahajan, S. Sathiya Keerthi, S. Sundararajan, Leon Bottou | null | 1311.0636 | null | null |
On Fast Dropout and its Applicability to Recurrent Networks | stat.ML cs.LG cs.NE | Recurrent Neural Networks (RNNs) are rich models for the processing of
sequential data. Recent work on advancing the state of the art has been focused
on the optimization or modelling of RNNs, mostly motivated by addressing the
problems of the vanishing and exploding gradients. The control of overfitting
has seen considerably less attention. This paper contributes to that by
analyzing fast dropout, a recent regularization method for generalized linear
models and neural networks from a back-propagation inspired perspective. We
show that fast dropout implements a quadratic form of an adaptive,
per-parameter regularizer, which rewards large weights in the light of
underfitting, penalizes them for overconfident predictions and vanishes at
minima of an unregularized training loss. The derivatives of that regularizer
are exclusively based on the training error signal. One consequence of this is
the absence of a global weight attractor, which is particularly appealing for
RNNs, since the dynamics are not biased towards a certain regime. We positively
test the hypothesis that this improves the performance of RNNs on four musical
data sets.
| Justin Bayer, Christian Osendorfer, Daniela Korhammer, Nutan Chen,
Sebastian Urban, Patrick van der Smagt | null | 1311.0701 | null | null |
Generative Modelling for Unsupervised Score Calibration | stat.ML cs.LG | Score calibration enables automatic speaker recognizers to make
cost-effective accept / reject decisions. Traditional calibration requires
supervised data, which is an expensive resource. We propose a 2-component GMM
for unsupervised calibration and demonstrate good performance relative to a
supervised baseline on NIST SRE'10 and SRE'12. A Bayesian analysis demonstrates
that the uncertainty associated with the unsupervised calibration parameter
estimates is surprisingly small.
| Niko Br\"ummer and Daniel Garcia-Romero | null | 1311.0707 | null | null |
Distributed Exploration in Multi-Armed Bandits | cs.LG | We study exploration in Multi-Armed Bandits in a setting where $k$ players
collaborate in order to identify an $\epsilon$-optimal arm. Our motivation
comes from recent employment of bandit algorithms in computationally intensive,
large-scale applications. Our results demonstrate a non-trivial tradeoff
between the number of arm pulls required by each of the players, and the amount
of communication between them. In particular, our main result shows that by
allowing the $k$ players to communicate only once, they are able to learn
$\sqrt{k}$ times faster than a single player. That is, distributing learning to
$k$ players gives rise to a factor $\sqrt{k}$ parallel speed-up. We complement
this result with a lower bound showing this is in general the best possible. On
the other extreme, we present an algorithm that achieves the ideal factor $k$
speed-up in learning performance, with communication only logarithmic in
$1/\epsilon$.
| Eshcar Hillel, Zohar Karnin, Tomer Koren, Ronny Lempel, Oren Somekh | null | 1311.0800 | null | null |
A Divide-and-Conquer Solver for Kernel Support Vector Machines | cs.LG | The kernel support vector machine (SVM) is one of the most widely used
classification methods; however, the amount of computation required becomes the
bottleneck when facing millions of samples. In this paper, we propose and
analyze a novel divide-and-conquer solver for kernel SVMs (DC-SVM). In the
division step, we partition the kernel SVM problem into smaller subproblems by
clustering the data, so that each subproblem can be solved independently and
efficiently. We show theoretically that the support vectors identified by the
subproblem solution are likely to be support vectors of the entire kernel SVM
problem, provided that the problem is partitioned appropriately by kernel
clustering. In the conquer step, the local solutions from the subproblems are
used to initialize a global coordinate descent solver, which converges quickly
as suggested by our analysis. By extending this idea, we develop a multilevel
Divide-and-Conquer SVM algorithm with adaptive clustering and early prediction
strategy, which outperforms state-of-the-art methods in terms of training
speed, testing accuracy, and memory usage. As an example, on the covtype
dataset with half-a-million samples, DC-SVM is 7 times faster than LIBSVM in
obtaining the exact SVM solution (to within $10^{-6}$ relative error) which
achieves 96.15% prediction accuracy. Moreover, with our proposed early
prediction strategy, DC-SVM achieves about 96% accuracy in only 12 minutes,
which is more than 100 times faster than LIBSVM.
| Cho-Jui Hsieh and Si Si and Inderjit S. Dhillon | null | 1311.0914 | null | null |
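A rough sketch of the divide step described in the abstract (cluster the data, solve a kernel SVM per cluster, then pool the local support vectors to warm-start a global solve) is given below; the use of scikit-learn, k-means as the clustering step, and the simplified "refit on pooled support vectors" conquer step are assumptions, not the paper's exact procedure.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def divide_and_conquer_svm(X, y, n_clusters=4, seed=0):
    """Divide: solve independent kernel SVMs on clusters.
    Conquer (simplified): refit a global SVM on the pooled local support vectors."""
    labels = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit_predict(X)
    sv_idx = []
    for c in range(n_clusters):
        idx = np.where(labels == c)[0]
        if len(np.unique(y[idx])) < 2:      # skip single-class clusters
            continue
        local = SVC(kernel="rbf", gamma="scale").fit(X[idx], y[idx])
        sv_idx.extend(idx[local.support_])
    sv_idx = np.array(sorted(set(sv_idx)))
    if len(sv_idx) == 0:                    # fallback if no mixed-class cluster exists
        return SVC(kernel="rbf", gamma="scale").fit(X, y)
    return SVC(kernel="rbf", gamma="scale").fit(X[sv_idx], y[sv_idx])

# toy usage on two overlapping Gaussian blobs
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (300, 2)), rng.normal(1, 1, (300, 2))])
y = np.array([0] * 300 + [1] * 300)
print(divide_and_conquer_svm(X, y).score(X, y))
```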
Large Margin Distribution Machine | cs.LG | Support vector machine (SVM) has been one of the most popular learning
algorithms, with the central idea of maximizing the minimum margin, i.e., the
smallest distance from the instances to the classification boundary. Recent
theoretical results, however, disclosed that maximizing the minimum margin does
not necessarily lead to better generalization performances, and instead, the
margin distribution has been proven to be more crucial. In this paper, we
propose the Large margin Distribution Machine (LDM), which tries to achieve a
better generalization performance by optimizing the margin distribution. We
characterize the margin distribution by the first- and second-order statistics,
i.e., the margin mean and variance. The LDM is a general learning approach
which can be used in any place where SVM can be applied, and its superiority is
verified both theoretically and empirically in this paper.
| Teng Zhang, Zhi-Hua Zhou | null | 1311.0989 | null | null |
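Schematically, the margin-distribution idea in this abstract replaces maximizing the minimum margin with an objective built from the first- and second-order margin statistics; a generic form of such an objective (the notation and trade-off parameters are illustrative, not necessarily the paper's exact formulation) is:

```latex
% Margins \gamma_i = y_i\, w^{\top} x_i; margin mean and variance:
\bar{\gamma} = \frac{1}{m}\sum_{i=1}^{m} \gamma_i, \qquad
\hat{\sigma}^2 = \frac{1}{m}\sum_{i=1}^{m} (\gamma_i - \bar{\gamma})^2 .
% A margin-distribution objective trades these statistics off against the
% usual regularizer and hinge loss:
\min_{w}\; \frac{1}{2}\|w\|^2 \;+\; \lambda_1 \hat{\sigma}^2 \;-\; \lambda_2 \bar{\gamma}
\;+\; C\sum_{i=1}^{m} \max\bigl(0,\, 1 - y_i\, w^{\top} x_i\bigr).
```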
Combined Independent Component Analysis and Canonical Polyadic
Decomposition via Joint Diagonalization | stat.ML cs.LG | Recently, there has been a trend to combine independent component analysis
and canonical polyadic decomposition (ICA-CPD) for an enhanced robustness for
the computation of CPD, and ICA-CPD could be further converted into CPD of a
5th-order partially symmetric tensor, by calculating the eigenmatrices of the
4th-order cumulant slices of a trilinear mixture. In this study, we propose a
new 5th-order CPD algorithm constrained with partial symmetry based on joint
diagonalization. As the main steps involved in the proposed algorithm undergo
no updating iterations for the loading matrices, it is much faster than the
existing algorithm based on alternating least squares and enhanced line search,
with competent performances. Simulation results are provided to demonstrate the
performance of the proposed algorithm.
| Xiao-Feng Gong, Cheng-Yuan Wang, Ya-Na Hao, and Qiu-Hua Lin | null | 1311.1040 | null | null |
Statistical Inference in Hidden Markov Models using $k$-segment
Constraints | stat.ME cs.LG stat.ML | Hidden Markov models (HMMs) are one of the most widely used statistical
methods for analyzing sequence data. However, the reporting of output from HMMs
has largely been restricted to the presentation of the most-probable (MAP)
hidden state sequence, found via the Viterbi algorithm, or the sequence of most
probable marginals using the forward-backward (F-B) algorithm. In this article,
we expand the amount of information we could obtain from the posterior
distribution of an HMM by introducing linear-time dynamic programming
algorithms, which we collectively call $k$-segment algorithms, that allow us to
i) find MAP sequences, ii) compute posterior probabilities and iii) simulate
sample paths conditional on a user specified number of segments, i.e.
contiguous runs in a hidden state, possibly of a particular type. We illustrate
the utility of these methods using simulated and real examples and highlight
the application of prospective and retrospective use of these methods for
fitting HMMs or exploring existing model fits.
| Michalis K. Titsias, Christopher Yau, Christopher C. Holmes | 10.1080/01621459.2014.998762 | 1311.1189 | null | null |
How to Center Binary Deep Boltzmann Machines | stat.ML cs.LG | This work analyzes centered binary Restricted Boltzmann Machines (RBMs) and
binary Deep Boltzmann Machines (DBMs), where centering is done by subtracting
offset values from visible and hidden variables. We show analytically that (i)
centering results in a different but equivalent parameterization for artificial
neural networks in general, (ii) the expected performance of centered binary
RBMs/DBMs is invariant under simultaneous flip of data and offsets, for any
offset value in the range of zero to one, (iii) centering can be reformulated
as a different update rule for normal binary RBMs/DBMs, and (iv) using the
enhanced gradient is equivalent to setting the offset values to the average
over model and data mean. Furthermore, numerical simulations suggest that (i)
optimal generative performance is achieved by subtracting mean values from
visible as well as hidden variables, (ii) centered RBMs/DBMs reach
significantly higher log-likelihood values than normal binary RBMs/DBMs, (iii)
centering variants whose offsets depend on the model mean, like the enhanced
gradient, suffer from severe divergence problems, (iv) learning is stabilized
if an exponentially moving average over the batch means is used for the offset
values instead of the current batch mean, which also prevents the enhanced
gradient from diverging, (v) centered RBMs/DBMs reach higher LL values than
normal RBMs/DBMs while having a smaller norm of the weight matrix, (vi)
centering leads to an update direction that is closer to the natural gradient
and that the natural gradient is extremely efficient for training RBMs, (vii)
centering dispenses with the need for greedy layer-wise pre-training of DBMs, (viii)
furthermore we show that pre-training often even worsens the results
independently of whether centering is used or not, and (ix) centering is also
beneficial for autoencoders.
| Jan Melchior, Asja Fischer, Laurenz Wiskott | null | 1311.1354 | null | null |
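The centering described in the abstract amounts to subtracting offsets inside the energy function; one common way to write this for an RBM, with visible offsets $\mu$ and hidden offsets $\lambda$ (a notational choice here, not necessarily the paper's), is:

```latex
% Energy of a centered binary RBM with offsets \mu (visible) and \lambda (hidden):
E(\mathbf{v}, \mathbf{h}) \;=\;
  -(\mathbf{v}-\boldsymbol{\mu})^{\top} W (\mathbf{h}-\boldsymbol{\lambda})
  \;-\; \mathbf{b}^{\top}\mathbf{v} \;-\; \mathbf{c}^{\top}\mathbf{h};
% setting \mu = \lambda = 0 recovers the normal (uncentered) binary RBM.
```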
TOP-SPIN: TOPic discovery via Sparse Principal component INterference | cs.CV cs.IR cs.LG | We propose a novel topic discovery algorithm for unlabeled images based on
the bag-of-words (BoW) framework. We first extract a dictionary of visual words
and subsequently for each image compute a visual word occurrence histogram. We
view these histograms as rows of a large matrix from which we extract sparse
principal components (PCs). Each PC identifies a sparse combination of visual
words which co-occur frequently in some images but seldom appear in others.
Each sparse PC corresponds to a topic, and images whose interference with the
PC is high belong to that topic, revealing the common parts possessed by the
images. We propose to solve the associated sparse PCA problems using an
Alternating Maximization (AM) method, which we modify for purpose of
efficiently extracting multiple PCs in a deflation scheme. Our approach attacks
the maximization problem in sparse PCA directly and is scalable to
high-dimensional data. Experiments on automatic topic discovery and category
prediction demonstrate encouraging performance of our approach.
| Martin Tak\'a\v{c}, Selin Damla Ahipa\c{s}ao\u{g}lu, Ngai-Man Cheung,
Peter Richt\'arik | null | 1311.1406 | null | null |
Structural Learning for Template-free Protein Folding | cs.LG cs.CE q-bio.QM | This thesis aims to solve the template-free protein folding problem by
tackling two important components: efficient sampling in vast conformation
space, and design of knowledge-based potentials with high accuracy. We have
proposed the first-order and second-order CRF-Sampler to sample structures from
the continuous local dihedral angles space by modeling the lower and higher
order conditional dependency between neighboring dihedral angles given the
primary sequence information. A framework combining the Conditional Random
Fields and the energy function is introduced to guide the local conformation
sampling using long range constraints with the energy function.
The relationship between the sequence profile and the local dihedral angle
distribution is nonlinear. Hence we proposed the CNF-Folder to model this
complex relationship by applying a novel machine learning model, Conditional
Neural Fields, which utilizes the structural graphical model with the neural
network. CRF-Samplers and CNF-Folder perform very well in CASP8 and CASP9.
Further, a novel pairwise distance statistical potential (EPAD) is designed
to capture the dependency of the energy profile on the positions of the
interacting amino acids as well as the types of those amino acids, opposing the
common assumption that this energy profile depends only on the types of amino
acids. EPAD has also been successfully applied in the CASP 10 Free Modeling
experiment with CNF-Folder, performing especially well on some targets with
uncommon structures.
| Feng Zhao | null | 1311.1422 | null | null |
Category-Theoretic Quantitative Compositional Distributional Models of
Natural Language Semantics | cs.CL cs.LG math.CT math.LO | This thesis is about the problem of compositionality in distributional
semantics. Distributional semantics presupposes that the meanings of words are
a function of their occurrences in textual contexts. It models words as
distributions over these contexts and represents them as vectors in high
dimensional spaces. The problem of compositionality for such models concerns
itself with how to produce representations for larger units of text by
composing the representations of smaller units of text.
This thesis focuses on a particular approach to this compositionality
problem, namely using the categorical framework developed by Coecke, Sadrzadeh,
and Clark, which combines syntactic analysis formalisms with distributional
semantic representations of meaning to produce syntactically motivated
composition operations. This thesis shows how this approach can be
theoretically extended and practically implemented to produce concrete
compositional distributional models of natural language semantics. It
furthermore demonstrates that such models can perform on par with, or better
than, other competing approaches in the field of natural language processing.
There are three principal contributions to computational linguistics in this
thesis. The first is to extend the DisCoCat framework on the syntactic and
semantic fronts, incorporating a number of syntactic analysis formalisms and
providing learning procedures allowing for the generation of concrete
compositional distributional models. The second contribution is to evaluate the
models developed from the procedures presented here, showing that they
outperform other compositional distributional models present in the literature.
The third contribution is to show how using category theory to solve linguistic
problems forms a sound basis for research, illustrated by examples of work on
this topic that also suggest directions for future research.
| Edward Grefenstette | null | 1311.1539 | null | null |
The Maximum Entropy Relaxation Path | cs.LG math.OC stat.ML | The relaxed maximum entropy problem is concerned with finding a probability
distribution on a finite set that minimizes the relative entropy to a given
prior distribution, while satisfying relaxed max-norm constraints with respect
to a third observed multinomial distribution. We study the entire relaxation
path for this problem in detail. We show existence and a geometric description
of the relaxation path. Specifically, we show that the maximum entropy
relaxation path admits a planar geometric description as an increasing,
piecewise linear function in the inverse relaxation parameter. We derive fast
algorithms for tracking the path. In various realistic settings, our algorithms
require $O(n\log(n))$ operations for probability distributions on $n$ points,
making it possible to handle large problems. Once the path has been recovered,
we show that given a validation set, the family of admissible models is reduced
from an infinite family to a small, discrete set. We demonstrate the merits of
our approach in experiments with synthetic data and discuss its potential for
the estimation of compact n-gram language models.
| Moshe Dubiner, Matan Gavish and Yoram Singer | null | 1311.1644 | null | null |
Scalable Recommendation with Poisson Factorization | cs.IR cs.AI cs.LG stat.ML | We develop a Bayesian Poisson matrix factorization model for forming
recommendations from sparse user behavior data. These data are large user/item
matrices where each user has provided feedback on only a small subset of items,
either explicitly (e.g., through star ratings) or implicitly (e.g., through
views or purchases). In contrast to traditional matrix factorization
approaches, Poisson factorization implicitly models each user's limited
attention to consume items. Moreover, because of the mathematical form of the
Poisson likelihood, the model needs only to explicitly consider the observed
entries in the matrix, leading to both scalable computation and good predictive
performance. We develop a variational inference algorithm for approximate
posterior inference that scales up to massive data sets. This is an efficient
algorithm that iterates over the observed entries and adjusts an approximate
posterior over the user/item representations. We apply our method to large
real-world user data containing users rating movies, users listening to songs,
and users reading scientific papers. In all these settings, Bayesian Poisson
factorization outperforms state-of-the-art matrix factorization methods.
| Prem Gopalan, Jake M. Hofman, David M. Blei | null | 1311.1704 | null | null |
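As a rough illustration of why the Poisson likelihood lets the updates touch only the observed (nonzero) entries, here is a maximum-likelihood Poisson matrix factorization via multiplicative updates (equivalent to KL-NMF); it is not the paper's Bayesian variational algorithm, and all variable names are assumptions.

```python
import numpy as np

def poisson_mf(rows, cols, vals, n_users, n_items, k=20, n_iter=50, seed=0):
    """Maximum-likelihood Poisson matrix factorization via multiplicative
    updates; the loop touches only the nonzero entries (rows, cols, vals)
    plus cheap per-factor column sums."""
    rng = np.random.default_rng(seed)
    theta = rng.gamma(1.0, 1.0, size=(n_users, k))    # user preferences
    beta = rng.gamma(1.0, 1.0, size=(n_items, k))     # item attributes
    for _ in range(n_iter):
        # update theta: numerator sums over each user's observed items
        rate = np.einsum('nk,nk->n', theta[rows], beta[cols]) + 1e-12
        w = vals / rate                               # y_ui / (theta_u . beta_i)
        num_t = np.zeros_like(theta)
        np.add.at(num_t, rows, w[:, None] * beta[cols])
        theta *= num_t / (beta.sum(axis=0, keepdims=True) + 1e-12)
        # update beta: numerator sums over each item's observed users
        rate = np.einsum('nk,nk->n', theta[rows], beta[cols]) + 1e-12
        w = vals / rate
        num_b = np.zeros_like(beta)
        np.add.at(num_b, cols, w[:, None] * theta[rows])
        beta *= num_b / (theta.sum(axis=0, keepdims=True) + 1e-12)
    return theta, beta
```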
Stochastic blockmodel approximation of a graphon: Theory and consistent
estimation | stat.ME cs.LG cs.SI physics.data-an stat.ML | Non-parametric approaches for analyzing network data based on exchangeable
graph models (ExGM) have recently gained interest. The key object that defines
an ExGM is often referred to as a graphon. This non-parametric perspective on
network modeling poses challenging questions on how to make inference on the
graphon underlying observed network data. In this paper, we propose a
computationally efficient procedure to estimate a graphon from a set of
observed networks generated from it. This procedure is based on a stochastic
blockmodel approximation (SBA) of the graphon. We show that, by approximating
the graphon with a stochastic block model, the graphon can be consistently
estimated, that is, the estimation error vanishes as the size of the graph
approaches infinity.
| Edoardo M Airoldi, Thiago B Costa, Stanley H Chan | null | 1311.1731 | null | null |
Exploring Deep and Recurrent Architectures for Optimal Control | cs.LG cs.AI cs.NE cs.RO cs.SY | Sophisticated multilayer neural networks have achieved state of the art
results on multiple supervised tasks. However, successful applications of such
multilayer networks to control have so far been limited largely to the
perception portion of the control pipeline. In this paper, we explore the
application of deep and recurrent neural networks to a continuous,
high-dimensional locomotion task, where the network is used to represent a
control policy that maps the state of the system (represented by joint angles)
directly to the torques at each joint. By using a recent reinforcement learning
algorithm called guided policy search, we can successfully train neural network
controllers with thousands of parameters, allowing us to compare a variety of
architectures. We discuss the differences between the locomotion control task
and previous supervised perception tasks, present experimental results
comparing various architectures, and discuss future directions in the
application of techniques from deep learning to the problem of optimal control.
| Sergey Levine | null | 1311.1761 | null | null |
Learned-Norm Pooling for Deep Feedforward and Recurrent Neural Networks | cs.NE cs.LG stat.ML | In this paper we propose and investigate a novel nonlinear unit, called $L_p$
unit, for deep neural networks. The proposed $L_p$ unit receives signals from
several projections of a subset of units in the layer below and computes a
normalized $L_p$ norm. We notice two interesting interpretations of the $L_p$
unit. First, the proposed unit can be understood as a generalization of a
number of conventional pooling operators such as average, root-mean-square and
max pooling widely used in, for instance, convolutional neural networks (CNN),
HMAX models and neocognitrons. Furthermore, the $L_p$ unit is, to a certain
degree, similar to the recently proposed maxout unit (Goodfellow et al., 2013)
which achieved the state-of-the-art object recognition results on a number of
benchmark datasets. Secondly, we provide a geometrical interpretation of the
activation function based on which we argue that the $L_p$ unit is more
efficient at representing complex, nonlinear separating boundaries. Each $L_p$
unit defines a superelliptic boundary, with its exact shape defined by the
order $p$. We claim that this makes it possible to model arbitrarily shaped,
curved boundaries more efficiently by combining a few $L_p$ units of different
orders. This insight justifies the need for learning different orders for each
unit in the model. We empirically evaluate the proposed $L_p$ units on a number
of datasets and show that multilayer perceptrons (MLP) consisting of the $L_p$
units achieve the state-of-the-art results on a number of benchmark datasets.
Furthermore, we evaluate the proposed $L_p$ unit on the recently proposed deep
recurrent neural networks (RNN).
| Caglar Gulcehre, Kyunghyun Cho, Razvan Pascanu and Yoshua Bengio | null | 1311.1780 | null | null |
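A minimal sketch of the $L_p$ unit described above: the inputs are projected through several weight vectors and a normalized $L_p$ norm of the projections is returned. The mean-based normalization and the stabilizing epsilon are assumptions; in the paper the order $p$ is a learned per-unit parameter.

```python
import numpy as np

def lp_unit(x, W, p, eps=1e-8):
    """A single L_p unit: project the input x through the columns of W and
    return the normalized L_p norm of the projections."""
    z = W.T @ x                                    # projections of the layer below
    return (np.mean(np.abs(z) ** p) + eps) ** (1.0 / p)

# p = 1 recovers average pooling of |z|, p = 2 root-mean-square pooling, and
# large p approaches max pooling over the projections.
x = np.random.randn(16)
W = np.random.randn(16, 4)
print([lp_unit(x, W, p) for p in (1.0, 2.0, 10.0)])
```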
Optimization, Learning, and Games with Predictable Sequences | cs.LG cs.GT | We provide several applications of Optimistic Mirror Descent, an online
learning algorithm based on the idea of predictable sequences. First, we
recover the Mirror Prox algorithm for offline optimization, prove an extension
to Holder-smooth functions, and apply the results to saddle-point type
problems. Next, we prove that a version of Optimistic Mirror Descent (which has
a close relation to the Exponential Weights algorithm) can be used by two
strongly-uncoupled players in a finite zero-sum matrix game to converge to the
minimax equilibrium at the rate of O((log T)/T). This addresses a question of
Daskalakis et al. (2011). Further, we consider a partial information version of
the problem. We then apply the results to convex programming and exhibit a
simple algorithm for the approximate Max Flow problem.
| Alexander Rakhlin and Karthik Sridharan | null | 1311.1869 | null | null |
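A sketch of the Exponential-Weights-style Optimistic Mirror Descent played by two uncoupled players in a zero-sum matrix game, assuming a constant step size for simplicity (the O((log T)/T) guarantee relies on the paper's specific step-size choice); all names are illustrative.

```python
import numpy as np

def softmax(v):
    w = np.exp(v - v.max())
    return w / w.sum()

def optimistic_hedge_game(A, T=2000, eta=0.1):
    """Both players run optimistic exponential weights on the zero-sum game with
    payoff matrix A (row player minimizes x^T A y, column player maximizes)."""
    n, m = A.shape
    Lx, Ly = np.zeros(n), np.zeros(m)   # cumulative losses
    lx, ly = np.zeros(n), np.zeros(m)   # last losses, used as predictions
    avg_x, avg_y = np.zeros(n), np.zeros(m)
    for _ in range(T):
        x = softmax(-eta * (Lx + lx))   # play against cumulative + predicted loss
        y = softmax(-eta * (Ly + ly))
        lx = A @ y                      # row player's loss vector
        ly = -A.T @ x                   # column player's loss vector (maximizer)
        Lx += lx
        Ly += ly
        avg_x += x
        avg_y += y
    return avg_x / T, avg_y / T         # averaged strategies approximate the minimax pair
```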
Moment-based Uniform Deviation Bounds for $k$-means and Friends | cs.LG stat.ML | Suppose $k$ centers are fit to $m$ points by heuristically minimizing the
$k$-means cost; what is the corresponding fit over the source distribution?
This question is resolved here for distributions with $p\geq 4$ bounded
moments; in particular, the difference between the sample cost and distribution
cost decays with $m$ and $p$ as $m^{\min\{-1/4, -1/2+2/p\}}$. The essential
technical contribution is a mechanism to uniformly control deviations in the
face of unbounded parameter sets, cost functions, and source distributions. To
further demonstrate this mechanism, a soft clustering variant of $k$-means cost
is also considered, namely the log likelihood of a Gaussian mixture, subject to
the constraint that all covariance matrices have bounded spectrum. Lastly, a
rate with refined constants is provided for $k$-means instances possessing some
cluster structure.
| Matus Telgarsky, Sanjoy Dasgupta | null | 1311.1903 | null | null |
Constructing Time Series Shape Association Measures: Minkowski Distance
and Data Standardization | cs.LG | It is surprising that over the last two decades many works in time series data
mining and clustering have been concerned with measures of similarity of time
series but not with measures of association that can be used for measuring
possible direct and inverse relationships between time series. Inverse
relationships can exist between the dynamics of prices and sales volumes,
between growth patterns of competing companies, between well production data in
oilfields, between wind velocity and air pollution concentration, etc. The paper develops a theoretical
basis for analysis and construction of time series shape association measures.
Starting from the axioms of time series shape association measures it studies
the methods of construction of measures satisfying these axioms. Several
general methods of construction of such measures suitable for measuring time
series shape similarity and shape association are proposed. Time series shape
association measures based on Minkowski distance and data standardization
methods are considered. The cosine similarity and Pearson's correlation
coefficient are obtained as particular cases of the proposed general methods
that can be used also for construction of new association measures in data
analysis.
| Ildar Batyrshin | null | 1311.1958 | null | null |
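A hypothetical illustration of the general construction: a shape association measure built from a Minkowski distance on z-standardized series and rescaled to [-1, 1]. The specific rescaling is an assumption, not necessarily one of the paper's exact measures, but with p = 2 it is monotonically related to Pearson's correlation.

```python
import numpy as np

def z_standardize(x):
    """Remove level and scale so that only the shape of the series remains."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / (x.std() + 1e-12)

def shape_association(x, y, p=2.0):
    """Shape association in [-1, 1]: positive values indicate a direct
    relationship between the shapes, negative values an inverse one."""
    u, v = z_standardize(x), z_standardize(y)
    d_pos = np.sum(np.abs(u - v) ** p) ** (1.0 / p)   # distance to y's shape
    d_neg = np.sum(np.abs(u + v) ** p) ** (1.0 / p)   # distance to the inverted shape
    return (d_neg - d_pos) / (d_neg + d_pos + 1e-12)

t = np.linspace(0, 4 * np.pi, 200)
print(shape_association(np.sin(t), np.sin(t)),        # close to +1
      shape_association(np.sin(t), -np.sin(t)))       # close to -1
```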
Risk-sensitive Reinforcement Learning | cs.LG | We derive a family of risk-sensitive reinforcement learning methods for
agents, who face sequential decision-making tasks in uncertain environments. By
applying a utility function to the temporal difference (TD) error, nonlinear
transformations are effectively applied not only to the received rewards but
also to the true transition probabilities of the underlying Markov decision
process. When appropriate utility functions are chosen, the agents' behaviors
express key features of human behavior as predicted by prospect theory
(Kahneman and Tversky, 1979), for example different risk-preferences for gains
and losses as well as the shape of subjective probability curves. We derive a
risk-sensitive Q-learning algorithm, which is necessary for modeling human
behavior when transition probabilities are unknown, and prove its convergence.
As a proof of principle for the applicability of the new framework we apply it
to quantify human behavior in a sequential investment task. We find that the
risk-sensitive variant provides a significantly better fit to the behavioral
data and that it leads to an interpretation of the subject's responses which is
indeed consistent with prospect theory. The analysis of simultaneously measured
fMRI signals shows a significant correlation of the risk-sensitive TD error with
BOLD signal change in the ventral striatum. In addition we find a significant
correlation of the risk-sensitive Q-values with neural activity in the
striatum, cingulate cortex and insula, which is not present if standard
Q-values are used.
| Yun Shen, Michael J. Tobia, Tobias Sommer, Klaus Obermayer | 10.1162/NECO_a_00600 | 1311.2097 | null | null |
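A minimal sketch of the risk-sensitive Q-learning update, in which a utility function is applied to the temporal-difference error before the update; the prospect-theory-like utility and its parameter values are illustrative assumptions, not the exact function fitted in the paper.

```python
import numpy as np

def prospect_utility(delta, alpha=0.88, lam=2.25):
    """Utility applied to the TD error: concave for gains, convex and steeper
    for losses (parameter values are only illustrative)."""
    return np.where(delta >= 0,
                    np.abs(delta) ** alpha,
                    -lam * np.abs(delta) ** alpha)

def risk_sensitive_q_update(Q, s, a, r, s_next, lr=0.1, gamma=0.95):
    """One step of risk-sensitive Q-learning: the utility transforms the TD
    error, which implicitly reweights rewards and transition probabilities."""
    td = r + gamma * np.max(Q[s_next]) - Q[s, a]
    Q[s, a] += lr * prospect_utility(td)
    return Q
```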
Curvature and Optimal Algorithms for Learning and Minimizing Submodular
Functions | cs.DS cs.DM cs.LG | We investigate three related and important problems connected to machine
learning: approximating a submodular function everywhere, learning a submodular
function (in a PAC-like setting [53]), and constrained minimization of
submodular functions. We show that the complexity of all three problems depends
on the 'curvature' of the submodular function, and provide lower and upper
bounds that refine and improve previous results [3, 16, 18, 52]. Our proof
techniques are fairly generic. We either use a black-box transformation of the
function (for approximation and learning), or a transformation of algorithms to
use an appropriate surrogate function (for minimization). Curiously, curvature
has been known to influence approximations for submodular maximization [7, 55],
but its effect on minimization, approximation and learning has hitherto been
open. We complete this picture, and also support our theoretical claims by
empirical results.
| Rishabh Iyer, Stefanie Jegelka and Jeff Bilmes | null | 1311.2110 | null | null |
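For reference, the bounds above depend on the standard notion of total curvature of a nonnegative monotone submodular function, which can be computed as in the small sketch below (the usual normalization is assumed).

```python
import math

def total_curvature(f, V):
    """Total curvature kappa_f = 1 - min_j [f(V) - f(V \\ {j})] / f({j}) of a
    nonnegative monotone submodular f over ground set V; kappa = 0 for modular
    functions, kappa = 1 for fully curved ones such as matroid rank."""
    V = set(V)
    fV = f(V)
    return 1.0 - min((fV - f(V - {j})) / f({j}) for j in V)

# Example: a concave-over-modular function, f(S) = sqrt(|S|).
print(total_curvature(lambda S: math.sqrt(len(S)), range(10)))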
Fast large-scale optimization by unifying stochastic gradient and
quasi-Newton methods | cs.LG | We present an algorithm for minimizing a sum of functions that combines the
computational efficiency of stochastic gradient descent (SGD) with the second
order curvature information leveraged by quasi-Newton methods. We unify these
disparate approaches by maintaining an independent Hessian approximation for
each contributing function in the sum. We maintain computational tractability
and limit memory requirements even for high dimensional optimization problems
by storing and manipulating these quadratic approximations in a shared, time
evolving, low dimensional subspace. Each update step requires only a single
contributing function or minibatch evaluation (as in SGD), and each step is
scaled using an approximate inverse Hessian, and little to no adjustment of
hyperparameters is required (as is typical for quasi-Newton methods). This
algorithm contrasts with earlier stochastic second order techniques that treat
the Hessian of each contributing function as a noisy approximation to the full
Hessian, rather than as a target for direct estimation. We experimentally
demonstrate improved convergence on seven diverse optimization problems. The
algorithm is released as open source Python and MATLAB packages.
| Jascha Sohl-Dickstein, Ben Poole, Surya Ganguli | null | 1311.2115 | null | null |
A Structured Prediction Approach for Missing Value Imputation | cs.LG | Missing value imputation is an important practical problem. There is a large
body of work on it, but there does not exist any work that formulates the
problem in a structured output setting. Also, most applications have
constraints on the imputed data, for example on the distribution associated
with each variable. None of the existing imputation methods use these
constraints. In this paper we propose a structured output approach for missing
value imputation that also incorporates domain constraints. We focus on large
margin models, but it is easy to extend the ideas to probabilistic models. We
deal with the intractable inference step in learning via a piecewise training
technique that is simple, efficient, and effective. Comparison with existing
state-of-the-art and baseline imputation methods shows that our method gives
significantly improved performance on the Hamming loss measure.
| Rahul Kidambi, Vinod Nair, Sundararajan Sellamanickam, S. Sathiya
Keerthi | null | 1311.2137 | null | null |
Large Margin Semi-supervised Structured Output Learning | cs.LG | In structured output learning, obtaining labelled data for real-world
applications is usually costly, while unlabelled examples are available in
abundance. Semi-supervised structured classification has been developed to
handle large amounts of unlabelled structured data. In this work, we consider
semi-supervised structural SVMs with domain constraints. The optimization
problem, which in general is not convex, contains the loss terms associated
with the labelled and unlabelled examples along with the domain constraints. We
propose a simple optimization approach, which alternates between solving a
supervised learning problem and a constraint matching problem. Solving the
constraint matching problem is difficult for structured prediction, and we
propose an efficient and effective hill-climbing method to solve it. The
alternating optimization is carried out within a deterministic annealing
framework, which helps in effective constraint matching, and avoiding local
minima which are not very useful. The algorithm is simple to implement and
achieves comparable generalization performance on benchmark datasets.
| P. Balamurugan, Shirish Shevade, Sundararajan Sellamanickam | null | 1311.2139 | null | null |
Pattern-Coupled Sparse Bayesian Learning for Recovery of Block-Sparse
Signals | cs.IT cs.LG math.IT stat.ML | We consider the problem of recovering block-sparse signals whose structures
are unknown \emph{a priori}. Block-sparse signals with nonzero coefficients
occurring in clusters arise naturally in many practical scenarios. However, the
knowledge of the block structure is usually unavailable in practice. In this
paper, we develop a new sparse Bayesian learning method for recovery of
block-sparse signals with unknown cluster patterns. Specifically, a
pattern-coupled hierarchical Gaussian prior model is introduced to characterize
the statistical dependencies among coefficients, in which a set of
hyperparameters are employed to control the sparsity of signal coefficients.
Unlike the conventional sparse Bayesian learning framework in which each
individual hyperparameter is associated independently with each coefficient, in
this paper, the prior for each coefficient not only involves its own
hyperparameter, but also the hyperparameters of its immediate neighbors. In
this way, the sparsity patterns of neighboring coefficients are related
to each other and the hierarchical model has the potential to encourage
structured-sparse solutions. The hyperparameters, along with the sparse signal,
are learned by maximizing their posterior probability via an
expectation-maximization (EM) algorithm. Numerical results show that the
proposed algorithm presents uniform superiority over other existing methods in
a series of experiments.
| Jun Fang, Yanning Shen, Hongbin Li (IEEE), and Pu Wang | null | 1311.2150 | null | null |
FuSSO: Functional Shrinkage and Selection Operator | stat.ML cs.LG math.ST stat.TH | We present the FuSSO, a functional analogue to the LASSO, that efficiently
finds a sparse set of functional input covariates to regress a real-valued
response against. The FuSSO does so in a semi-parametric fashion, making no
parametric assumptions about the nature of input functional covariates and
assuming a linear form to the mapping of functional covariates to the response.
We provide a statistical backing for use of the FuSSO via proof of asymptotic
sparsistency under various conditions. Furthermore, we observe good results on
both synthetic and real-world data.
| Junier B. Oliva, Barnabas Poczos, Timothy Verstynen, Aarti Singh, Jeff
Schneider, Fang-Cheng Yeh, Wen-Yih Tseng | null | 1311.2234 | null | null |
Fast Distribution To Real Regression | stat.ML cs.LG math.ST stat.TH | We study the problem of distribution to real-value regression, where one aims
to regress a mapping $f$ that takes in a distribution input covariate $P\in
\mathcal{I}$ (for a non-parametric family of distributions $\mathcal{I}$) and
outputs a real-valued response $Y=f(P) + \epsilon$. This setting was recently
studied, and a "Kernel-Kernel" estimator was introduced and shown to have a
polynomial rate of convergence. However, evaluating a new prediction with the
Kernel-Kernel estimator scales as $\Omega(N)$. This causes the difficult
situation where a large amount of data may be necessary for a low estimation
risk, but the computation cost of estimation becomes infeasible when the
data-set is too large. To this end, we propose the Double-Basis estimator,
which looks to alleviate this big data problem in two ways: first, the
Double-Basis estimator is shown to have a computation complexity that is
independent of the number of instances $N$ when evaluating new predictions
after training; secondly, the Double-Basis estimator is shown to have a fast
rate of convergence for a general class of mappings $f\in\mathcal{F}$.
| Junier B. Oliva, Willie Neiswanger, Barnabas Poczos, Jeff Schneider,
Eric Xing | null | 1311.2236 | null | null |
Semantic Sort: A Supervised Approach to Personalized Semantic
Relatedness | cs.CL cs.LG | We propose and study a novel supervised approach to learning statistical
semantic relatedness models from subjectively annotated training examples. The
proposed semantic model consists of parameterized co-occurrence statistics
associated with textual units of a large background knowledge corpus. We
present an efficient algorithm for learning such semantic models from a
training sample of relatedness preferences. Our method is corpus independent
and can essentially rely on any sufficiently large (unstructured) collection of
coherent texts. Moreover, the approach facilitates the fitting of semantic
models for specific users or groups of users. We present the results of
an extensive range of experiments from small to large scale, indicating that the
proposed method is effective and competitive with the state-of-the-art.
| Ran El-Yaniv and David Yanay | null | 1311.2252 | null | null |
More data speeds up training time in learning halfspaces over sparse
vectors | cs.LG | The increased availability of data in recent years has led several authors to
ask whether it is possible to use data as a {\em computational} resource. That
is, if more data is available, beyond the sample complexity limit, is it
possible to use the extra examples to speed up the computation time required to
perform the learning task?
We give the first positive answer to this question for a {\em natural
supervised learning problem} --- we consider agnostic PAC learning of
halfspaces over $3$-sparse vectors in $\{-1,1,0\}^n$. This class is
inefficiently learnable using $O\left(n/\epsilon^2\right)$ examples. Our main
contribution is a novel, non-cryptographic, methodology for establishing
computational-statistical gaps, which allows us to show that, under a widely
believed assumption that refuting random $\mathrm{3CNF}$ formulas is hard, it
is impossible to efficiently learn this class using only
$O\left(n/\epsilon^2\right)$ examples. We further show that under stronger
hardness assumptions, even $O\left(n^{1.499}/\epsilon^2\right)$ examples do not
suffice. On the other hand, we show a new algorithm that learns this class
efficiently using $\tilde{\Omega}\left(n^2/\epsilon^2\right)$ examples. This
formally establishes the tradeoff between sample and computational complexity
for a natural supervised learning problem.
| Amit Daniely, Nati Linial, Shai Shalev Shwartz | null | 1311.2271 | null | null |
From average case complexity to improper learning complexity | cs.LG cs.CC | The basic problem in the PAC model of computational learning theory is to
determine which hypothesis classes are efficiently learnable. There is
presently a dearth of results showing hardness of learning problems. Moreover,
the existing lower bounds fall short of the best known algorithms.
The biggest challenge in proving complexity results is to establish hardness
of {\em improper learning} (a.k.a. representation independent learning). The
difficulty in proving lower bounds for improper learning is that the standard
reductions from $\mathbf{NP}$-hard problems do not seem to apply in this
context. There is essentially only one known approach to proving lower bounds
on improper learning. It was initiated in (Kearns and Valiant 89) and relies on
cryptographic assumptions.
We introduce a new technique for proving hardness of improper learning, based
on reductions from problems that are hard on average. We put forward a (fairly
strong) generalization of Feige's assumption (Feige 02) about the complexity of
refuting random constraint satisfaction problems. Combining this assumption
with our new technique yields far reaching implications. In particular,
1. Learning $\mathrm{DNF}$'s is hard.
2. Agnostically learning halfspaces with a constant approximation ratio is
hard.
3. Learning an intersection of $\omega(1)$ halfspaces is hard.
| Amit Daniely, Nati Linial, Shai Shalev-Shwartz | null | 1311.2272 | null | null |
A Quantitative Evaluation Framework for Missing Value Imputation
Algorithms | cs.LG | We consider the problem of quantitatively evaluating missing value imputation
algorithms. Given a dataset with missing values and a choice of several
imputation algorithms to fill them in, there is currently no principled way to
rank the algorithms using a quantitative metric. We develop a framework based
on treating imputation evaluation as a problem of comparing two distributions
and show how it can be used to compute quantitative metrics. We present an
efficient procedure for applying this framework to practical datasets,
demonstrate several metrics derived from the existing literature on comparing
distributions, and propose a new metric called Neighborhood-based Dissimilarity
Score which is fast to compute and provides similar results. Results are shown
on several datasets, metrics, and imputation algorithms.
| Vinod Nair, Rahul Kidambi, Sundararajan Sellamanickam, S. Sathiya
Keerthi, Johannes Gehrke, Vijay Narayanan | null | 1311.2276 | null | null |
Embed and Conquer: Scalable Embeddings for Kernel k-Means on MapReduce | cs.LG | The kernel $k$-means is an effective method for data clustering which extends
the commonly-used $k$-means algorithm to work on a similarity matrix over
complex data structures. The kernel $k$-means algorithm is however
computationally very complex as it requires the complete data matrix to be
calculated and stored. Further, the kernelized nature of the kernel $k$-means
algorithm hinders the parallelization of its computations on modern
infrastructures for distributed computing. In this paper, we define a
family of kernel-based low-dimensional embeddings that allows for scaling
kernel $k$-means on MapReduce via an efficient and unified parallelization
strategy. Afterwards, we propose two methods for low-dimensional embedding that
adhere to our definition of the embedding family. Exploiting the proposed
parallelization strategy, we present two scalable MapReduce algorithms for
kernel $k$-means. We demonstrate the effectiveness and efficiency of the
proposed algorithms through an empirical evaluation on benchmark data sets.
| Ahmed Elgohary, Ahmed K. Farahat, Mohamed S. Kamel, Fakhri Karray | null | 1311.2334 | null | null |
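One plausible instance of such a kernel-based low-dimensional embedding is a Nystrom feature map followed by ordinary k-means. The sketch below is a single-machine illustration under that assumption; it does not reproduce the paper's embedding family or its MapReduce parallelization.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.cluster.vq import kmeans2

def nystrom_embedding(X, kernel, n_landmarks=200, seed=0):
    """Map each point to a low-dimensional feature vector whose inner products
    approximate the kernel, so that plain k-means on the embedding
    approximates kernel k-means."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=min(n_landmarks, len(X)), replace=False)
    L = X[idx]
    W = kernel(L, L)                               # landmark-landmark kernel
    C = kernel(X, L)                               # data-landmark kernel
    evals, evecs = np.linalg.eigh(W)
    evals = np.clip(evals, 1e-10, None)
    return C @ evecs / np.sqrt(evals)              # approximate feature map

rbf = lambda A, B: np.exp(-cdist(A, B, 'sqeuclidean') / 2.0)
X = np.random.randn(1000, 5)
Z = nystrom_embedding(X, rbf, n_landmarks=100)
_, labels = kmeans2(Z, k=10, minit='++', seed=1)
```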
An Empirical Evaluation of Sequence-Tagging Trainers | cs.LG | The task of assigning label sequences to a set of observed sequences is
common in computational linguistics. Several models for sequence labeling have
been proposed over the last few years. Here, we focus on discriminative models
for sequence labeling. Many batch and online (updating model parameters after
visiting each example) learning algorithms have been proposed in the
literature. On large datasets, online algorithms are preferred as batch
learning methods are slow. These online algorithms were designed to solve
either a primal or a dual problem. However, there has been no systematic
comparison of these algorithms in terms of their speed, generalization
performance (accuracy/likelihood) and their ability to achieve steady state
generalization performance fast. With this aim, we compare different algorithms
and make recommendations, useful for a practitioner. We conclude that the
selection of an algorithm for sequence labeling depends on the evaluation
criterion used and its implementation simplicity.
| P. Balamurugan, Shirish Shevade, S. Sundararajan and S. S Keerthi | null | 1311.2378 | null | null |
Global Sensitivity Analysis with Dependence Measures | math.ST cs.LG stat.ML stat.TH | Global sensitivity analysis with variance-based measures suffers from several
theoretical and practical limitations, since they focus only on the variance of
the output and handle multivariate variables in a limited way. In this paper,
we introduce a new class of sensitivity indices based on dependence measures
which overcomes these insufficiencies. Our approach originates from the idea to
compare the output distribution with its conditional counterpart when one of
the input variables is fixed. We establish that this comparison yields
previously proposed indices when it is performed with Csiszar f-divergences, as
well as sensitivity indices which are well-known dependence measures between
random variables. This leads us to investigate completely new sensitivity
indices based on recent state-of-the-art dependence measures, such as distance
correlation and the Hilbert-Schmidt independence criterion. We also emphasize
the potential of feature selection techniques relying on such dependence
measures as alternatives to screening in high dimension.
| S\'ebastien Da Veiga (IFPEN, - M\'ethodes d'Analyse Stochastique des
Codes et Traitements Num\'eriques) | null | 1311.2483 | null | null |
The Noisy Power Method: A Meta Algorithm with Applications | cs.DS cs.LG | We provide a new robust convergence analysis of the well-known power method
for computing the dominant singular vectors of a matrix that we call the noisy
power method. Our result characterizes the convergence behavior of the
algorithm when a significant amount of noise is introduced after each
matrix-vector multiplication. The noisy power method can be seen as a
meta-algorithm that has recently found a number of important applications in a
broad range of machine learning problems including alternating minimization for
matrix completion, streaming principal component analysis (PCA), and
privacy-preserving spectral analysis. Our general analysis subsumes several
existing ad-hoc convergence bounds and resolves a number of open problems in
multiple applications including streaming PCA and privacy-preserving singular
vector computation.
| Moritz Hardt and Eric Price | null | 1311.2495 | null | null |
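A minimal sketch of the noisy power method template: subspace iteration with re-orthonormalization, with noise injected after every matrix multiplication (assumed Gaussian here purely for illustration).

```python
import numpy as np

def noisy_power_method(A, k, n_iter=100, noise_scale=0.0, seed=0):
    """Subspace (power) iteration for the top-k eigenvectors of a symmetric
    matrix A, with noise added after each matrix multiplication."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    X, _ = np.linalg.qr(rng.standard_normal((n, k)))
    for _ in range(n_iter):
        Y = A @ X + noise_scale * rng.standard_normal((n, k))  # noisy product
        X, _ = np.linalg.qr(Y)                                  # re-orthonormalize
    return X

# noise_scale = 0 recovers the classical power method; a nonzero value mimics,
# e.g., the noise added for privacy-preserving spectral analysis.
```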
Predictable Feature Analysis | cs.LG stat.ML | Every organism in an environment, whether biological, robotic or virtual,
must be able to predict certain aspects of its environment in order to survive
or perform whatever task is intended. It needs a model that is capable of
estimating the consequences of possible actions, so that planning, control, and
decision-making become feasible. For scientific purposes, such models are
usually created in a problem specific manner using differential equations and
other techniques from control- and system-theory. In contrast to that, we aim
for an unsupervised approach that builds up the desired model in a
self-organized fashion. Inspired by Slow Feature Analysis (SFA), our approach
is to extract sub-signals from the input, that behave as predictable as
possible. These "predictable features" are highly relevant for modeling,
because predictability is a desired property of the needed
consequence-estimating model by definition. In our approach, we measure
predictability with respect to a certain prediction model. We focus here on the
solution of the arising optimization problem and present a tractable algorithm
based on algebraic methods which we call Predictable Feature Analysis (PFA). We
prove that the algorithm finds the globally optimal signal, if this signal can
be predicted with low error. To deal with cases where the optimal signal has a
significant prediction error, we provide a robust, heuristically motivated
variant of the algorithm and verify it empirically. Additionally, we give
formal criteria a prediction-model must meet to be suitable for measuring
predictability in the PFA setting and also provide a suitable default-model
along with a formal proof that it meets these criteria.
| Stefan Richthofer, Laurenz Wiskott | null | 1311.2503 | null | null |
Learning Mixtures of Linear Classifiers | cs.LG stat.ML | We consider a discriminative learning (regression) problem, whereby the
regression function is a convex combination of k linear classifiers. Existing
approaches are based on the EM algorithm, or similar techniques, without
provable guarantees. We develop a simple method based on spectral techniques
and a `mirroring' trick, that discovers the subspace spanned by the
classifiers' parameter vectors. Under a probabilistic assumption on the feature
vector distribution, we prove that this approach has nearly optimal statistical
efficiency.
| Yuekai Sun, Stratis Ioannidis, Andrea Montanari | null | 1311.2547 | null | null |
DinTucker: Scaling up Gaussian process models on multidimensional arrays
with billions of elements | cs.LG cs.DC stat.ML | Infinite Tucker Decomposition (InfTucker) and random function prior models,
as nonparametric Bayesian models on infinite exchangeable arrays, are more
powerful models than widely-used multilinear factorization methods including
Tucker and PARAFAC decomposition, (partly) due to their capability of modeling
nonlinear relationships between array elements. Despite their great predictive
performance and sound theoretical foundations, they cannot handle massive data
due to a prohibitively high training time. To overcome this limitation, we
present Distributed Infinite Tucker (DINTUCKER), a large-scale nonlinear tensor
decomposition algorithm on MAPREDUCE. While maintaining the predictive accuracy
of InfTucker, it is scalable on massive data. DINTUCKER is based on a new
hierarchical Bayesian model that enables local training of InfTucker on
subarrays and information integration from all local training results. We use
distributed stochastic gradient descent, coupled with variational inference, to
train this model. We apply DINTUCKER to multidimensional arrays with billions
of elements from applications in the "Read the Web" project (Carlson et al.,
2010) and in information security and compare it with the state-of-the-art
large-scale tensor decomposition method, GigaTensor. On both datasets,
DINTUCKER achieves significantly higher prediction accuracy with less
computational time.
| Shandian Zhe and Yuan Qi and Youngja Park and Ian Molloy and Suresh
Chari | null | 1311.2663 | null | null |
Sampling Based Approaches to Handle Imbalances in Network Traffic
Dataset for Machine Learning Techniques | cs.NI cs.CR cs.LG | Network traffic data is huge, varying and imbalanced because various classes
are not equally distributed. Machine learning (ML) algorithms for traffic
analysis use samples from this data for training as well as to recommend the
actions to be taken by network administrators. Due to imbalances in the
dataset, it is difficult to train machine learning algorithms for traffic
analysis, and these may give biased or false results, leading to serious
degradation in the performance of these algorithms. Various techniques can be
applied during sampling to minimize the effect of imbalanced instances. In this
paper, various sampling techniques have been analysed in order to compare the
reduction in the imbalance of network traffic datasets sampled for these
algorithms. Various parameters, such as missing classes in samples and the
probability of sampling of the different instances, have been considered for
comparison.
| Raman Singh, Harish Kumar and R.K. Singla | 10.5121/csit.2013.3704 | 1311.2677 | null | null |
Hypothesis Testing for Automated Community Detection in Networks | stat.ML cs.LG cs.SI math.ST physics.soc-ph stat.TH | Community detection in networks is a key exploratory tool with applications
in a diverse set of areas, ranging from finding communities in social and
biological networks to identifying link farms in the World Wide Web. The
problem of finding communities or clusters in a network has received much
attention from statistics, physics and computer science. However, most
clustering algorithms assume knowledge of the number of clusters k. In this
paper we propose to automatically determine k in a graph generated from a
Stochastic Blockmodel. Our main contribution is twofold: first, we
theoretically establish the limiting distribution of the principal eigenvalue
of the suitably centered and scaled adjacency matrix, and use that distribution
for our hypothesis test. Secondly, we use this test to design a recursive
bipartitioning algorithm. Using quantifiable classification tasks on real world
networks with ground truth, we show that our algorithm outperforms existing
probabilistic models for learning overlapping clusters, and on unlabeled
networks, we show that we uncover nested community structure.
| Peter J. Bickel, Purnamrita Sarkar | null | 1311.2694 | null | null |
Deep neural networks for single channel source separation | cs.NE cs.LG | In this paper, a novel approach for single channel source separation (SCSS)
using a deep neural network (DNN) architecture is introduced. Unlike previous
studies in which DNN and other classifiers were used for classifying
time-frequency bins to obtain hard masks for each source, we use the DNN to
classify estimated source spectra to check for their validity during
separation. In the training stage, the training data for the source signals are
used to train a DNN. In the separation stage, the trained DNN is utilized to
aid in the estimation of each source in the mixed signal. The single channel
source separation problem is formulated as an energy minimization problem where each
source spectra estimate is encouraged to fit the trained DNN model and the
mixed signal spectrum is encouraged to be written as a weighted sum of the
estimated source spectra. The proposed approach works regardless of the energy
scale differences between the source signals in the training and separation
stages. Nonnegative matrix factorization (NMF) is used to initialize the DNN
estimate for each source. The experimental results show that using DNN
initialized by NMF for source separation improves the quality of the separated
signal compared with using NMF for source separation.
| Emad M. Grais, Mehmet Umut Sen, Hakan Erdogan | null | 1311.2746 | null | null |
Aggregation of Affine Estimators | math.ST cs.LG stat.TH | We consider the problem of aggregating a general collection of affine
estimators for fixed design regression. Relevant examples include some commonly
used statistical estimators such as least squares, ridge and robust least
squares estimators. Dalalyan and Salmon (2012) have established that, for this
problem, exponentially weighted (EW) model selection aggregation leads to sharp
oracle inequalities in expectation, but similar bounds in deviation were not
previously known. While results indicate that the same aggregation scheme may
not satisfy sharp oracle inequalities with high probability, we prove a
weaker notion of oracle inequality for EW that holds with high probability.
Moreover, using a generalization of the newly introduced $Q$-aggregation scheme
we also prove sharp oracle inequalities that hold with high probability.
Finally, we apply our results to universal aggregation and show that our
proposed estimator leads simultaneously to all the best known bounds for
aggregation, including $\ell_q$-aggregation, $q \in (0,1)$, with high
probability.
| Dong Dai, Philippe Rigollet, Lucy Xia and Tong Zhang | null | 1311.2799 | null | null |
A PAC-Bayesian bound for Lifelong Learning | stat.ML cs.LG | Transfer learning has received a lot of attention in the machine learning
community over the last years, and several effective algorithms have been
developed. However, relatively little is known about their theoretical
properties, especially in the setting of lifelong learning, where the goal is
to transfer information to tasks for which no data have been observed so far.
In this work we study lifelong learning from a theoretical perspective. Our
main result is a PAC-Bayesian generalization bound that offers a unified view
on existing paradigms for transfer learning, such as the transfer of parameters
or the transfer of low-dimensional representations. We also use the bound to
derive two principled lifelong learning algorithms, and we show that these
yield results comparable with existing methods.
| Anastasia Pentina and Christoph H. Lampert | null | 1311.2838 | null | null |
Spectral Clustering via the Power Method -- Provably | cs.LG cs.NA | Spectral clustering is one of the most important algorithms in data mining
and machine intelligence; however, its computational complexity limits its
application to truly large scale data analysis. The computational bottleneck in
spectral clustering is computing a few of the top eigenvectors of the
(normalized) Laplacian matrix corresponding to the graph representing the data
to be clustered. One way to speed up the computation of these eigenvectors is
to use the "power method" from the numerical linear algebra literature.
Although the power method has been empirically used to speed up spectral
clustering, the theory behind this approach, to the best of our knowledge,
remains unexplored. This paper provides the \emph{first} such rigorous
theoretical justification, arguing that a small number of power iterations
suffices to obtain near-optimal partitionings using the approximate
eigenvectors. Specifically, we prove that solving the $k$-means clustering
problem on the approximate eigenvectors obtained via the power method gives an
additive-error approximation to solving the $k$-means problem on the optimal
eigenvectors.
| Christos Boutsidis and Alex Gittens and Prabhanjan Kambadur | null | 1311.2854 | null | null |
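A sketch of the scheme analyzed above: a few power iterations on the normalized adjacency matrix approximate the top eigenvectors of the normalized Laplacian, and k-means is then run on the result. The row-normalization step and the number of iterations are assumptions chosen for illustration.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def spectral_clustering_power(W, k, n_power_iters=20, seed=0):
    """Approximate spectral clustering of a similarity matrix W: power
    iterations on D^{-1/2} W D^{-1/2} approximate the top-k eigenvectors of the
    normalized Laplacian; k-means is run on the (row-normalized) result."""
    rng = np.random.default_rng(seed)
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(d + 1e-12)
    M = d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]   # normalized adjacency
    X, _ = np.linalg.qr(rng.standard_normal((W.shape[0], k)))
    for _ in range(n_power_iters):
        X, _ = np.linalg.qr(M @ X)                      # power iterations
    X /= np.linalg.norm(X, axis=1, keepdims=True) + 1e-12
    _, labels = kmeans2(X, k, minit='++', seed=seed)
    return labels
```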
Reinforcement Learning for Matrix Computations: PageRank as an Example | cs.LG cs.SI stat.ML | Reinforcement learning has gained wide popularity as a technique for
simulation-driven approximate dynamic programming. A less known aspect is that
the very reasons that make it effective in dynamic programming can also be
leveraged for using it for distributed schemes for certain matrix computations
involving non-negative matrices. In this spirit, we propose a reinforcement
learning algorithm for PageRank computation that is fashioned after analogous
schemes for approximate dynamic programming. The algorithm has the advantage of
ease of distributed implementation and more importantly, of being model-free,
i.e., not dependent on any specific assumptions about the transition
probabilities in the random web-surfer model. We analyze its convergence and
finite time behavior and present some supporting numerical experiments.
| Vivek S. Borkar and Adwaitvedant S. Mathkar | null | 1311.2889 | null | null |
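For context, the quantity being approximated is the stationary distribution of the damped random web-surfer chain. The classical centralized power-iteration computation is sketched below; it is the baseline, not the paper's model-free, distributed reinforcement learning scheme.

```python
import numpy as np

def pagerank_power_iteration(P, damping=0.85, tol=1e-10, max_iter=1000):
    """Classical PageRank under the random web-surfer model: P[i, j] is the
    probability of moving from page i to page j; returns the stationary
    distribution of the damped chain."""
    n = P.shape[0]
    r = np.full(n, 1.0 / n)
    teleport = np.full(n, (1.0 - damping) / n)
    for _ in range(max_iter):
        r_new = teleport + damping * (P.T @ r)   # one step of the damped chain
        if np.abs(r_new - r).sum() < tol:
            break
        r = r_new
    return r_new
```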
The More, the Merrier: the Blessing of Dimensionality for Learning Large
Gaussian Mixtures | cs.LG cs.DS stat.ML | In this paper we show that very large mixtures of Gaussians are efficiently
learnable in high dimension. More precisely, we prove that a mixture with known
identical covariance matrices whose number of components is a polynomial of any
fixed degree in the dimension n is polynomially learnable as long as a certain
non-degeneracy condition on the means is satisfied. It turns out that this
condition is generic in the sense of smoothed complexity, as soon as the
dimensionality of the space is high enough. Moreover, we prove that no such
condition can possibly exist in low dimension and the problem of learning the
parameters is generically hard. In contrast, much of the existing work on
Gaussian Mixtures relies on low-dimensional projections and thus hits an
artificial barrier. Our main result on mixture recovery relies on a new
"Poissonization"-based technique, which transforms a mixture of Gaussians to a
linear map of a product distribution. The problem of learning this map can be
efficiently solved using some recent results on tensor decompositions and
Independent Component Analysis (ICA), thus giving an algorithm for recovering
the mixture. In addition, we combine our low-dimensional hardness results for
Gaussian mixtures with Poissonization to show how to embed difficult instances
of low-dimensional Gaussian mixtures into the ICA setting, thus establishing
exponential information-theoretic lower bounds for underdetermined ICA in low
dimension. To the best of our knowledge, this is the first such result in the
literature. In addition to contributing to the problem of Gaussian mixture
learning, we believe that this work is among the first steps toward better
understanding the rare phenomenon of the "blessing of dimensionality" in the
computational aspects of statistical inference.
| Joseph Anderson, Mikhail Belkin, Navin Goyal, Luis Rademacher, James
Voss | null | 1311.2891 | null | null |
Approximate Inference in Continuous Determinantal Point Processes | stat.ML cs.LG stat.ME | Determinantal point processes (DPPs) are random point processes well-suited
for modeling repulsion. In machine learning, the focus of DPP-based models has
been on diverse subset selection from a discrete and finite base set. This
discrete setting admits an efficient sampling algorithm based on the
eigendecomposition of the defining kernel matrix. Recently, there has been
growing interest in using DPPs defined on continuous spaces. While the
discrete-DPP sampler extends formally to the continuous case, computationally,
the steps required are not tractable in general. In this paper, we present two
efficient DPP sampling schemes that apply to a wide range of kernel functions:
one based on low rank approximations via Nystrom and random Fourier feature
techniques and another based on Gibbs sampling. We demonstrate the utility of
continuous DPPs in repulsive mixture modeling and synthesizing human poses
spanning activity spaces.
| Raja Hafiz Affandi, Emily B. Fox, Ben Taskar | null | 1311.2971 | null | null |
Learning Mixtures of Discrete Product Distributions using Spectral
Decompositions | stat.ML cs.CC cs.IT cs.LG math.IT | We study the problem of learning a distribution from samples, when the
underlying distribution is a mixture of product distributions over discrete
domains. This problem is motivated by several practical applications such as
crowd-sourcing, recommendation systems, and learning Boolean functions. The
existing solutions either heavily rely on the fact that the number of
components in the mixtures is finite or have sample/time complexity that is
exponential in the number of components. In this paper, we introduce a
polynomial time/sample complexity method for learning a mixture of $r$ discrete
product distributions over $\{1, 2, \dots, \ell\}^n$, for general $\ell$ and
$r$. We show that our approach is statistically consistent and further provide
finite sample guarantees.
We use techniques from the recent work on tensor decompositions for
higher-order moment matching. A crucial step in these moment matching methods
is to construct a certain matrix and a certain tensor with low-rank spectral
decompositions. These tensors are typically estimated directly from the
samples. The main challenge in learning mixtures of discrete product
distributions is that these low-rank tensors cannot be obtained directly from
the sample moments. Instead, we reduce the tensor estimation problem to: $a$)
estimating a low-rank matrix using only off-diagonal block elements; and $b$)
estimating a tensor using a small number of linear measurements. Leveraging on
recent developments in matrix completion, we give an alternating minimization
based method to estimate the low-rank matrix, and formulate the tensor
completion problem as a least-squares problem.
| Prateek Jain and Sewoong Oh | null | 1311.2972 | null | null |
Learning Input and Recurrent Weight Matrices in Echo State Networks | cs.LG | Echo State Networks (ESNs) are a special type of the temporally deep network
model, the Recurrent Neural Network (RNN), where the recurrent matrix is
carefully designed and both the recurrent and input matrices are fixed. An ESN
uses the linearity of the activation function of the output units to simplify
the learning of the output matrix. In this paper, we devise a special technique
that takes advantage of this linearity in the output units of an ESN to learn
the input and recurrent matrices. This has not been done in earlier ESNs due to
their well-known difficulty in learning those matrices. Compared to the
technique of BackPropagation Through Time (BPTT) in learning general RNNs, our
proposed method exploits linearity of activation function in the output units
to formulate the relationships amongst the various matrices in an RNN. These
relationships results in the gradient of the cost function having an analytical
form and being more accurate. This would enable us to compute the gradients
instead of obtaining them by recursion as in BPTT. Experimental results on
phone state classification show that learning one or both the input and
recurrent matrices in an ESN yields superior results compared to traditional
ESNs that do not learn these matrices, especially when longer time steps are
used.
| Hamid Palangi, Li Deng, Rabab K Ward | null | 1311.2987 | null | null |
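For background, a basic echo state network with fixed random input and recurrent matrices and a closed-form ridge-regression readout is sketched below; the paper's contribution, learning the input and recurrent matrices through analytical gradients, is not shown, and all hyperparameter values are assumptions.

```python
import numpy as np

def esn_fit(inputs, targets, n_reservoir=200, spectral_radius=0.9,
            ridge=1e-6, seed=0):
    """Train a basic echo state network: random fixed input and recurrent
    matrices, linear output units learned in closed form by ridge regression."""
    rng = np.random.default_rng(seed)
    n_in = inputs.shape[1]
    W_in = rng.uniform(-0.5, 0.5, size=(n_reservoir, n_in))
    W = rng.standard_normal((n_reservoir, n_reservoir))
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))  # echo state scaling
    states = np.zeros((len(inputs), n_reservoir))
    x = np.zeros(n_reservoir)
    for t, u in enumerate(inputs):
        x = np.tanh(W_in @ u + W @ x)          # run the reservoir forward
        states[t] = x
    # closed-form readout thanks to the linear output units
    W_out = np.linalg.solve(states.T @ states + ridge * np.eye(n_reservoir),
                            states.T @ targets)
    return W_in, W, W_out
```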
Informed Source Separation: A Bayesian Tutorial | stat.ML cs.LG | Source separation problems are ubiquitous in the physical sciences; any
situation where signals are superimposed calls for source separation to
estimate the original signals. In this tutorial I will discuss the Bayesian
approach to the source separation problem. This approach has a specific
advantage in that it requires the designer to explicitly describe the signal
model in addition to any other information or assumptions that go into the
problem description. This leads naturally to the idea of informed source
separation, where the algorithm design incorporates relevant information about
the specific problem. This approach promises to enable researchers to design
their own high-quality algorithms that are specifically tailored to the problem
at hand.
| Kevin H. Knuth | null | 1311.3001 | null | null |
Multiple Closed-Form Local Metric Learning for K-Nearest Neighbor
Classifier | cs.LG | Many researches have been devoted to learn a Mahalanobis distance metric,
which can effectively improve the performance of kNN classification. Most
approaches are iterative and computationally expensive, and linear rigidity
still critically limits how well metric learning algorithms can perform. We
propose a computationally economical framework to learn multiple metrics in
closed form.
| Jianbo Ye | null | 1311.3157 | null | null |
Nonparametric Estimation of Multi-View Latent Variable Models | cs.LG stat.ML | Spectral methods have greatly advanced the estimation of latent variable
models, generating a sequence of novel and efficient algorithms with strong
theoretical guarantees. However, current spectral algorithms are largely
restricted to mixtures of discrete or Gaussian distributions. In this paper, we
propose a kernel method for learning multi-view latent variable models,
allowing each mixture component to be nonparametric. The key idea of the method
is to embed the joint distribution of a multi-view latent variable into a
reproducing kernel Hilbert space, and then the latent parameters are recovered
using a robust tensor power method. We establish that the sample complexity for
the proposed method is quadratic in the number of latent components and is a
low order polynomial in the other relevant parameters. Thus, our non-parametric
tensor approach to learning latent variable models enjoys good sample and
computational efficiencies. Moreover, the non-parametric tensor power method
compares favorably to EM algorithm and other existing spectral algorithms in
our experiments.
| Le Song, Animashree Anandkumar, Bo Dai, Bo Xie | null | 1311.3287 | null | null |
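A simplified, non-robust sketch of the tensor power iteration with deflation used to recover latent components from a symmetric third-order moment tensor; the paper's kernel embedding of distributions and its robust variant are not reproduced, and all parameter choices are assumptions.

```python
import numpy as np

def tensor_power_method(T, n_components, n_iters=200, n_restarts=10, seed=0):
    """Recover eigenpairs of a symmetric 3rd-order tensor T (shape d x d x d)
    by power iteration with deflation (a simplified, non-robust variant)."""
    rng = np.random.default_rng(seed)
    d = T.shape[0]
    pairs = []
    for _ in range(n_components):
        best = None
        for _ in range(n_restarts):
            v = rng.standard_normal(d)
            v /= np.linalg.norm(v)
            for _ in range(n_iters):
                v = np.einsum('ijk,j,k->i', T, v, v)    # T(I, v, v)
                v /= np.linalg.norm(v) + 1e-12
            lam = np.einsum('ijk,i,j,k->', T, v, v, v)  # eigenvalue estimate
            if best is None or lam > best[0]:
                best = (lam, v)
        lam, v = best
        pairs.append((lam, v))
        T = T - lam * np.einsum('i,j,k->ijk', v, v, v)  # deflate the found component
    return pairs
```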
Sparse Matrix Factorization | cs.LG stat.ML | We investigate the problem of factorizing a matrix into several sparse
matrices and propose an algorithm for this under randomness and sparsity
assumptions. This problem can be viewed as a simplification of the deep
learning problem where finding a factorization corresponds to finding edges in
different layers and values of hidden units. We prove that under certain
assumptions for a sparse linear deep network with $n$ nodes in each layer, our
algorithm is able to recover the structure of the network and values of top
layer hidden units for depths up to $\tilde O(n^{1/6})$. We further discuss the
relation among sparse matrix factorization, deep learning, sparse recovery and
dictionary learning.
| Behnam Neyshabur, Rina Panigrahy | null | 1311.3315 | null | null |
Anytime Belief Propagation Using Sparse Domains | stat.ML cs.AI cs.LG | Belief Propagation has been widely used for marginal inference, however it is
slow on problems with large-domain variables and high-order factors. Previous
work provides useful approximations to facilitate inference on such models, but
lacks important anytime properties such as: 1) providing accurate and
consistent marginals when stopped early, 2) improving the approximation when
run longer, and 3) converging to the fixed point of BP. To this end, we propose
a message passing algorithm that works on sparse (partially instantiated)
domains, and converges to consistent marginals using dynamic message
scheduling. The algorithm grows the sparse domains incrementally, selecting the
next value to add using prioritization schemes based on the gradients of the
marginal inference objective. Our experiments demonstrate local anytime
consistency and fast convergence, providing significant speedups over BP to
obtain low-error marginals: up to 25 times on grid models, and up to 6 times on
a real-world natural language processing task.
| Sameer Singh and Sebastian Riedel and Andrew McCallum | null | 1311.3368 | null | null |
Fundamental Limits of Online and Distributed Algorithms for Statistical
Learning and Estimation | cs.LG stat.ML | Many machine learning approaches are characterized by information constraints
on how they interact with the training data. These include memory and
sequential access constraints (e.g. fast first-order methods to solve
stochastic optimization problems); communication constraints (e.g. distributed
learning); partial access to the underlying data (e.g. missing features and
multi-armed bandits) and more. However, currently we have little understanding
how such information constraints fundamentally affect our performance,
independent of the learning problem semantics. For example, are there learning
problems where any algorithm which has small memory footprint (or can use any
bounded number of bits from each example, or has certain communication
constraints) will perform worse than what is possible without such constraints?
In this paper, we describe how a single set of results implies positive answers
to the above, for several different settings.
| Ohad Shamir | null | 1311.3494 | null | null |
Smoothed Analysis of Tensor Decompositions | cs.DS cs.LG stat.ML | Low rank tensor decompositions are a powerful tool for learning generative
models, and uniqueness results give them a significant advantage over matrix
decomposition methods. However, tensors pose significant algorithmic challenges
and tensor analogs of much of the matrix algebra toolkit are unlikely to exist
because of hardness results. Efficient decomposition in the overcomplete case
(where rank exceeds dimension) is particularly challenging. We introduce a
smoothed analysis model for studying these questions and develop an efficient
algorithm for tensor decomposition in the highly overcomplete case (rank
polynomial in the dimension). In this setting, we show that our algorithm is
robust to inverse polynomial error -- a crucial property for applications in
learning since we are only allowed a polynomial number of samples. While
algorithms are known for exact tensor decomposition in some overcomplete
settings, our main contribution is in analyzing their stability in the
framework of smoothed analysis.
Our main technical contribution is to show that tensor products of perturbed
vectors are linearly independent in a robust sense (i.e. the associated matrix
has singular values that are at least an inverse polynomial). This key result
paves the way for applying tensor methods to learning problems in the smoothed
setting. In particular, we use it to obtain results for learning multi-view
models and mixtures of axis-aligned Gaussians where there are many more
"components" than dimensions. The assumption here is that the model is not
adversarially chosen, formalized by a perturbation of model parameters. We
believe this is an appealing way to analyze realistic instances of learning
problems, since this framework allows us to overcome many of the usual
limitations of using tensor methods.
| Aditya Bhaskara, Moses Charikar, Ankur Moitra and Aravindan
Vijayaraghavan | null | 1311.3651 | null | null |
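As background for the abstract above, the following is a minimal Python/NumPy sketch of the basic tensor power-iteration primitive for a symmetric, orthogonally decomposable third-order tensor. This is an assumption-laden illustration of the undercomplete setting only; it is not the paper's smoothed-analysis algorithm for the overcomplete case, and all function names are placeholders.

    import numpy as np

    def tensor_apply(T, v):
        """Contract a symmetric 3-tensor T with v along two modes: T(I, v, v)."""
        return np.einsum('ijk,j,k->i', T, v, v)

    def tensor_power_iteration(T, n_restarts=10, n_iters=100, seed=0):
        """Recover one component (robust eigenvector) of a symmetric 3-tensor."""
        rng = np.random.default_rng(seed)
        d = T.shape[0]
        best_v, best_val = None, -np.inf
        for _ in range(n_restarts):
            v = rng.standard_normal(d)
            v /= np.linalg.norm(v)
            for _ in range(n_iters):
                v = tensor_apply(T, v)
                v /= np.linalg.norm(v)
            # T(v, v, v) is the weight of the recovered component
            val = np.einsum('ijk,i,j,k->', T, v, v, v)
            if val > best_val:
                best_v, best_val = v, val
        return best_v, best_val

In the orthogonal case repeated random restarts plus deflation recover all components; the overcomplete, smoothed setting analyzed in the paper requires the more involved machinery described in the abstract.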
Scalable Influence Estimation in Continuous-Time Diffusion Networks | cs.SI cs.LG | If a piece of information is released from a media site, can it spread, in 1
month, to a million web pages? This influence estimation problem is very
challenging since both the time-sensitive nature of the problem and the issue
of scalability need to be addressed simultaneously. In this paper, we propose a
randomized algorithm for influence estimation in continuous-time diffusion
networks. Our algorithm can estimate the influence of every node in a network
with |V| nodes and |E| edges to an accuracy of $\varepsilon$ using
$n=O(1/\varepsilon^2)$ randomizations and, up to logarithmic factors,
O(n|E|+n|V|) computations. When used as a subroutine in a greedy influence
maximization algorithm, our proposed method is guaranteed to find a set of
nodes with an influence of at least (1-1/e)OPT-2$\varepsilon$, where OPT is the
optimal value. Experiments on both synthetic and real-world data show that the
proposed method can easily scale up to networks of millions of nodes while
significantly improving over previous state-of-the-art methods in terms of the accuracy
of the estimated influence and the quality of the selected nodes in maximizing
the influence.
| Nan Du, Le Song, Manuel Gomez Rodriguez, Hongyuan Zha | null | 1311.3669 | null | null |
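To make the estimated quantity in the abstract above concrete, here is a brute-force Monte Carlo sketch in Python: the influence of a seed set in a continuous-time diffusion network is the expected number of nodes reached within a time window T, and a greedy loop selects seeds by re-estimating influence. The exponential transmission model, function names, and parameters are assumptions for illustration; this is not the paper's randomized estimator with the stated O(n|E|+n|V|) cost.

    import heapq
    import random

    def sample_influence(nodes, edges, seeds, T, n_samples=200):
        """edges: dict u -> list of (v, rate). Returns estimated influence of seeds."""
        total = 0
        for _ in range(n_samples):
            # Dijkstra-style earliest-infection times with sampled transmission delays.
            dist = {s: 0.0 for s in seeds}
            heap = [(0.0, s) for s in seeds]
            heapq.heapify(heap)
            while heap:
                t, u = heapq.heappop(heap)
                if t > dist.get(u, float("inf")):
                    continue
                for v, rate in edges.get(u, []):
                    delay = random.expovariate(rate)  # sampled transmission time
                    if t + delay < dist.get(v, float("inf")):
                        dist[v] = t + delay
                        heapq.heappush(heap, (t + delay, v))
            total += sum(1 for t in dist.values() if t <= T)
        return total / n_samples

    def greedy_maximize(nodes, edges, k, T):
        """Greedily pick k seeds by marginal estimated influence."""
        seeds = []
        for _ in range(k):
            best = max((v for v in nodes if v not in seeds),
                       key=lambda v: sample_influence(nodes, edges, seeds + [v], T))
            seeds.append(best)
        return seeds

The greedy loop inherits the classic (1-1/e)-style guarantee only when the influence estimates are sufficiently accurate, which is precisely where a fast, accurate estimator such as the one proposed in the paper matters.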
Ensemble Relational Learning based on Selective Propositionalization | cs.LG cs.AI | Dealing with structured data requires expressive representation
formalisms, which, however, raise the issue of the computational
complexity of the machine learning process. Furthermore, real world domains
require tools able to manage their typical uncertainty. Many statistical
relational learning approaches try to deal with these problems by combining the
construction of relevant relational features with a probabilistic tool. When
the combination is static (static propositionalization), the constructed
features are considered as boolean features and used offline as input to a
statistical learner, whereas when the combination is dynamic (dynamic
propositionalization), the feature construction and probabilistic tool are
combined into a single process. In this paper we propose a selective
propositionalization method that searches for the optimal set of relational features
to be used by a probabilistic learner in order to minimize a loss function. The
new propositionalization approach has been combined with the random subspace
ensemble method. Experiments on real-world datasets show the validity of the
proposed method.
| Nicola Di Mauro and Floriana Esposito | null | 1311.3735 | null | null |
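To illustrate only the random subspace ensemble step mentioned in the abstract above (not the selective propositionalization itself), here is a minimal Python sketch that trains base learners on random feature subsets of an already propositionalized boolean feature matrix. X, y, the BernoulliNB base learner, and all parameter values are assumptions for illustration; labels are assumed to be non-negative integer class indices so a majority vote via bincount works.

    import numpy as np
    from sklearn.naive_bayes import BernoulliNB

    def random_subspace_ensemble(X, y, n_estimators=10, subspace_frac=0.5, seed=0):
        """Train one base learner per random feature subset."""
        rng = np.random.default_rng(seed)
        n_features = X.shape[1]
        k = max(1, int(subspace_frac * n_features))
        models = []
        for _ in range(n_estimators):
            idx = rng.choice(n_features, size=k, replace=False)
            clf = BernoulliNB().fit(X[:, idx], y)
            models.append((idx, clf))
        return models

    def predict(models, X):
        """Majority vote over the base learners' predictions."""
        votes = np.stack([clf.predict(X[:, idx]) for idx, clf in models]).astype(int)
        return np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, votes)

In the paper's setting the feature columns would come from the proposed selective propositionalization, so the subspace sampling operates over relational features rather than arbitrary attributes.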
Mapping cognitive ontologies to and from the brain | stat.ML cs.LG q-bio.NC | Imaging neuroscience links brain activation maps to behavior and cognition
via correlational studies. Due to the nature of the individual experiments,
based on eliciting neural responses from a small number of stimuli, this link is
incomplete, and unidirectional from the causal point of view. To come to
conclusions on the function implied by the activation of brain regions, it is
necessary to combine a wide exploration of the various brain functions and some
inversion of the statistical inference. Here we introduce a methodology for
accumulating knowledge towards a bidirectional link between observed brain
activity and the corresponding function. We rely on a large corpus of imaging
studies and a predictive engine. Technically, the challenges are to find
commonality between the studies without denaturing the richness of the corpus.
The key elements that we contribute are labeling the tasks performed with a
cognitive ontology, and modeling the long tail of rare paradigms in the corpus.
To our knowledge, our approach is the first demonstration of predicting the
cognitive content of completely new brain images. To that end, we propose a
method that predicts the experimental paradigms across different studies.
| Yannick Schwartz (INRIA Saclay - Ile de France, NEUROSPIN), Bertrand
Thirion (INRIA Saclay - Ile de France, NEUROSPIN), Ga\"el Varoquaux (INRIA
Saclay - Ile de France, LNAO) | null | 1311.3859 | null | null |
Clustering Markov Decision Processes For Continual Transfer | cs.AI cs.LG | We present algorithms to effectively represent a set of Markov decision
processes (MDPs), whose optimal policies have already been learned, by a
smaller source subset for lifelong, policy-reuse-based transfer learning in
reinforcement learning. This is necessary when the number of previous tasks is
large and the cost of measuring similarity counteracts the benefit of transfer.
The source subset forms an `$\epsilon$-net' over the original set of MDPs, in
the sense that for each previous MDP $M_p$, there is a source $M^s$ whose
optimal policy has $<\epsilon$ regret in $M_p$. Our contributions are as
follows. We present EXP-3-Transfer, a principled policy-reuse algorithm that
optimally reuses a given source policy set when learning for a new MDP. We
present a framework to cluster the previous MDPs to extract a source subset.
The framework consists of (i) a distance $d_V$ over MDPs to measure
policy-based similarity between MDPs; (ii) a cost function $g(\cdot)$ that uses
$d_V$ to measure how good a particular clustering is for generating useful
source tasks for EXP-3-Transfer and (iii) a provably convergent algorithm,
MHAV, for finding the optimal clustering. We validate our algorithms through
experiments in a surveillance domain.
| M. M. Hassan Mahmud, Majd Hawasly, Benjamin Rosman, Subramanian
Ramamoorthy | null | 1311.3959 | null | null |
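As a hedged sketch of the bandit-over-policies idea named in the abstract above: a plain EXP3 loop over a set of previously learned source policies, where each episode's return (assumed rescaled to [0, 1]) serves as the bandit reward. The function names and the run_episode interface are hypothetical; EXP-3-Transfer as described by the authors also interleaves learning for the new MDP, which is omitted here.

    import math
    import random

    def exp3_policy_reuse(source_policies, run_episode, n_episodes, gamma=0.1):
        """source_policies: list of callables; run_episode(policy) -> return in [0, 1]."""
        K = len(source_policies)
        weights = [1.0] * K
        for _ in range(n_episodes):
            total = sum(weights)
            probs = [(1 - gamma) * w / total + gamma / K for w in weights]
            i = random.choices(range(K), weights=probs)[0]
            reward = run_episode(source_policies[i])  # assumed rescaled to [0, 1]
            estimate = reward / probs[i]              # importance-weighted reward
            weights[i] *= math.exp(gamma * estimate / K)
        return weights

The clustering contribution of the paper then controls K: by extracting an epsilon-net of source MDPs, the bandit above runs over a small set of representative policies rather than over every previously solved task.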